Horizontal maps of echo power in the lower stratosphere using the MU radar

In recent works, zenithal and azimuthal angle variations of echo power measured by VHF Stratosphere-Troposphere (ST) radars have been analyzed in detail using different radar multi-beam configurations. It was found that the azimuthal angle corresponding to maximum echo power is closely related to the direction of the horizontal wind shear. These properties indicate that local wind shear affects the tilt of the scatterers. Moreover, horizontal maps of echo power collected using a large set of beams steered pulse-to-pulse up to 40 degrees off zenith revealed that the power distribution pattern in the troposphere is often skewed. In this work, a three-dimensional description of echo power variations up to 24 degrees off zenith is shown for measurements in the lower stratosphere (i.e. up to approximately 20 km) using a "sequential multi-beam" (SMB) configuration. Such a description was not possible above the tropopause with classical multi-beam configurations because of the loss of radar sensitivity caused by the limited integration time when a large number of beams is used. This work attempts to complete previous descriptions of the phenomenon with observations in the lower stratosphere, discussed in association with complementary balloon measurements.

Introduction

Flexible VHF Stratosphere-Troposphere (ST) radars such as the Middle and Upper atmosphere (MU) radar (Shigaraki, Japan, 34.85°N, 136.10°E) make it possible to volume-image the distribution of echo power by pulse-to-pulse steering of multiple radar beams. This ability permits us to improve our knowledge of VHF backscattering. The zenith aspect sensitivity, i.e. the decrease of echo power as the radar beam is pointed away from zenith, observed very early after the development of the ST radar technique, indicates that the scatterers are anisotropic at the Bragg scale (i.e. the half radar wavelength). This property was mainly observed in the lower stratosphere (Gage and Green, 1978; Röttger and Liu, 1978). Recently, the azimuthal dependence of echo power was thoroughly analyzed (Tsuda et al., 1997; Worthington and Thomas, 1997; Palmer et al., 1998), and it was found that the azimuth angle for which the power maximum is observed is strongly related to the horizontal wind shear direction. This property implies that the local wind shear affects the azimuthal distribution of the scatterers. Worthington et al. (1999) reported that the phenomenon could be observed at zenith angles larger than 10 degrees in the troposphere, where it was previously believed that the isotropic backscattering mechanism is dominant. Power imbalances between opposing beam directions can reach 10 dB or more at 10 degrees off zenith after 1-h averaging.
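To make the notion of zenith aspect sensitivity concrete, the following minimal Python sketch evaluates a commonly used Gaussian parameterization of the off-zenith power decrease, P(θ) ∝ exp(−θ²/θs²). This model and the anisotropy parameter θs below are illustrative assumptions, not quantities taken from this paper; θs is chosen only so that the fall-off at 24 degrees is of the order of the ~20 dB vertical enhancement reported later for the aspect-sensitive region.

```python
# Hedged sketch: Gaussian model of zenith aspect sensitivity,
# P(theta) ~ exp(-theta^2 / theta_s^2). theta_s is hypothetical.
import numpy as np

theta = np.arange(0.0, 28.0, 4.0)   # zenith angles (deg), SMB-like 4-deg steps
theta_s = 11.0                      # hypothetical anisotropy parameter (deg)

power_db = 10.0 * np.log10(np.exp(-(theta / theta_s) ** 2))
for t, p in zip(theta, power_db):
    print(f"{t:4.1f} deg off zenith: {p:6.1f} dB relative to vertical")
```

With θs = 11 degrees the model gives roughly −21 dB at 24 degrees off zenith, i.e. the same order as the enhancement quoted for the aspect-sensitive region below 14 km.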
From measurements performed using a multi-beam scheme with pulse-to-pulse beam steering (e.g. Palmer et al., 1998; Worthington et al., 1999, 2000), it was clearly shown that horizontal maps of echo power are skewed and sometimes very scattered. This property indicates that the azimuthal dependence of echo power does not merely result from tilting of scattering layers (for example, tilts produced by a long-period gravity wave or the tilt of isentropic surfaces at synoptic scales), but rather results from facets of corrugated layers with a predominant orientation constrained by the local wind shear. It is now suggested that this mechanism is a more likely source than anisotropic turbulence (e.g. Hocking and Hamza, 1997; Worthington et al., 2000). The horizontal maps produce an image of the angular distribution of the facets' tilts, smeared by the radar volume and time averaging. This idea is quite different from the hypothesis proposed by Tsuda et al. (1997), who considered a model of monochromatic gravity waves with large horizontal wavelengths to explain the azimuthal dependence of echo power at a fixed zenith angle. However, the latter model cannot explain the skew of the distribution, which can also produce sinusoidal variations when considering a fixed zenith angle. Worthington et al. (1999) proposed that the characteristics of power maps result rather from a small-scale Kelvin-Helmholtz instability (KHI) mechanism. However, since the azimuthal dependence seems to be quite systematic, it is still not well understood whether the KHI mechanism is always the source of the phenomenon, and whether the various characteristics result from KHI events at different stages of their development. More conclusive results could perhaps be obtained with more quantitative comparisons.

The purpose of this work is to describe the characteristics of the zenithal and azimuthal aspect sensitivity in the lower stratosphere in comparison to balloon measurements. Horizontal power maps have already been obtained with the MU radar up to 40 degrees off zenith, but only in the troposphere, i.e. below approximately 10 km, because the radar sensitivity decreases as the number of beams steered pulse-to-pulse increases. Indeed, if the number of beams is increased by a factor n, the number of coherent integrations is divided by n for the same Nyquist frequency of the Doppler spectra. The received power is then reduced by a factor n². Maps have been obtained using the same technique in the lower stratosphere (up to approximately 15 km), but only for zenith angles smaller than 5 degrees, for the same reason (echoes cannot be detected at larger zenith angles). Therefore, the characteristics of the horizontal power maps at zenith angles larger than 5-10 degrees in the lower stratosphere are still poorly documented. Since the lower stratosphere is usually affected by instabilities due to long-lived gravity waves, the detailed characteristics of power maps may be particularly interesting. Mapping is not possible using a classical multi-beam configuration, except perhaps during exceptional conditions of intense reflectivity episodes related to strong turbulent events. However, such events may not be representative of the dominant features in the lower stratosphere.
We propose a configuration, which we call the "sequential multi-beam" (SMB) configuration, to perform observations up to 24 degrees off zenith by using several consecutive 5-beam DBS modes at different zenith and azimuth angles. SMB permits us to preserve the same radar sensitivity as the standard DBS mode and yet obtain reliable observations up to 20 km or higher. Section 2 describes the advantages and drawbacks of SMB, as well as the radar parameters of the SMB experiment of 11-12 November 2001. Some observational results for stratospheric ranges are described in Sect. 3 and discussed in relation to the dynamic stability conditions deduced from radar and balloon measurements. Finally, concluding remarks are given in Sect. 4.

The SMB configuration

Figure 1 shows the radar beam distribution viewed from above. Sixteen groups of five beams (steered every interpulse period, set at 400 µs), corresponding to a total of 65 beam directions, were used, and observations with each group were performed sequentially. Each group includes the vertical beam for observations overhead, with a time resolution corresponding to the acquisition time of a single group. The oblique beams of the sixteen groups were steered such that observations could be performed up to 24 degrees off zenith in steps of 4 degrees and at azimuths given by (α, α + 90, α + 180, α + 270) degrees, where α is the azimuth of the first oblique beam (see Fig. 1). Such a configuration allows 16 estimates of the wind profile to be obtained. Table 1 lists the other parameters of the SMB experiment. The maximum radial velocity is 40.31 m s⁻¹, in order to avoid Doppler aliasing for beam angles far from the zenith. The maximum speed of the jet stream was about 62 m s⁻¹ at an altitude of 11.40 km.

It should be noted that SMB experiments are carried out to the detriment of the acquisition time. Furthermore, the data for the first and last group are collected with significant time differences, while the data obtained with the classical multi-beam configuration are almost simultaneous for all directions. The total acquisition time needed for operating all sixteen groups is about 10 min (8 min 11 s plus time lags corresponding to switching between groups) with the parameters of the experiment performed on 11-12 November 2001. However, it is worth noting that this is less than twice the total acquisition time of the 320-beam configuration used by Worthington et al. (1999), and that the acquisition time could be reduced to approximately 100 s without the 6 incoherent integrations performed in real time to reduce the amount of data and to improve the detectability of the radar signal.

Since the power imbalances between opposite beams strongly depend on the shear direction (Worthington and Thomas, 1997), it was expected that relevant data sets could be obtained in the lower stratosphere if the data acquisition time were significantly shorter than the dominant inertial gravity wave period, such that the wind shear direction, modulated by the gravity wave, does not change significantly. In order to reduce possible power fluctuations resulting from time lags between observations using different groups, we performed 2-h averaging on the collected maps. This experiment was therefore intended to analyze the persistent features and was not used to describe the more or less sporadic variations which can appear at very high temporal resolution, as shown by Worthington et al. (2000).
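The trade-off between the number of pulse-to-pulse beams and the radar sensitivity described above can be illustrated numerically. In the sketch below, the MU radar operating frequency of 46.5 MHz is a known system parameter but is not stated in this section, and the number of coherent integrations (20) is an inferred, hypothetical value chosen so that the computed Nyquist velocity reproduces the quoted 40.31 m s⁻¹; neither number should be read as coming from Table 1.

```python
# Hedged sketch of the SMB velocity/sensitivity trade-off.
# Assumptions: MU radar frequency 46.5 MHz (system parameter, not stated
# in this section); 20 coherent integrations inferred to match the quoted
# Nyquist velocity of ~40.31 m/s. IPP and beam count are from the text.
C = 299_792_458.0   # speed of light (m/s)
FREQ = 46.5e6       # assumed MU radar frequency (Hz)
IPP = 400e-6        # interpulse period (s), from the text
N_BEAMS = 5         # beams steered pulse-to-pulse per group, from the text
N_COH = 20          # coherent integrations (inferred, hypothetical)

wavelength = C / FREQ                      # ~6.45 m
t_sample = IPP * N_BEAMS * N_COH           # time between Doppler samples (s)
v_nyquist = wavelength / (4.0 * t_sample)  # maximum unaliased radial velocity
print(f"Nyquist velocity: {v_nyquist:.2f} m/s")  # ~40.3 m/s

# Sensitivity argument from the text: multiplying the number of beams by n
# divides the coherent integrations by n (for the same Nyquist frequency),
# so the coherently integrated power drops by a factor n^2.
for n in (1, 2, 4):
    print(f"{N_BEAMS * n:3d} beams -> power reduced by factor {n**2}")
```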
HTI plots of echo power

Figure 2a shows the height-time intensity plot of vertical echo powers for the complete observation period. Strong vertical power enhancements are observed above 9 km, suggesting the tropopause altitude. This result is confirmed by the stability (squared Brunt-Väisälä frequency, N²) profile obtained from balloon temperature and pressure measurements at 00:43 LT (Fig. 3). This profile reveals a significant N² enhancement above 9 km, characteristic of the tropopause. As classically reported with the MU radar (e.g. Tsuda et al., 1986), intense and long-lived echoing layers are detected in the lower stratosphere (Fig. 2b). Their heights vary slowly, while many successive echoing layers show downward motions in the upper troposphere. These layers may result from stratospheric air intrusions (e.g., recently, Vaughan and Worthington, 2000). We did not analyze these structures in detail since their study is not the topic of the present paper. Figure 2b shows echo power profiles averaged from 22:39 to 00:46 LT for zenith angles from 0 to 24 degrees. The oblique-beam echo power profiles have been averaged in azimuth, in order to show the zenithal aspect sensitivity only. The profiles can be divided into three categories: (1) the echoes are particularly intense and aspect sensitive above 9 km and below 14 km (the vertical echo power enhancement is about 20 dB with respect to the power received at 24° off zenith); (2) a nearly isotropic region can be noticed between 16 and 20 km (the vertical enhancement does not exceed 7 dB); a thick and nearly isotropic echoing layer is detected between 17 and 18 km; (3) the very thin peak located at 14.25 km has intermediate characteristics: it is associated with a vertical echo power enhancement similar to case (1), but the received power decreases more slowly with zenith angle than below 14 km (i.e. the angular aspect sensitivity is wider than below 14 km).

The characteristics of echo power maps at two height ranges delimited by the horizontal dotted and dashed lines (13.05-14.25 km and 17.10-18.30 km, respectively) will be analyzed in detail. For simplicity, they will hereafter be called the "AS" (aspect-sensitive) and "NI" (nearly isotropic) regions, respectively. These regions have been chosen because the power maps reveal very specific characteristics and because the AS region contains ranges corresponding to cases (1) and (3) and the NI region to case (2).

Stability conditions

Figure 3c shows the Richardson number (Ri) profile at the radar range resolution (150 m). The Ri parameter describes the dynamic stability of the flow. It is estimated as the ratio between N² (Fig. 3b) and the squared modulus of the wind shear, S² = (∂u/∂z)² + (∂v/∂z)² (Fig. 3a), deduced from radar measurements of the zonal (u) and meridional (v) wind components. Unfortunately, the balloon burst at around 15 km, so that temperature data are not available above this altitude and the dynamic conditions cannot be analyzed for the NI region. The Ri number is everywhere larger than 0.25, indicated by the vertical dotted-dashed line in Fig. 3c (a local Ri smaller than 0.25 is a necessary condition for dynamic instability), but this parameter is scale-dependent, and values larger than 0.25 at a 150-m scale do not necessarily mean the absence of local instabilities. The AS region is roughly, but significantly, associated with a minimum of the Ri number (close to 0.25). This minimum results mainly from a shear enhancement, since the static stability N² does not differ from the stability below the AS region and above 9 km. The presence of shear instabilities is therefore plausible.

Azimuthal dependence of echo power at 12 degrees off zenith

Figure 4 shows a typical example of the height variations of the azimuth dependence of echo power averaged over 2 h, between 22:39 LT and 00:46 LT, at 12 degrees off zenith. The mean echo power value has been subtracted from the observed echo power at each height. A similar example was shown by Tsuda et al. (1997), but at 6 degrees off zenith, between 11 and 15.6 km, and for observation intervals of 137 s. The corresponding wind shear direction measured by the MU radar is also indicated by a red curve. The radius of the red circles indicates the wind shear magnitude. This figure confirms the relationship between the maximum of echo power and the direction of the wind shear vector, as shown previously by Worthington and Thomas (1997). The plots reveal only single-humped height variations; double-humped variations, similar to those revealed by Tsuda et al. (1997, their Fig. 4), have not been found, either because they are rarer or because the phenomenon is more sporadic and disappeared due to time averaging. Similar effects can be deduced from the echo power maps shown in Fig. 5 of Worthington et al. (2000) at smaller zenith angles and with a shorter time resolution.

Above the tropopause, a clockwise rotation of both the wind shear vectors and the direction of the echo power maximum is clearly observed at some altitude ranges (within the AS region, for example). This feature is a clear signature of a dominant inertia-gravity wave of 1-2 km vertical wavelength usually observed above the MU radar location (e.g. Fritts et al., 1988; Sato, 1994). The AS region is associated with the strongest azimuthal power dependence, while this dependence is not so strong within the NI region or between 9 and 13 km, where the aspect sensitivity is similar. It is also worth noting that the selected NI and AS regions correspond to enhanced wind shears, as indicated by the red circles.

Power maps

Figure 5a shows the corresponding horizontal maps of echo power for 9 consecutive gates within the AS region, from 13.05 to 14.25 km. The wind shear, resulting from the average of several estimates from different groups at 8, 12 and 16 degrees off zenith, is indicated by white arrows. The black dots indicate the direction of the radar beams, and the white dashed circles correspond to a zenith angle of 10 degrees, used for standard DBS observations.

Owing to the configuration used, the characteristics of the horizontal distribution of echo power up to 24 degrees off zenith can thus be described in the lower stratosphere, completing the works by Palmer et al. (1998) and Worthington et al. (1999, 2000). As expected from Fig. 4, the relationship between the skew of the distribution and the direction of the wind shear is evident. Both parameters show a gradual clockwise rotation as altitude increases. The altitude range covered by the map series corresponds to one vertical wavelength of the quasi-monochromatic gravity wave. Even at 10 degrees off zenith, the power dependence can still be very significant: for example, a difference of 8-10 dB is observed at 13.50 km between the north and south directions. It is interesting to note that the asymmetry of the power distribution can still reach 5 dB or more at a zenith angle of 24 degrees. These results for the stratosphere (a strongly stratified region) are consistent with those given by Worthington et al. (1999) for the troposphere (a less stratified region). The gates at 14.10 and 14.25 km (corresponding to case 3) and at 13.50 and 13.65 km (case 1) do not reveal significant differences, but the former show a less pronounced skew.

It should be noted that, since the resolution of the grid is not sufficient around the zenith, it is not possible to identify the exact position of the maximum and thus to show whether the skew of the distribution is also related to a tilted structure on average.

Figure 5b shows the power maps for the NI region, between 17.10 and 18.30 km. The scatter of the distribution appears to be more pronounced. These maps are similar to some observed in the troposphere in the present data sets (not shown) and by Worthington et al. (2000). Some maps, at 17.70 km or 18.15 km, for example, seem to suggest a power maximum far away from the zenith, as already noted by Worthington et al. (1999) for weakly zenith-aspect-sensitive echoes in the troposphere. If produced by fully developed 3-D turbulence, these maps suggest that the tilts of the small-scale turbulent irregularities are still oriented predominantly by the local wind shear, but that the distribution of tilt angles is more scattered than in the case where the echoes are produced by stable layers.

Figure 6 shows horizontal echo power distributions without time averaging at 13.50 km. It is interesting to note that all these maps reveal the same gross features as the averaged map shown in Fig. 5a, even though, as expected, they appear more variable. This result tends to indicate that the time lags between the 16 radar configurations for collecting a complete map are not a drawback in practice, and that the spread of the distribution after 2-h averaging does not result from the time averaging. In the present case, the 2-h averaged maps are therefore representative of the main characteristics revealed by maps that would have been obtained with a shorter acquisition time.

Discussion and concluding remarks

In this work, we analyzed the 3-D echo power distribution within the lower stratosphere obtained with a sequential multi-beam experiment in association with balloon observations. The power maps reveal features similar to those reported by Palmer et al. (1998) and Worthington et al. (1999) within the troposphere. The skew of the distribution varies but is observed at all altitudes. The characteristics of the power maps are clearly affected by inertial gravity waves. However, in the troposphere, there is no signature of such waves. It is then important to emphasize that the relationship between the wind shear direction and the direction of the skew of the echo power distribution is not constrained by the presence of a gravity wave. A similar feature is observed within the NI region.
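As a concrete illustration of the stability estimate used in the analysis above, the following minimal Python sketch computes Ri = N²/S² at the 150 m radar range resolution; the wind and N² profiles are hypothetical stand-ins, not the radar or balloon data of this experiment.

```python
# Hedged sketch: Richardson number Ri = N^2 / S^2 from vertical profiles,
# with S^2 = (du/dz)^2 + (dv/dz)^2. All profiles below are hypothetical.
import numpy as np

dz = 150.0                                  # range resolution (m)
z = np.arange(9.0e3, 15.0e3, dz)            # altitudes (m)
u = 40.0 * np.sin(2 * np.pi * z / 1.5e3)    # hypothetical zonal wind (m/s)
v = 40.0 * np.cos(2 * np.pi * z / 1.5e3)    # hypothetical meridional wind (m/s)
N2 = np.full_like(z, 4.0e-4)                # hypothetical stratospheric N^2 (s^-2)

S2 = np.gradient(u, dz) ** 2 + np.gradient(v, dz) ** 2  # squared shear (s^-2)
Ri = N2 / S2

# Ri < 0.25 is a necessary (not sufficient) condition for dynamic instability.
print(f"{np.sum(Ri < 0.25)} of {len(z)} gates have Ri < 0.25")
```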
The strong aspect sensitivity observed between 9 and 14 km is related to a strong azimuthal dependence only in the AS region (13.05-14.25 km), where the Ri number is close to 0.25 (Figs. 4 and 5a). This characteristic seems to indicate that stable layers are distorted by the onset of instabilities within the AS region, while they are not so distorted below 13 km. Since the zenith aspect sensitivity is still strong, it may be suggested that the instabilities are not sufficient to produce wave breaking, after which turbulence could attenuate the zenithal and azimuthal echo power dependence. However, the detailed underlying mechanisms which produce the power maps of Figs. 5a and b are still not clearly identified. The NI region may also correspond to a height range where dynamic instabilities occur. Contrary to the AS region, these instabilities may have produced a thick turbulent region leading to nearly isotropic echoes with a much weaker, but still significant, azimuthal dependence. The absence of rotation of the wind shear within the NI region may indicate that turbulent mixing, possibly resulting from wave breaking, may have affected the wave propagation. However, we do not have Ri information from balloon measurements to confirm this hypothesis.

Significant power imbalances are observed at beam angles larger than 10 degrees when echoes are zenithally aspect sensitive. As with the zenithal dependence of echo power, the observed azimuthal dependencies do not seem to affect dramatically the accuracy of the wind measurements, since no bias was reported by Luce et al. (2001) when comparing GPS and MU radar wind observations. However, small effects, statistically not representative, can still occur in the case of strong azimuthal dependence, and more dramatic effects cannot be ignored for smaller zenith angles. Thus, wind correction using a simple model assuming solely a zenith angle dependence of the echo power (Hocking et al., 1990) may not be suitable. A specific analysis will be performed in a future paper.

Fig. 1. Layout of the 65 radar beams of the SMB experiment (left) and a table showing the azimuth and zenith directions of each DBS set (right).

Fig. 2. (a) Time-height variations of the echo power and (b) height variations of the echo power averaged from 22:39 to 00:46 LT, corresponding to the period delimited by the two solid vertical lines in (a). The dotted and solid horizontal lines in (b) correspond to the analyzed ranges shown in Figs. 5a and b, respectively.

Fig. 3. Height variations of (a) the horizontal wind shear S, (b) the squared Brunt-Väisälä frequency N² and (c) the Richardson number Ri = N²/S². N² is obtained from temperature and pressure measurements by a balloon launched at 00:43 LT, and the wind shear, measured by the MU radar, is averaged from 22:39 to 00:46 LT. The horizontal dotted lines in (c) correspond to the analyzed range shown in Fig. 5a. The vertical dotted-dashed line in (c) indicates the critical Richardson number of 0.25, which is a necessary condition for dynamic instability.

Fig. 4. Height variations of the azimuth dependence of echo power averaged from 22:39 to 00:46 LT. The red and white lines indicate the wind shear direction defined by S = (−∂u/∂z, −∂v/∂z) and S = (∂u/∂z, ∂v/∂z), respectively. The radius of the red circles indicates the wind shear magnitude. The horizontal black dotted and solid lines correspond to the analyzed ranges shown in Figs. 5a and b, respectively.

Fig. 5. Horizontal maps of echo power from (a) 13.05 km to 14.25 km and (b) 17.10 km to 18.30 km, averaged from 22:39 to 00:46 LT. White arrows indicate the wind shear direction and magnitude. The white dashed circles correspond to a zenith angle of 10°. The black dots indicate the direction of the radar beams.
Sun dual theory for bi-continuous semigroups

The sun dual space corresponding to a strongly continuous semigroup is a known concept when dealing with dual semigroups, which are in general only weak$^*$-continuous. In this paper we develop a corresponding theory for bi-continuous semigroups under mild assumptions on the involved locally convex topologies. We also discuss sun reflexivity and Favard spaces in this context, extending classical results by van Neerven.

Introduction

Semigroup theory is a well-established tool in the abstract study of evolution equations. Classically, strongly continuous semigroups of bounded linear operators on Banach spaces (also called C₀-semigroups) are considered, meaning that the semigroup is strongly continuous with respect to the norm topology. This, however, limits the applicability of the theory in spaces such as C_b(R^n) or L^∞, ruling out interesting examples arising from (partial) differential equations. This fact is underlined by Lotz's result [42] asserting that any strongly continuous semigroup on a Grothendieck space with the Dunford-Pettis property is automatically uniformly continuous.

On the other hand, it has long been known that strong continuity fails to be preserved for the dual semigroup (T'(t))_{t≥0} := (T(t)')_{t≥0} in general, and merely translates into weak*-continuity. Nevertheless, the strong continuity of the "pre-semigroup" (T(t))_{t≥0} encodes enough structure to allow for a rich theory. Following first results in the early days of semigroup theory by Phillips [48], Hille-Phillips [27] and de Leeuw [12] (see also Butzer-Berens [5]), intensified research on dual semigroups was conducted in the 1980s, centred around a "Dutch school", in a series of papers by Clément, Diekmann, Gyllenberg, Heijmans and Thieme [6-9, 14] and de Pagter [13]. The renewed interest in dual semigroups was partially driven by applications, e.g. to delay equations [15]. At a peak of these developments, van Neerven [54] finally provided a general comprehensive treatment of the theory, together with many new results clarifying especially the topological aspects; see also [55-57]. Since then, the interest in dual semigroups which fail to be strongly continuous has remained, and we name particularly applications in mathematical neuroscience [52, 53].

The key concept to compensate for the lack of strong continuity of dual semigroups is the notion of the sun dual space and the related sun dual semigroup. More precisely, given a strongly continuous semigroup (T(t))_{t≥0} on a Banach space X, the sun dual space X^⊙ consists of the elements x' in the continuous dual X' such that lim_{t→0+} T'(t)x' = x' in norm. As X^⊙ is closed and T'(t)-invariant, the restrictions of T'(t) to X^⊙ define a strongly continuous semigroup (T^⊙(t))_{t≥0} on X^⊙, an object which is in many facets superior to the dual semigroup.
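Written out, the two central objects just described, together with the Favard space used later in the paper, take the following classical forms; the display is a sketch following van Neerven's monograph cited above and records standard definitions rather than new material of this paper.

```latex
% Classical sun dual space, sun dual semigroup, and Favard space:
\[
  X^{\odot} := \Bigl\{\, x' \in X' \;:\; \lim_{t \to 0^{+}} \lVert T'(t)x' - x' \rVert = 0 \,\Bigr\},
  \qquad
  T^{\odot}(t) := T'(t)\big|_{X^{\odot}},
\]
\[
  \operatorname{Fav}(T) := \Bigl\{\, x \in X \;:\; \limsup_{t \to 0^{+}} \tfrac{1}{t}\,\lVert T(t)x - x \rVert < \infty \,\Bigr\}.
\]
```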
Note that this approach can be viewed as a way to regain symmetry in duality for continuity properties of the semigroup. While this holds trivially for reflexive spaces X, in which case X^⊙ = X', it is not surprising that sun duality comes with an adapted notion of reflexivity, so-called sun reflexivity (or ⊙-reflexivity), which depends on the semigroup under consideration. In particular, if X is ⊙-reflexive with respect to the semigroup (T(t))_{t≥0}, then (T^⊙⊙(t))_{t≥0} can be identified with (T(t))_{t≥0} via the canonical isomorphism j: X → (X^⊙)', x ↦ (x^⊙ ↦ ⟨x^⊙, x⟩). That this framework indeed leads to a meaningful theory is also reflected by the existence of an Eberlein-Shmulyan type theorem due to van Neerven [56], and by de Pagter's characterisation of sun reflexivity [13], which can be seen as a variant of Kakutani's theorem.

About ten years after this flourishing period of dual semigroups, Kühnemund [38, 39] conceptualized semigroups which only satisfy weaker continuity properties through the notion of bi-continuous semigroups. More precisely, the strong continuity was relaxed to hold with respect to a Hausdorff locally convex topology τ coarser than the norm topology on X. Under the additional conditions that τ is sequentially complete on norm-bounded sets and that the dual space of (X, τ) is norming, an exponentially bounded semigroup (T(t))_{t≥0} on X is called τ-bi-continuous if the trajectories T(⋅)x are τ-strongly continuous and locally sequentially τ-equicontinuous on norm-bounded sets. Since the weak*-topology shares these properties, dual semigroups naturally fall into this framework. Thus the question becomes how the construction of the sun dual can be seen in this light. With this paper we would like to answer this question and hence generalise existing results for strongly continuous semigroups in the presence of topological subtleties absent from the classical setting.

The interest in bi-continuous semigroups goes beyond the above-mentioned special case of dual semigroups, as they, for instance, naturally emerge in the study of evolution equations on spaces of bounded continuous functions, most prominently parabolic problems; see e.g. Farkas-Lorenzi [25], Metafune-Pallara-Wacker [45]. In the last decades the abstract theory of bi-continuous semigroups has been further developed and variants of the classical results have been established, such as perturbation results (Farkas [22, 23]), approximation results (Albanese-Mangino [1]) and mean ergodic theorems (Albanese-Lorenzi-Manco [2]). In [24] Farkas defined a proper concept of a dual bi-continuous semigroup by considering a suitable subspace X^○ of X'. In particular, the restriction of the dual semigroup to X^○ is again a σ(X^○, X)-bi-continuous semigroup under some additional topological assumptions.

In this work we develop a sun dual theory for bi-continuous semigroups and discuss its peculiarities with respect to properties of the topologies involved. This generalises the classical case, i.e. strongly continuous semigroups with respect to the norm topology, henceforth simply called "strongly continuous". Apart from the abstract interest in developing a sun dual framework for bi-continuous semigroups, one of our main motivations to provide such generalizations are open problems of the following kind: we aim to extend the following theorem for strongly continuous semigroups to bi-continuous ones. Here Fav(T) denotes the Favard space of (T(t))_{t≥0}, given by Fav(T) := {x ∈ X : limsup_{t→0+} (1/t)‖T(t)x − x‖ < ∞}. Note that [28, Theorem 2.9, p.
152] lists another equivalent condition, which relates to control theory; see also [28, Remark 2.4, p. 148]. Accordingly, the question of whether Theorem 1.1 can be formulated for bi-continuous semigroups is relevant for studying generalizations of control-theoretic notions in non-strongly continuous semigroup settings. The concept of sun dual spaces for strongly continuous semigroups is pivotal in the proof of the non-trivial implication (i) ⇒ (ii) in Theorem 1.1. The argument is based, among other tools, on two characterizations due to van Neerven [54, Theorems 3.2.8, 3.2.9, p. 57]: the first states that an element x ∈ X belongs to Fav(T) if and only if there exists a bounded sequence (y_n)_{n∈N} in X such that lim_{n→∞} R(λ, A)y_n = x for some (all) λ in the resolvent set ρ(A) of A, where R(λ, A) := (λ id − A)^{-1}. The second result provides a further resolvent characterisation of Fav(T).

Let us briefly highlight some of our findings. Starting from Farkas' dual space [24], X^○ := {x' ∈ X' : x' is τ-sequentially continuous on ‖⋅‖-bounded sets}, which is a closed subspace of X' and invariant under the dual semigroup, we define the bi-sun dual space X^• as the space of strong continuity for the restricted dual semigroup T^○(t) := T'(t)|_{X^○}, t ≥ 0. Under additional assumptions (1)-(3) on the Saks space, we can subsequently show that the norm defined by ‖x‖_• := sup{|⟨x^•, x⟩| : x^• ∈ X^•, ‖x^•‖_{X'} ≤ 1} is equivalent to ‖⋅‖. This result, Theorem 4.3, naturally generalises the corresponding known fact for strongly continuous semigroups (see [54, Theorem 1.3.5, p. 7] and the discussion in the previous paragraph). Further, let us point out that the assumptions (1)-(3) are fulfilled, by Theorem 3.8, if (X, γ) is a sequentially complete c₀-barrelled Mazur space, e.g. a sequentially complete Mackey-Mazur space, where γ := γ(‖⋅‖, τ) denotes the mixed topology of Wiweger [61]. We henceforth say that X is •-reflexive with respect to the τ-bi-continuous semigroup (T(t))_{t≥0} if the canonical map j: X → X^{•'}, given by ⟨j(x), x^•⟩ := ⟨x^•, x⟩, maps the space of strong continuity X_cont onto X^{••}. Given the latter property, we show that j: X → X^{•'} is surjective if and only if the unit ball B_{(X,‖⋅‖_•)} is σ(X, X^•)-compact.

The article is organized as follows. In the preparatory Section 2 we set the stage by discussing the topological assumptions and recapping some basics on bi-continuous semigroups, as well as integral notions in this context. With this level of detail we aim at making the presentation rather self-contained, especially for readers less familiar with bi-continuous semigroups. In Sections 3 and 4 we present our approach to dual semigroups of bi-continuous semigroups and the sun dual space, respectively. The short Section 5 discusses the notion of sun reflexivity in this generalised context, and we finish by studying the relation of the obtained results to Favard spaces in Section 6.

Notions and preliminaries

For a vector space X over the field R or C with a Hausdorff locally convex topology τ we denote by (X, τ)' the topological linear dual space, and we just write X' := (X, τ)' if (X, τ) is a Banach space. For two topologies τ₁ and τ₂ on a space X, we write τ₁ ≤ τ₂ if the topology τ₁ is coarser than τ₂. Further, we use the symbol L(X; Y) := L((X, ‖⋅‖_X); (Y, ‖⋅‖_Y)) for the space of continuous linear operators from a Banach space (X, ‖⋅‖_X) to a Banach space (Y, ‖⋅‖_Y), and denote by ‖⋅‖_{L(X;Y)} the operator norm on L(X; Y). If X = Y, we set L(X) := L(X; X).

2.1. Definition ([35, Definition 2.2, p.
3]). Let (X, ‖⋅‖) be a Banach space and τ a Hausdorff locally convex topology on X that is coarser than the ‖⋅‖-topology τ_{‖⋅‖}. Then (a) the mixed topology γ := γ(‖⋅‖, τ) is the finest linear topology on X that coincides with τ on ‖⋅‖-bounded sets and satisfies τ ≤ γ ≤ τ_{‖⋅‖}; (b) the triple (X, ‖⋅‖, τ) is called a Saks space if there exists a directed system of seminorms P_τ that generates the topology τ such that ‖x‖ = sup_{p∈P_τ} p(x) for all x ∈ X.

The mixed topology γ is Hausdorff locally convex, and our definition is equivalent to the one from the literature.

2.3. Definition. Let (X, τ) be a Hausdorff locally convex space over the field K := R or C, Ω ⊂ R a measurable set w.r.t. the Lebesgue measure λ, and L¹(Ω) the space of (equivalence classes of) absolutely Lebesgue integrable functions from Ω to K. A function f: Ω → X is called τ-Pettis integrable if ⟨x', f⟩ ∈ L¹(Ω) for all x' ∈ (X, τ)' and there exists x_Ω(f) ∈ X such that ⟨x', x_Ω(f)⟩ = ∫_Ω ⟨x', f(s)⟩ dλ(s) for all x' ∈ (X, τ)'. In this case x_Ω(f) is unique, due to X being Hausdorff, and we define the τ-Pettis integral ∫_Ω f(s) dλ(s) := x_Ω(f).

2.4. Definition. Let (X, ‖⋅‖, τ) be a sequentially complete Saks space and Ω ⊂ R non-empty. We set C(Ω, (X, τ)) to be the space of continuous functions from Ω to (X, τ).

For a continuous f: [a, b] → (X, τ), f is τ-Riemann integrable on [a, b] in X and the Riemann and the Pettis integral coincide. Furthermore, the τ-Riemann integrability of f implies that the Riemann sums are τ-convergent. They are even ‖⋅‖-bounded, as f is ‖⋅‖-bounded. It follows from [11, I.1.10 Proposition, p. 9] that the Riemann sums are γ-convergent and that their γ-limit coincides with their τ-limit, because γ is stronger than τ. Thus f is γ-Riemann integrable on [a, b] in X, and this integral coincides with the τ-Riemann integral. Now, we only need to prove that f: [a, b] → (X, γ) is continuous. Then it follows as above that f is γ-Pettis integrable on [a, b] in X and that the Riemann and the Pettis integral coincide. Let (x_n)_{n∈N} be a sequence in [a, b]; by [11, I.1.10 Proposition, p. 9] it follows that (f(x_n))_{n∈N} is γ-convergent. (b) The proof is analogous to (a). The condition that ⟨x', f⟩ is improperly Riemann integrable on [a, ∞) for all x' ∈ (X, τ)' then yields that f is improperly τ-Riemann integrable, τ-Pettis integrable and γ-Pettis integrable on [a, ∞) in X, and all four integrals coincide.

A semigroup (T(t))_{t≥0} of operators in L(X) is bi-continuous if, besides the semigroup law, the following hold: (ii) the map [0, ∞) → (X, γ), t ↦ T(t)x, is continuous for all x ∈ X; (iii) (T(t))_{t≥0} is locally sequentially γ-equicontinuous, i.e. for every γ-convergent sequence (x_n)_{n∈N} in X with γ-limit x we have γ-lim_{n→∞} T(t)x_n = T(t)x locally uniformly for all t ∈ [0, ∞). If we want to emphasize the dependence on the Saks space, we say that (T(t))_{t≥0} is a bi-continuous semigroup on (X, ‖⋅‖, τ). [26, Proposition 3.6 (ii), p. 1137], in combination with [61, 2.4.1 Corollary, p. 56], gives that a bi-continuous semigroup (T(t))_{t≥0} on X is exponentially bounded (of type ω), i.e. there exist M ≥ 1 and ω ∈ R such that ‖T(t)‖_{L(X)} ≤ M e^{ωt} for all t ≥ 0; the infimum ω₀ = ω₀(T) of all such ω is called its growth bound (see [38, p. 7]). Due to the exponential boundedness of a bi-continuous semigroup and [11, I.1.10 Proposition, p. 9], we also may rephrase the definition [21, Definition 1.2.6, p. 7] of the generator of a bi-continuous semigroup in terms of the mixed topology.

2.7. Definition. Let (X, ‖⋅‖, τ) be a sequentially complete Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X. The generator (A, D(A)) is defined through the γ-limit of the difference quotients (T(t)x − x)/t (see the display before Theorem 2.8 below). We recall that an element λ ∈ C belongs to the resolvent set ρ(A) of the generator if λ − A: D(A) → X is bijective with (λ − A)^{-1} ∈ L(X). Usually, it is required that Y is a ‖⋅‖-closed subspace (or a Banach space norm-continuously embedded in X) which is (T(t))_{t≥0}-invariant (see [18, Chap. II, Definition, p. 60]), but this is not needed just for the sake of the definition of the part A_Y of A in Y. With these definitions at hand, we recall the following properties of the generator of a bi-continuous semigroup, given in [39, Definition 9, Propositions 10, 11, Theorem 12, Corollary 13, p. 213-215], which are summarised in [4, Theorems 5.5, 5.6, p. 339-340], and which may be rephrased in terms of the mixed topology by [11, I.1.10 Proposition, p. 9] and Proposition 2.5 as well.
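Before stating these properties, it is convenient to record Definition 2.7 in display form; the following is a sketch of the standard formulation (Kühnemund's definition of the generator, rephrased via the mixed topology γ as described above), rather than a verbatim quotation from the paper.

```latex
% Generator of a bi-continuous semigroup, phrased via the mixed topology:
\[
  D(A) := \Bigl\{\, x \in X \;:\; \gamma\text{-}\lim_{t \to 0^{+}}
          \frac{T(t)x - x}{t} \ \text{exists in } X \,\Bigr\},
  \qquad
  Ax := \gamma\text{-}\lim_{t \to 0^{+}} \frac{T(t)x - x}{t}.
\]
```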
2.8. Theorem. Let (X, ‖⋅‖, τ) be a sequentially complete Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X with generator (A, D(A)). Then the following assertions hold:
(a) The generator (A, D(A)) is sequentially γ-closed, i.e. whenever (x_n)_{n∈N} is a sequence in D(A) such that γ-lim_{n→∞} x_n = x and γ-lim_{n→∞} Ax_n = y for some x, y ∈ X, then x ∈ D(A) and Ax = y.
(b) The domain D(A) is sequentially γ-dense, i.e. for each x ∈ X there exists a sequence (x_n)_{n∈N} in D(A) such that γ-lim_{n→∞} x_n = x.
(c) For x ∈ D(A) we have T(t)x ∈ D(A) and T(t)Ax = AT(t)x for all t ≥ 0.
(d) For t > 0 and x ∈ X we have ∫₀ᵗ T(s)x ds ∈ D(A) and A ∫₀ᵗ T(s)x ds = T(t)x − x, where the integrals are γ-Pettis integrals.
(e) For Re λ > ω₀ we have λ ∈ ρ(A) and R(λ, A)x = ∫₀^∞ e^{−λs} T(s)x ds for all x ∈ X, where the integral is a γ-Pettis integral.
(f) For each ω > ω₀ there exists M ≥ 1 such that ‖R(λ, A)^k‖_{L(X)} ≤ M (Re λ − ω)^{−k} for all k ∈ N and Re λ > ω, i.e. the generator (A, D(A)) is a Hille-Yosida operator.
(g) Let X_cont be the space of ‖⋅‖-strong continuity for (T(t))_{t≥0}, i.e. X_cont := {x ∈ X : lim_{t→0+} ‖T(t)x − x‖ = 0}. Then X_cont is a ‖⋅‖-closed, sequentially γ-dense, (T(t))_{t≥0}-invariant linear subspace of X. Moreover, X_cont coincides with the ‖⋅‖-closure of D(A), and (T(t)|_{X_cont})_{t≥0} is the ‖⋅‖-strongly continuous semigroup on X_cont generated by the part A_{X_cont} of A in X_cont.

We added in part (g) that X_cont is sequentially γ-dense in X, which is a consequence of (b).

Dual bi-continuous semigroups

We start this section by recalling the definition of the dual semigroup on X^○ of a bi-continuous semigroup on X given in [24], where for a Saks space (X, ‖⋅‖, τ) we set X^○ := {x' ∈ X' : x' is τ-sequentially continuous on ‖⋅‖-bounded sets}.

3.1. Remark. Let (X, ‖⋅‖, τ) be a Saks space. Then X^○ is a closed linear subspace of the norm dual X' and hence a Banach space by [24, Proposition 2.1, p. 314]. We note that it is assumed in [24, Proposition 2.1, p. 314] that the Saks space (X, ‖⋅‖, τ) is sequentially complete (see [24, Hypothesis A (ii), p. 310-311]), but an inspection of its proof shows that this assumption is not needed.

3.3. Proposition ([24, Proposition 2.4, p. 315], [3, Lemma 1, p. 6]). Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Saks space, ‖⋅‖_{X^○} the restriction of ‖⋅‖_{X'} to X^○ and (T(t))_{t≥0} a bi-continuous semigroup on X with generator (A, D(A)). Then the following assertions hold: (a) The triple (X^○, ‖⋅‖_{X^○}, σ(X^○, X)) is a sequentially complete Saks space. (b) The operators T^○(t) := T'(t)|_{X^○}, t ≥ 0, are well-defined and form a σ(X^○, X)-bi-continuous semigroup on X^○.

Next, we take a closer look at the space X^○ and its relation to the dual space (X, γ)', where γ is the mixed topology of ‖⋅‖ and τ. Both spaces coincide if (X, γ) is a Mazur space. This will be a quite helpful observation in the next sections.

3.5. Remark. Let (X, ‖⋅‖, τ) be a Saks space. Then (X, γ)' is a closed linear subspace of X', in particular a Banach space, and X^○ = X'_{seq-γ}, the space of sequentially γ-continuous functionals, by [11, I.1.10 Proposition, p. 9]; moreover, we always have (X, γ)' ⊂ X^○.

3.6. Proposition. Let (X, ‖⋅‖, τ) be a Saks space such that (X, τ) is a Mazur space and every τ-convergent sequence is ‖⋅‖-bounded. Then every x' ∈ X^○ is sequentially γ-continuous by [11, I.1.10 Proposition, p. 9] and thus τ-continuous, as (X, τ) is a Mazur space; but this implies that x' is γ-continuous, since τ is coarser than γ. Hence X^○ = (X, γ)'.

3.7. Definition. We call a Saks space (X, ‖⋅‖, τ) a Mazur space if (X, γ) is a Mazur space.

Now, let us revisit Definition 3.2 and give sufficient conditions in terms of the mixed topology γ under which the conditions of this definition are fulfilled. For that purpose we recall that a Hausdorff locally convex space (X, ϑ) is called c₀-barrelled if every σ((X, ϑ)', X)-null sequence in (X, ϑ)' is ϑ-equicontinuous (see [30, p. 249], or [59, Definition, p. 353], where such spaces are called sequentially barrelled). If (X, γ) is a sequentially complete Mackey-Mazur space, then it is c₀-barrelled by [59, Proposition 4.3, p. 354], because (X, γ) is sequentially complete.

(a) Let (X, γ) be a Mazur space. Then condition (i) of Definition 3.2 can be characterised in terms of γ: we have X'_γ = X^○ by Remark 3.5, and so the triple (X^○, ‖⋅‖_{X^○}, σ(X^○, X)) is a Saks space by our considerations above Definition 3.2; the claim follows from [61, 2.3.2 Corollary, p. 55], since condition (i) of Definition 3.2 is equivalent to the sequential completeness of (X^○, ‖⋅‖_{X^○}, σ(X^○, X)), and γ^○ = τ_c(X'_γ, (X, ‖⋅‖)) by [35, 3.22].

Let us come to some examples of sequentially complete d-consistent Mazur-Saks spaces. First, we recall some notions from general topology. A completely regular space Ω is called a k_R-space if any map f: Ω → R whose restriction to each compact K ⊂ Ω is continuous is already continuous on Ω (see [46, p. 487]). In particular, locally compact Hausdorff spaces clearly are Hausdorff k_R-spaces. In addition, Polish spaces, i.e. separable completely metrisable spaces, are Hausdorff k_R-spaces by [29, Proposition 11.5, p. 181] and [19, 3.3.20, 3.3.21 Theorems, p. 152]. We recall that a Hausdorff space Ω is called hemicompact if there is a sequence (K_n)_{n∈N} of compact sets in Ω such that for every compact set K ⊂ Ω there is N ∈ N with K ⊂ K_N (see [19, Exercises 3.4.E, p. 165]). For instance, σ-compact locally compact Hausdorff spaces are hemicompact Hausdorff k_R-spaces by [19, Exercises 3.8.C (b), p. 195]. Further, there are hemicompact Hausdorff k_R-spaces that are neither locally compact nor metrisable by [58, p. 267].

Second, let C_b(Ω) be the space of bounded continuous functions on a completely regular Hausdorff space Ω and ‖⋅‖_∞ the supremum norm. We denote by τ_co the compact-open topology, i.e.
the topology of uniform convergence on compact subsets of Ω, which is induced by the directed system of seminorms P_{τ_co} given by p_K(f) := sup_{x∈K} |f(x)| for compact K ⊂ Ω. Let V denote the set of all non-negative bounded functions ν on Ω that vanish at infinity, i.e. for every ε > 0 the set {x ∈ Ω : ν(x) ≥ ε} is compact. Let β₀ be the Hausdorff locally convex topology on C_b(Ω) that is induced by the seminorms p_ν(f) := sup_{x∈Ω} ν(x)|f(x)| for ν ∈ V. Due to [50, Theorem 2.4, p. 316] we have γ(‖⋅‖_∞, τ_co) = β₀. Let M_t(Ω) denote the space of bounded Radon measures on a completely regular Hausdorff space Ω and ‖⋅‖_{M_t(Ω)} the total variation norm (see e.g. [40, p. 439-440]).

Furthermore, a Banach space (X, ‖⋅‖) is called weakly compactly generated (WCG) if there is a σ(X, X')-compact set K ⊂ X such that X is the ‖⋅‖-closure of span(K) (see [20, Definition 13.1, p. 575]). A Banach space (X, ‖⋅‖) is called a strongly weakly compactly generated (SWCG) space if there exists a σ(X, X')-compact set K ⊂ X such that for every σ(X, X')-compact set L ⊂ X and ε > 0 there is n ∈ N with L ⊂ (nK + εB_{‖⋅‖}), by [49, p. 387]. In particular, every SWCG space is a WCG space. Moreover, we recall that a Banach space (X, ‖⋅‖) has an almost shrinking basis if it has a Schauder basis such that its associated sequence of coefficient functionals forms a Schauder basis of (X', µ(X', X)), where µ(X', X) is the Mackey topology on X' (see [31, p. 75]).

The generator

4.1. Corollary. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Saks space, ‖⋅‖_{X^○} the restriction of ‖⋅‖_{X'} to X^○ and (T(t))_{t≥0} a bi-continuous semigroup on X with generator (A, D(A)). We define the bi-sun dual X^• := {x^○ ∈ X^○ : lim_{t→0+} ‖T^○(t)x^○ − x^○‖_{X^○} = 0}. Then (T^•(t))_{t≥0} := (T^○(t)|_{X^•})_{t≥0} is the ‖⋅‖_{X^○}-strongly continuous semigroup on X^• generated by the part A^• of A^○ in X^•.

Proof. We only need to prove ω₀(T^•) ≤ ω₀(T). The rest of the corollary is a direct consequence of Theorem 2.8 (g) and Proposition 3.3. We note that ‖T^•(t)‖_{L(X^•)} ≤ ‖T'(t)‖_{L(X')} = ‖T(t)‖_{L(X)} for all t ≥ 0, yielding ω₀(T^•) ≤ ω₀(T).

Let us comment on the definition of X^{••} and its relation to X^{•○} := (X^•)^○ and (X^•)^•.

4.5. Remark. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X. The triple (X^•, ‖⋅‖_{X^•}, σ(X^•, X^{•'})) is a Saks space, where ‖⋅‖_{X^•} denotes the restriction of ‖⋅‖_{X'} to X^•, and we consider T^{•'}(t) := (T^•(t))' for all t ≥ 0. Like in [54, Corollary 1.3.6, p. 8], we can consider X as a subspace of X^{•'}.

4.6. Corollary. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Mazur-Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X. Then the canonical map j: X → X^{•'}, given by ⟨j(x), x^•⟩ := ⟨x^•, x⟩, is injective and satisfies j(X_cont) = X^{••} ∩ j(X).

Proof. j is clearly linear. If j(x) = 0 for some x ∈ X, then ⟨x^•, x⟩ = 0 for all x^• ∈ X^•, which implies that ‖x‖_• = 0 and thus x = 0 by Theorem 4.3. The inclusion j(X_cont) ⊂ (X^{••} ∩ j(X)) follows directly from the definitions of X_cont (see Theorem 2.8 (g)) and X^{••}. For the converse inclusion, let x ∈ X with j(x) ∈ X^{••}. We note that ⟨T^{•'}(t)j(x), x^•⟩ = ⟨x^•, T(t)x⟩ for any t ≥ 0 and x^• ∈ X^•, i.e. T^{•'}(t)j(x) = j(T(t)x); since ‖⋅‖_• and ‖⋅‖ are equivalent by Theorem 4.3, the ‖⋅‖_{X^{•'}}-continuity of t ↦ T^{•'}(t)j(x) at t = 0 implies x ∈ X_cont, proving the rest of our statement.

In our next theorem we investigate the relation between the resolvent sets ρ(A), ρ(A^•) and ρ(A^{•'}), resp. the resolvents R(λ, A), R(λ, A^•) and R(λ, A^{•'}).

4.7. Theorem. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X with generator (A, D(A)). If λ ∈ ρ(A) is such that R(λ, A)^• X^• ⊂ D(A^•), then we have j(R(λ, A)x) = R(λ, A^{•'})j(x) for all x ∈ X, with the canonical map j: X → X^{•'}.
Proof. (a) Let λ ∈ ρ(A^•) and suppose that (λ − A)x = 0 for some x ∈ X. For every x^○ ∈ D(A^○) we then have ⟨(λ − A^○)x^○, x⟩ = ⟨x^○, (λ − A)x⟩ = 0, which means that x annihilates the range of λ − A^○. In particular, x annihilates X^• = (λ − A^•)D(A^•). Thus we have ‖x‖_• = 0 and so x = 0 by Theorem 4.3, implying the injectivity of λ − A.

Next, we show that the range of λ − A is ‖⋅‖-dense and ‖⋅‖-closed, which then implies the surjectivity of λ − A. Suppose that the range of λ − A is not ‖⋅‖-dense. Then, since X^○ separates the points of X, there is some x^○ ∈ X^○ with x^○ ≠ 0 such that ⟨(λ − A)x, x^○⟩ = 0 for any x ∈ D(A). It follows that ⟨Ax, x^○⟩ = ⟨x, λx^○⟩ for all x ∈ D(A), and so x^○ ∈ D(A^○) by Proposition 3.3 (b). We deduce that (λ − A^○)x^○ = 0, as D(A) is sequentially γ-dense by Theorem 2.8 (b), which yields a contradiction to x^○ ≠ 0.

Let us turn to the ‖⋅‖-closedness of the range of λ − A. Let (x_n)_{n∈N} be a sequence in D(A) such that lim_{n→∞} (λ − A)x_n = y for some y ∈ X. We derive from the estimate above that (x_n)_{n∈N} is a ‖⋅‖-Cauchy sequence, say with limit z ∈ X, because ‖⋅‖ and ‖⋅‖_• are equivalent norms on X by Theorem 4.3. Since (A, D(A)) is sequentially γ-closed by Theorem 2.8 (a), in particular ‖⋅‖-closed, as γ is coarser than the ‖⋅‖-topology, we get z ∈ D(A) and y = (λ − A)z. Hence λ − A is bijective, and (3) yields that (λ − A)^{-1} ∈ L(X) as well.

Let us turn to sufficient conditions for R(λ, A)^• X^• ⊂ D(A^•) to hold in Theorem 4.7 (a).

4.8. Proposition. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X with generator (A, D(A)). (a) If Re λ > ω₀(T), then we have λ ∈ ρ(A) and R(λ, A)^• X^• ⊂ D(A^•).

Proof. (a) Let Re λ > ω₀(T). Then we have Re λ > ω₀(T^•) by Corollary 4.1 and thus λ ∈ ρ(A) ∩ ρ(A^•) by Theorem 2.8 (e). (b) Let P_γ be a directed system of seminorms that generates the mixed topology γ. Since X^○ = (X, γ)' by Remark 3.5, there are p_γ ∈ P_γ and C ≥ 0 such that |⟨x^○, x⟩| ≤ C p_γ(x) for all x ∈ X. Due to the continuity of R(λ, A): (X, γ) → (X, γ), there are p̃_γ ∈ P_γ and C̃ ≥ 0 such that |⟨x^○, R(λ, A)x⟩| ≤ C̃ p̃_γ(x) for all x ∈ X. For y^○ := R(λ, A)'x^○ this means that y^○ ∈ (X, γ)' = X^○. A direct computation for x ∈ D(A), together with the sequential γ-density of D(A) from Theorem 2.8 (b), then shows that R(λ, A)'x^○ ∈ D(A^○). We deduce the claimed inclusion.

Part (a) shows that the continuity of R(λ, A): (X, γ) → (X, γ) need not be a necessary condition for R(λ, A)^• X^• ⊂ D(A^•) for all Re λ > ω₀(T); whether it is necessary in general is an open question. Another open question is whether one actually has R(λ, A)^• X^• ⊂ D(A^•) for all λ ∈ ρ(A). The answer is affirmative if τ coincides with the ‖⋅‖-topology, because then γ also coincides with the ‖⋅‖-topology, which gives that R(λ, A): (X, γ) → (X, γ) is continuous for all λ ∈ ρ(A). Therefore Proposition 4.8 (b) and Theorem 4.7 imply [54, Theorem 1.4.2, p. 10] (see also [27, Theorem 14.3.3, p. 425]).

Here |Ω| denotes the cardinality of Ω. Let M ⊂ ℓ¹ be σ(ℓ¹, ℓ^∞)-compact and absolutely convex, and consider the corresponding set M_λ for x ∈ ℓ^∞ and λ ∉ σ(A). Now, we only need to show that M_λ is σ(ℓ¹, ℓ^∞)-compact and absolutely convex. First, we note that C_λ := sup_{n∈N} of the corresponding quantities is finite. Due to the characterisation of σ(ℓ¹, ℓ^∞)-compactness above, it remains to show that M_λ is uniformly absolutely summable. Let ε > 0. Since M is uniformly absolutely summable, there is δ > 0 such that the required smallness estimate holds for all Ω ⊂ N with |Ω| < δ and all y ∈ M. Further, it is easy to check that M_λ is absolutely convex, because M is absolutely convex. Hence R(λ, A) is µ(ℓ^∞, ℓ¹)-continuous by (6) for all λ ∈ ρ(A). Therefore Example 3.9 (b) and Proposition 4.8 apply.
Proof of Proposition 4.10. By assumption there is some t₀ > 0 such that x ∉ ⋃_{0≤r≤t₀} cl_{σ(X,X^○)}(G_r). Since the complement of the latter set is σ(X, X^○)-open, there are some n ∈ N, x^○_i ∈ X^○ for 1 ≤ i ≤ n, and ε > 0 such that the σ(X, X^○)-neighbourhood V of x, given by V := {y ∈ X : |⟨x^○_i, y − x⟩| < ε for all 1 ≤ i ≤ n}, is contained in that complement. Since X^○ = (X, γ)' by Remark 3.5, for every 1 ≤ i ≤ n there are C_i > 0 and p_{γ_i} ∈ P_γ such that |⟨x^○_i, z⟩| ≤ C_i p_{γ_i}(z) for all z ∈ X, where P_γ is a directed system of seminorms that generates the mixed topology γ. From P_γ being directed it follows that there are C ≥ 1 and p_γ ∈ P_γ such that |⟨x^○_i, z⟩| ≤ C p_γ(z) for all z ∈ X and 1 ≤ i ≤ n. By the proof of Theorem 4.3 we know that γ-lim_{t→0+} T(t)g = g uniformly on ‖⋅‖-bounded sets. Thus there is some 0 < t₁ ≤ t₀ such that p_γ(T(r)g − g) is small enough for all 0 ≤ r ≤ t₁ and g ∈ G. We claim that Ṽ ∩ G = ∅, where Ṽ denotes the neighbourhood V with ε replaced by ε/2. Indeed, for g ∈ G there is some 0 ≤ r ≤ t₁ against which the estimates above can be applied, which shows that Ṽ ∩ G = ∅ and proves the claim.

Now, we generalise the definition of (weak) equicontinuity w.r.t. a norm-strongly continuous semigroup from [54, p. 25] and [54, Proposition 2.2.2, p. 26] to the bi-continuous setting.

4.11. Definition. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X. We say that a set G ⊂ X is γ-(T(t))_{t≥0}-equicontinuous if the set {t ↦ T(t)g : g ∈ G} is γ-equicontinuous at t = 0. We say that G is σ(X, X^○)-(T(t))_{t≥0}-equicontinuous if for each x^○ ∈ X^○ the set {t ↦ ⟨x^○, T(t)g⟩ : g ∈ G} is equicontinuous at t = 0.

4.12. Remark. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Saks space, G ⊂ X and (T(t))_{t≥0} a bi-continuous semigroup on X.

Proof. The inclusion ⊂ is clear, since G₀ = G. We prove the converse inclusion ⊃ by contraposition. Let x ∉ cl_{σ(X,X^○)}(G). We have to show that there is some t₀ > 0 such that x ∉ ⋃_{0≤r≤t₀} cl_{σ(X,X^○)}(G_r). As in Proposition 4.10, there are some n ∈ N, x^○_i ∈ X^○, 1 ≤ i ≤ n, and ε > 0 such that the corresponding σ(X, X^○)-neighbourhood V of x is disjoint from G. By the σ(X, X^○)-(T(t))_{t≥0}-equicontinuity there is t₀ > 0 such that for every 0 ≤ r ≤ t₀, g ∈ G and 1 ≤ i ≤ n the quantity |⟨x^○_i, T(r)g − g⟩| is small enough. This yields the corresponding estimate for every 0 < r ≤ t₀ and g ∈ G. We deduce that Ṽ ∩ G_r = ∅ for all 0 < r ≤ t₀, where Ṽ denotes the neighbourhood V with ε replaced by ε/2. This finishes the proof, because x ∈ Ṽ.

4.16. Corollary. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Mazur-Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X. Then a σ(X, X^○)-(T(t))_{t≥0}-equicontinuous sequence in X is σ(X, X^○)-convergent if and only if it is σ(X, X^•)-convergent.

Proof. The implication ⇒ is obvious, because σ(X, X^○) is a finer topology than σ(X, X^•). Let us turn to the implication ⇐. Let (x_n)_{n∈N} be a σ(X, X^○)-(T(t))_{t≥0}-equicontinuous sequence in X that is σ(X, X^•)-convergent to some x ∈ X. Then the set G := {x_n : n ∈ N} ∪ {x} is the σ(X, X^•)-closure of {x_n : n ∈ N}, and so its σ(X, X^○)-closure by Corollary 4.14 as well. Hence G is also σ(X, X^○)-(T(t))_{t≥0}-equicontinuous by Remark 4.12 (c). Let V be a σ(X, X^○)-neighbourhood of x; then V ∩ G is a σ(X, X^•)-neighbourhood of x in G by Corollary 4.15. This implies that all but finitely many x_n lie in (V ∩ G) ⊂ V, which is what we had to show.

Now, we give a class of sets to which the three preceding corollaries can be applied, due to Remark 4.12 (a), if (X, γ) is a Mazur space.

4.17. Proposition. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X with generator (A, D(A)). Let H ⊂ X be ‖⋅‖-bounded and λ ∈ ρ(A). Then R(λ, A)H is γ-(T(t))_{t≥0}-equicontinuous.

Proof. Let P_γ be a directed system of seminorms that generates the mixed topology γ. Due to [37, Lemma 5.5 (a), p. 2680] and [35, Remark 2.3 (c), p. 3]
we may choose P_γ such that ‖x‖ = sup_{p_γ∈P_γ} p_γ(x) for all x ∈ X. We start by noting that T(t)R(λ, A)h − R(λ, A)h = ∫₀ᵗ T(s)AR(λ, A)h ds for all t > 0 and h ∈ H by Theorem 2.8 (c) and (d). For any p_γ ∈ P_γ we get p_γ(T(t)R(λ, A)h − R(λ, A)h) ≤ t M e^{ωt} ‖AR(λ, A)‖_{L(X)} ‖h‖, since (T(t))_{t≥0} is exponentially bounded and AR(λ, A) ∈ L(X), because AR(λ, A)x = λR(λ, A)x − x for all x ∈ X. Since H is ‖⋅‖-bounded, there is C > 0 such that ‖h‖ ≤ C for all h ∈ H, which yields p_γ(T(t)R(λ, A)h − R(λ, A)h) ≤ t M e^{ωt} ‖AR(λ, A)‖_{L(X)} C for all t > 0, h ∈ H and p_γ ∈ P_γ. This means that R(λ, A)H is γ-(T(t))_{t≥0}-equicontinuous at t = 0.

Proposition 4.17, in combination with Remark 4.12 (b), generalises [54, Proposition 2.2.6, p. 27]. The next proposition transfers one direction of [54, Corollary 2.2.8, p. 28] to the bi-continuous setting.

4.18. Proposition. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Mazur-Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X with generator (A, D(A)). Let G ⊂ X be σ(X, X^•)-compact. Then the following assertions hold:

Proof. (a) Let G ⊂ X be σ(X, X^•)-compact. We may regard G as a subset of X^{•'} via the canonical map j: X → X^{•'} from Corollary 4.6. Then G is σ(X^{•'}, X^•)-compact and thus ‖⋅‖_{X^{•'}}-bounded by the uniform boundedness principle, implying ‖⋅‖-boundedness by Corollary 4.6. The rest of statement (b) is a consequence of Proposition 4.8 (a) and (b).

Remark. (a) Let (X, ‖⋅‖) be a Banach space. For a ‖⋅‖-strongly continuous semigroup (T(t))_{t≥0} on X we have X_cont = X and X^{••} = X^{⊙⊙}. Thus X is •-reflexive w.r.t. (T(t))_{t≥0} if and only if it is ⊙-reflexive w.r.t. (T(t))_{t≥0}. (b) One might object to calling the property j(X_cont) = X^{••} "•-reflexivity", as it is not symmetric. However, our main point in studying this property lies in its value for describing the Favard space Fav(T) and its relation to the generator (A, D(A)) of (T(t))_{t≥0} (and by part (a), it is indeed a reasonable name for this property).

First, we study the relation between a bi-continuous semigroup and its restriction to its space of strong continuity with regard to (bi-)sun reflexivity.

5.3. Proposition. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Mazur-Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X. Then the following assertions hold: (a) T^{••}(t)j(x) = j(T(t)x) for all t ≥ 0 and x ∈ X_cont. (b) The maps ι: X^• → X^⊙_cont, ι(x^•) := x^•|_{X_cont}, and κ: X^{⊙⊙}_cont → X^{••}, κ(y) := y ∘ ι, are well-defined, linear and continuous, and ι is injective. In particular, we have continuous embeddings.

Proof. (a) We note that j(X_cont) ⊂ X^{••} by Corollary 4.6 and T(t)X_cont ⊂ X_cont for all t ≥ 0 by Theorem 2.8 (g), which implies ⟨T^{••}(t)j(x), x^•⟩ = ⟨j(x), T^•(t)x^•⟩ = ⟨x^•, T(t)x⟩ = ⟨j(T(t)x), x^•⟩ for any t ≥ 0, x ∈ X_cont and x^• ∈ X^•. (b) Due to Theorem 2.8 (g) and Remark 3.5, X_cont is sequentially γ-dense in X and X^○ = X'_{seq-γ}. Thus the continuous linear map ι₀: X^○ → (X_cont)', ι₀(x^○) := x^○|_{X_cont}, is injective, and we note that ι = ι₀|_{X^•}. From T^○(t)x^○ = T'(t)x^○ for all t ≥ 0 and x^○ ∈ X^○ it follows that ι₀(X^•) ⊂ X^⊙_cont. Thus we get y ∘ ι ∈ X^{•'} for any y ∈ (X^⊙_cont)', and the claim follows by Corollary 4.6, where we identified X_cont and X with subspaces of X^{•'} via j.

The Favard space

We begin this section with the definition of the Favard space.

6.1. Definition. Let (X, ‖⋅‖, τ) be a sequentially complete Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X. Then the Favard space (class) of (T(t))_{t≥0} is defined by Fav(T) := {x ∈ X : limsup_{t→0+} (1/t)‖T(t)x − x‖ < ∞}.

6.2. Remark. Let (X, ‖⋅‖, τ) be a sequentially complete Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X. Then the following hold:
(a) It is obvious from the definition of the generator (A, D(A)) that D(A) ⊂ Fav(T). (b) From ‖T(t)x − x‖ = t · (1/t)‖T(t)x − x‖ for all t > 0 and x ∈ X, we obtain Fav(T) ⊂ X_cont, where X_cont is the space of ‖⋅‖-strong continuity of (T(t))_{t≥0} from Theorem 2.8 (g).

Our goal is to characterise those bi-continuous semigroups on X for which Fav(T) = D(A) holds. A class of bi-continuous semigroups for which this holds consists of the dual semigroups of norm-strongly continuous semigroups.

6.3. Example. Let (X, ‖⋅‖) be a Banach space and (S(t))_{t≥0} a ‖⋅‖-strongly continuous semigroup on X with generator (A, D(A)). Then (S'(t))_{t≥0} is a bi-continuous semigroup on (X', ‖⋅‖_{X'}, σ(X', X)) by [38].

Next, we present a proposition that extends [54, Theorem 3.2.3, p. 55] to the bi-continuous setting.

6.6. Proposition. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Mazur-Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X with generator (A, D(A)). Then Fav(T) = D(A^{•'}) ∩ X, where X is identified with its image j(X) in X^{•'} by Corollary 4.6.

Proof. The definitions of the Favard space and of T^{••} yield one inclusion. Since X_cont = X^{••} ∩ X by Corollary 4.6 again and Fav(T) ⊂ X_cont by Remark 6.2 (b), the statement is proved.

Next, we show that the second inclusion is a consequence of the equation
(1/t) ∫₀ᵗ T(s)x ds = R(λ, A)[ (λ/t) ∫₀ᵗ T(s)x ds − (1/t)(T(t)x − x) ]   (8)
for all t > 0 and x ∈ X, which we get from Theorem 2.8 (d). Indeed, take x ∈ R(λ, A^{•'})B_{X^{•'}} ∩ X. Due to Proposition 6.6 we have x ∈ Fav(T). So, since x ∈ Fav(T), (T(t))_{t≥0} is exponentially bounded, ‖⋅‖ = sup_{p_γ∈P_γ} p_γ on X for some directed system of seminorms P_γ that generates γ, and ‖⋅‖_• is equivalent to ‖⋅‖ by Theorem 4.3, the right-hand side of (8) remains ‖⋅‖_•-bounded as t → 0+, whereas the left-hand side γ-converges to x (as a sequence with t = t_n for any (t_n)_{n∈N} with t_n → 0+) by the proof of Theorem 4.3. Thus there is n ∈ N such that x lies in the sequential γ-closure of nR(λ, A)B_{(X,‖⋅‖_•)}.

Due to the equivalence of ‖⋅‖_• and ‖⋅‖ there is M ≥ 0 such that B_{‖⋅‖} ⊂ B_{(X,‖⋅‖_•)} ⊂ M B_{‖⋅‖}, which yields that the lemma above is still valid if ‖⋅‖_• is replaced by ‖⋅‖. The next theorem is a generalisation of [54, Theorem 3.2.8, p. 57] and describes the space Fav(T) in terms of approximation by elements of D(A).

6.9. Theorem. Let (X, ‖⋅‖, τ) be a sequentially complete d-consistent Mazur-Saks space and (T(t))_{t≥0} a bi-continuous semigroup on X with generator (A, D(A)). Then the following assertions are equivalent for x ∈ X: (i) x ∈ Fav(T); (ii) for some (all) λ ∈ ρ(A) such that R(λ, A)^• X^• ⊂ D(A^•) there exists a ‖⋅‖-bounded sequence (y_n)_{n∈N} in X with γ-lim_{n→∞} R(λ, A)y_n = x; (iii) for some (all) λ ∈ ρ(A) such that R(λ, A)^• X^• ⊂ D(A^•) there exist a ‖⋅‖-bounded sequence (y_n)_{n∈N} in X and k ∈ N₀ with γ-lim_{n→∞} R(λ, A)^{k+1} y_n = R(λ, A)^k x.

(i)⇒(iv): B_{X^{•'}} is σ(X^{•'}, X^•)-compact by the Banach-Alaoglu theorem. By assumption we may identify X with X^{•'}, as well as B_{X^{•'}} with B_{(X,‖⋅‖_•)}, via j, because j is an isometry as a map from (X, ‖⋅‖_•) to (X^{•'}, ‖⋅‖_{X^{•'}}).
The first part of our statement follows from Proposition 6.6. Let us consider the second part. Let (X, γ) be semi-reflexive. Then X is •-reflexive w.r.t. (T(t))_{t≥0} by Proposition 5.5 and X = X^{•'} via the canonical map j. Hence we have Fav(T) = D(A^{•'}) by the first part of our statement. As D(A) ⊂ Fav(T) by Remark 6.2 (a), we only need to prove that D(A^{•'}) ⊂ D(A). Let Re λ > ω₀(T). Then it follows from Theorem 4.7 (c) and Proposition 4.8 (a), in combination with Theorem 2.8 (d), Proposition 3.3 (b) and Corollary 4.1, that R(λ, A^{•'}) maps j(X) into j(D(A)); moreover x ∈ X_cont, as ‖⋅‖_• and ‖⋅‖ are equivalent by Theorem 4.3. See Theorem 6.12, which implies that Fav(T) = D(A) if one (and thus both) of the assertions holds. In analogy to strongly continuous semigroups, we are able to show in Theorem 6.10 that the domain of the semigroup generator equals the Favard space if and only if the set R(λ, A)B_{(X,‖⋅‖_•)} is closed with respect to τ. The main results are illustrated by various natural classes of examples.
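The characterisation just described can be stated compactly; the display below is a sketch of Theorem 6.10 as paraphrased above (the precise hypotheses on λ are those of the theorem, not reproduced here).

```latex
% Sketch of the characterisation in Theorem 6.10 (for suitable lambda):
\[
  \operatorname{Fav}(T) = D(A)
  \iff
  R(\lambda, A)\, B_{(X,\, \lVert \cdot \rVert_{\bullet})}
  \ \text{is } \tau\text{-closed}.
\]
```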
10,061.8
2022-03-23T00:00:00.000
[ "Mathematics" ]
New Trends and Issues Proceedings on Humanities and Social Sciences

While the ubiquity and usage of Mobile Instant Messaging (MIM) applications such as WhatsApp have recorded unprecedented growth, their role for teaching and learning remains unclear. To address this gap, this paper systematically reviews empirical studies by examining the educational effects, designs, tools and settings of MIM. For this purpose, the databases of PsycINFO, ERIC, Ovid MEDLINE, Web of Science and Google Scholar were systematically searched. Thematic analysis was adopted to analyze and synthesize qualitative data. Findings suggest that MIM is primarily used as a group learning platform to enhance pre-existing education formats in a wide range of subjects. The use of MIM in education settings augments the transactional roles of students and requires learners and educators to balance the social and cognitive dimensions of interactional engagement.

Introduction, background and research questions

This systematic review examines the role of MIM for teaching and learning, especially in view of the paucity of research on this subject that takes such a systematic approach. The use of MIM applications such as WhatsApp, Snapchat, iMessage, KaKaoTalk and WeChat has grown tremendously in the last five years and represents a dominant mode of contemporary communication. For example, WhatsApp, an instance of MIM, is considered to be one of the most trafficked applications on the internet. An estimated population of 800 million users spend on average more than three hours per week on WhatsApp, exchanging massive amounts of data. While the scope of functionalities of MIM is constantly expanding, contemporary applications typically allow for real-time and asynchronous communication between two or more users using text, audio, video, emoticons and URLs, i.e. links to additional online resources. Main features include pop-up mechanisms and sound or vibration alerts that immediately notify users of incoming messages; presence features that display information about online or offline status; and profile information or status messages that are used to construct the users' identity, for example by reflecting changes in their mood (Quan-Haase, 2008). In the extant literature, there is a surprising knowledge gap about the role of MIM for authentic teaching and learning. Rambe and Bere (2013a) note that despite the popularity of mobile learning, MIM remains one of the least researched and understood applications in higher education. Apart from Cameron and Webster's (2005) non-systematic overview of empirical research on instant messaging, there is only one previous systematic review, conducted in 2008, which summarizes research literature on the use of instant messaging in campus life. In this work, Quan-Haase (2008) observes that students use IM predominantly for social purposes, i.e. maintaining distant and proximate social ties. From an education perspective, she observes that, while it can be helpful for students to interact with peers, lecturers, librarians, technicians and other academic staff in real time and over long distances, there are a number of concerns. These include the students' "improper" writing using IM, and the detrimental effects of distraction and multitasking on academic performance. In that sense, IM is often portrayed as an ambivalent technology that supports and hinders student academic work simultaneously (Quan-Haase, 2008; Rambe & Bere, 2013a; Rambe & Nel, 2014).
In the absence of previous systematic reviews on MIM, the current study examines the role of MIM for teaching and learning by addressing the following research questions: RQ1: What are the educational designs, tools and settings of MIM? RQ2: What are the educational effects of the integration of MIM in teaching and learning?

Research questions, search techniques and inclusion criteria

Given the focused questions, a systematic, stepwise review approach was adopted in this study (Cook & West, 2012). The search for eligible studies was conducted in the databases of PsycINFO, ERIC, Ovid MEDLINE (via Ovid®) and Web of Science (including the Social Science Citation Index, the Arts and Humanities Index and the Conference Proceedings Citation Index). These were searched in May 2015 using the key term 'mobile instant messaging'. In the Web of Science database, the field "topic" was used, and in the other databases the search was carried out using "All fields". The time span was not limited and no other limits were set in these databases. In addition, a second round of searches in September 2015 covered the popular instant messaging applications. The Web of Science database was searched combining the following applications with "or" using the "topic" field: WhatsApp, imessage, KaKaoTalk, WeChat, BlackBerry messenger, Facebook messenger and snapchat. The search was refined by including the research area "education and educational research". The time span of 2010-2015 was used on the grounds that most of the popular MIM applications were created in the last few years; for example, WhatsApp was founded in 2009. Using the same time span and keywords in "all fields", ERIC, Ovid MEDLINE(R), PsycINFO and PsycARTICLES were searched again. Further searches with the keywords indicated above, i.e. mobile instant messaging and a combination of the MIM applications together with learning, teaching and education, were carried out in Google Scholar. Abstracts were reviewed, and eligible studies were retrieved and analyzed against the following criteria: (1) generation of primary, i.e. empirical, data through qualitative, quantitative or mixed-study designs; (2) sound methodological design: due to the limited number of publications on MIM in journals, the scope was not restricted to peer-reviewed publications; however, for a study to be considered, it needed to be of acceptable quantitative or qualitative design, i.e. it needed to describe data-gathering procedures as well as analytical techniques; (3) studies were expected to focus on teaching or learning in a broader sense; to account for the potentially broad use of MIM in informal learning environments, the scope was not limited to school-based education but also included work-based, informal or life-long learning environments; (4) the studies considered used MIM as specified in the previous sections. Most importantly, the definition of MIM did not include investigations that examine the application of more traditional text messaging applications such as SMS or MMS, because different dynamics play out there (Church & de Oliveira, 2013). A total of 11 studies matched the above-stated criteria. In the next step, key information was extracted from these studies relating to RQ1. This included the educational effects (e.g. deep and critical learning); educational design (e.g. inquiry-based learning in small groups of learners guided by a teacher as part of a lecture in a blended learning arrangement), tools (e.g.
WhatsApp groups) and setting: learners and education institution (e.g. undergraduates), subject (e.g. business) and geographical location. In addition, information regarding the country and research method was documented for each paper (Table 1).

Data analysis and underlying framework

In order to pool and make sense of the predominantly qualitative research data of this emerging field of educational research, thematic analysis was applied as an approach of formal qualitative synthesis methodology. In qualitative synthesis, study findings are systematically interpreted through a series of expert judgements to represent the meaning of the collected work, including qualitative studies, and sometimes mixed-methods and quantitative research (Bearman & Dawson, 2013). Thematic analysis involves repeated reading and analyzing of the studies and the identification of key themes and concepts related to the research questions. The second author independently evaluated the inclusion of the studies according to the criteria indicated and analyzed the content with regard to the research questions. Diverging interpretations were resolved upon discussion (Pope, Ziebland, & Mays, 2000).

Educational design, tools and settings

MIM supported a broad spectrum of educational designs in diverse subjects, including the collaborative solving of ill-structured pedagogical problems by students in an educational technology course (Kim, Lee, & Kim, 2014). Other MIM-mediated designs enhanced the moderation of discussion and reflection on teaching methods by pre-service Arabic language teachers (Aburezeq & Ishtaiwa, 2013) and promoted language learning through engagement in dialogic writing activities (Castrillo, Martín-Monje, & Barcena, 2014). Further designs involved the development of research skills through the co-creation of group research assignments (Ngaleka & Uys, 2013; Rambe & Chipunza, 2013) and the facilitation of academic lecturer-student and peer consultations among IT students (Bere, 2012; Rambe & Bere, 2013a, 2013b). Although MIM spaces allow bilateral and multilateral conversations, the most common social formation involved group learning approaches (all but one study), most frequently on WhatsApp (all but two studies). These spaces were created by educators in addition to face-to-face teaching, thereby creating blended learning environments. While teachers facilitated and moderated these groups, for example by setting goals, responding to questions, and correcting or disciplining students, much of the interaction was reported to be peer-to-peer in nature. Peer conversations were found to transcend and augment the roles of students, who often contributed to teaching presence, with and without the involvement of teachers (Lam, 2015; Timmis, 2012). The shift manifested in complex hierarchies of knowledge brokers, knowledge seekers and givers, as well as informal mentors, assumed by students (Rambe & Chipunza, 2013). The sample of the studies suggests that MIM has been predominantly researched in higher education settings with university students. Only one study involved teachers from high school environments (Bouhnik & Deshen, 2014). Two studies were situated outside formal educational environments, focusing on the nature of communication amongst students in informal learning settings (Lam, 2015; Timmis, 2012). Timmis alludes to the high relevance of MIM for student communication, as it represented the most frequent and constant digital communication practice (Timmis, 2012).
Also, in the study of Lam (2015), WhatsApp was used by more students than Skype and Facebook in informal learning environments. The geographical scope was broad, including investigations from Europe (Castrillo et al., 2014; Timmis, 2012), the Middle East (Aburezeq & Ishtaiwa, 2013; Bouhnik & Deshen, 2014), Asia (Kim et al., 2014; Lam, 2015) and especially South Africa (Bere, 2012; Ngaleka & Uys, 2013; Rambe & Bere, 2013b; Rambe & Chipunza, 2013). This observation resonates with statistics that highlight the dominance of South Africa in WhatsApp usage (Smith, 2015).

Educational effects

The integration of MIM in teaching and learning results in a range of ambiguous effects. In their content analysis, Rambe and Bere (2013a) identified critical engagement with learning resources, which culminated in transformative learning. This finding was corroborated by the post-surveys in the same study, in which a majority of students associated the academic use of WhatsApp with knowledge creation and deep reflection. Students deemed the MIM conversations to allow sufficient time to review other team members' contributions and provide thoughtful feedback compared to offline discussions (Kim et al., 2014). Even in peer-to-peer settings not prescribed by educators, students engaged in MIM to discuss content-related issues (Timmis, 2012), for instance using WhatsApp groups to perform calculation exercises (Lam, 2015). However, in contrast to promoting cognitive and metacognitive activities, considerable parts of the conversations in other studies tended to be lightweight, involving socializing and playing (Aburezeq & Ishtaiwa, 2013; Kim et al., 2014). Aburezeq and Ishtaiwa (2013) note that nearly half of all postings had fewer than 20 words and were based on brief and quick interactions rather than on reflective, critical or deep thoughts. Kim et al. (2014) also confirm the socializing dimension of MIM usage in the quantitative content analysis of their mixed-method study. They found that mobile and non-mobile IM groups were associated with more social and affective but less cognitive and metacognitive interactions compared with bulletin board messages (Kim et al., 2014). The ambiguity between deep engagement and light conversation can also be noticed in the qualitative part of their investigation. Some learners in the MIM groups tended to state their opinion without reviewing or considering other members' postings, which resulted in a lack of recursive, deep and convergent utterances (Kim et al., 2014). The educational implications and consequences of messages with playful and socializing content were twofold. On the one hand, they culminated in dialogical messages not directly relevant to education, which were criticized by students (Aburezeq & Ishtaiwa, 2013) and deemed upsetting by teachers (Bouhnik & Deshen, 2014). On the other hand, drawing on content analysis, some authors observed that lightweight discussions and socializing, although lacking strong intellectual qualities, are critical to social immersion into the productive use of MIM and lay a foundation for more intellectual conversations (Rambe & Bere, 2013b). Timmis (2012), who found playfulness and enactments of existing relations in the discourse analyses, shared a similar view. She concluded that this performativity, in turn, enhanced the creation and maintenance of a shared experience, a relevant component of collaborative learning. One strategy used by educators to balance learning vs.
socializing was to orient learners towards more focused and productive learning interactions by establishing specific posting requirements and evaluation criteria. In Aburezeq and Ishtaiwa's (2013) study, the messages of the pre-service language teachers needed to reflect the course content and to include new ideas, reflections, opinions and critical thinking beyond mere description or summary. In fact, the established criteria were deemed relevant, and tied to deeper levels of reflection and critical thinking.

Table 1 (excerpt). Study context, method and findings of selected reviewed studies:
(Aburezeq & Ishtaiwa, 2013): Female pre-service Arabic language teachers were requested to make collaborative and individual weekly contributions to the WhatsApp group in addition to the face-to-face meetings. Method: 17 semi-structured interviews and content analysis; verbal analysis method; validation through peer debriefing techniques. Findings: use of the platform can enhance students' instructional interaction, especially student-student interaction; challenges: extra workload, distraction from learning, lack of students' commitment to effective participation.
(Castrillo et al., 2014) / Spain: 85 (out of 450) university students of a "German as a foreign language" course participated in WhatsApp groups; the teacher made the topical proposal, proposed text corrections and facilitated participation. Method: mixed-method content analysis of one group (12 students, 1 teacher); quantitative and qualitative analysis to examine participation patterns and meaning making. Findings: use of the WhatsApp group contributed to the language learning activities (co-constructing knowledge), but members also supported one another and built relationships in the group; utterances bear resemblance to a more informal, "spoken" style of writing.
(Rambe & Chipunza, 2013) / South Africa: Fourth-year human resource management students enrolled in a research methodology module were encouraged to interact on WhatsApp anonymously among themselves, with the lecturer, and with the online facilitator. Method: data analysis of student (n = 72) and student-lecturer conversations and students' blog postings, connected with Sen's conceptualisation of functionings and freedom. Findings: WhatsApp was used to bridge access to learning resources, peer and lecturer support, and leveraging on-task activity; impediments: limited access to Web-enabled smart phones and erratic network connectivity.
(Bere, 2012): Students from a blended learning accounting course.
(Lam, 2015): Students from a full-time higher diploma programme used social media, in particular WhatsApp, not prescribed by teachers. Method: interpretive approach with semi-structured interviews, purposeful sampling (n = 8) and thematic data analysis. Findings: small groups (n = 2-10) were used mainly to ask peers about content-related problems ad hoc, especially before exams, using text, photograph and voice messages; limited teacher involvement.

4. Discussion

Pedagogical implications: the educational role of MIM. The pedagogical implications of this review are manifold. The studies suggest that MIM in its current form is a predominantly welcome group learning activity. However, MIM does not serve as a main conduit in most education settings. Instead, it has been used as a complementary learning platform created to aid pre-existing education formats and compensate for their shortcomings. MIM groups can not only help students achieve direct learning goals but also facilitate engagement in collaborative problem solving in virtual groups (Lam, 2015).
This is a paramount skill in the 21st century, where work is increasingly performed in mobile and distributed settings (Brodt & Verburg, 2007), and it has been considered underdeveloped in the traditional class (Bouhnik & Deshen, 2014). The playful and flexible use of MIM can also compensate for the limited access to, and restricted communication in, more static learning management systems (Bere, 2012). The specific and confined role of MIM for education has been further substantiated by students who valued the WhatsApp usage but were ambivalent about its wide-scale roll-out in different academic programs (Rambe & Bere, 2013a). It was suggested that MIM can be used in particular to support social and affective interactions, which are relevant in the first phases of an educational activity. To promote cognitive and metacognitive interaction, more structured applications such as bulletin boards or face-to-face group work could be used in later phases (Kim et al., 2014). This staged approach is in line with Salmon's (2000) five-step model of online facilitation of authentic activities, such as online discussions. In her model, which ranges from access and motivation to knowledge construction and development, step two emphasizes online socialization as a necessary ingredient for building trust and rapport among acquaintances and socially distant participants and for the fostering and sustenance of a networked online community. At this stage, the moderators' role is to facilitate and familiarize students with the online environment through socialization, and to provide bridges between the socio-cultural aspects of offline and online learning environments, in ways that increase familiarity with peers and reduce the social distance among them. Moreover, socializing can be seen as an inherent part of effective learning itself. This is broadly reflected, for example, in the "social presence" dimension of the community of inquiry theory (Rourke, Anderson, Garrison, & Archer, 2007) and also forms the essence of the "learning as participation" metaphor, where the main route of learning is seen as socializing with and growing into a social community (Lave & Wenger, 1991; Paavola, Lipponen, & Hakkarainen, 2004).

Limitations and future research

While we hope that this synthesis has provided a more complex and nuanced picture of the implications of the educational appropriation of MIM technologies, several limitations need to be acknowledged that also point to directions for future research. Firstly, MIM is a rapidly evolving and changing family of technologies, and thus the investigation at hand can only represent a snapshot in time. This goes hand in hand with the limitation that, despite the increasing popularity of MIM, the literature search strategy revealed only a limited number of MIM papers, which hence may not represent the full range of experiences emerging with the ongoing appropriation of this application in educational settings. With the body of literature primarily made up of qualitative studies and descriptive quantitative investigations, there is an obvious need for more rigorous quantitative research designs that, for example, compare the differences between MIM and other communication modes more systematically. In contrast to the previous review, which points to the role of IM in displaying and playing with identity (Quan-Haase, 2008), this aspect was not evident in the present study.
Another possible gap that could be addressed by future research is the lack of MIM studies outside higher education settings, especially considering the intensive use of MIM by the age group of 13-16 years and, to a lesser extent, also by the cohort of 9-12-year-old learners (Mascheroni & Ólafsson, 2014).

Conclusions

One of the main contributions of this study is to systematically summarize and synthesize the literature on the educational usage of MIM, one of the most dominant modes of contemporary student communication. Its integration in teaching and learning can support group learning approaches in a wide range of subjects. MIM serves as a complementary learning arrangement to enhance pre-existing education formats. The use of MIM in education settings transcends and amplifies the roles of students and requires learners and educators to balance the social and cognitive dimensions of interactional engagement.
4,429.6
2017-01-01T00:00:00.000
[ "Education", "Computer Science" ]
Modeling Event Clustering Using the m-Memory Cox-Type Self-Exciting Intensity Model

In the analysis of point processes or recurrent events, the self-exciting component can be an important factor in understanding and predicting event occurrence. A Cox-type self-exciting intensity point process is generally not a proper model because of its explosion in finite time. However, the model with m-memory is appropriate for analyzing sequences of recurrent events. It assumes that the most recent m events multiplicatively affect the conditional intensity of event occurrence. Aside from its interpretability, one advantage is the simplicity of estimation and inference: the Cox partial likelihood can be applied, and the resulting estimator is consistent and asymptotically normal. Another advantage is that the model can be applied to the analysis of case-cohort data via the pseudo-likelihood approach. The simulation studies support the asymptotic theory. Application is illustrated with the analysis of a bladder cancer dataset and of an Australian stock index dataset, which shows evidence of self-excitation.

Introduction

Recurrent event data are encountered frequently in many areas of scientific endeavor, such as the modeling and prediction of earthquakes and other disastrous events, the study of the patterns of neural firings in neuroscience, the assessment of the efficacy of cancer medications in suppressing the recurrence of tumors, and the analysis of the risk of default on debt repayments by borrowers. Point processes are natural stochastic process models for the modeling and analysis of recurrent event data. Depending on the form of the data available and the research questions of interest, one of two types of point process models might be appropriate. If the data is in the form of a single long string of event recurrence times, it might be of interest to predict the next event recurrence time by exploiting potential dependence of the waiting times between events on past events or on exogenous covariates. Models of this type include the self-exciting point process (Hawkes, 1971; Ogata, 1978), the modulated renewal process (Cox, 1972; Oakes & Cui, 1994; Lin & Fine, 2009) and the autoregressive conditional duration models (Engle & Russell, 1998; Fernandes & Grammig, 2006). Another form of data, which appears most often in medical statistics, consists of multiple strings of event times and covariates for each string. The number of events in each string is typically small due to censoring, and some individuals might not have experienced a single event by the censoring time. For data in this form, the main interest in practice is to assess the effects of the covariates on the frequency of event recurrence. Examples of models that suit this purpose include the Cox proportional intensities (CoxPI) model (Andersen & Gill, 1982) and the proportional means model (Lin et al., 2000; Wellner & Zhang, 2007).
In this paper we consider a model that suits the analysis of data in the multiple-string form. We are motivated by the temporal clustering of event times observed in individual strings with multiple events. The temporal clustering of event times indicates a potential self-exciting effect among the events, which, if not properly accounted for, can lead to erroneous inferences about the effects of the covariates. Although the CoxPI model does not explicitly account for the potential self-exciting effect and therefore is not directly suitable for data with signs of event clustering, its many well-known theoretical and computational advantages motivate us to build our model based on it. The aim is to explicitly incorporate a self-exciting feature in the model, while at the same time retaining as many advantages of the CoxPI model as possible.

The method to model event clustering in this paper is motivated by the aforementioned Hawkes self-exciting point process model, which is a simple point process N(t) with intensity process λ(t) in the self-exciting form

λ(t) = ν + ∫₀^{t−} g(t − s) dN(s),  (1)

where ν > 0 is the background event intensity and g(·) ≥ 0 is the excitation function. The CoxPI model is a simple point process with intensity process given by

λ(t) = λ₀(t) exp{z(t)ᵀ β},

where λ₀(t) is a baseline intensity function, z(t) is a vector-valued process of covariates, and β a vector of parameters. A naïve extension of the CoxPI model by adding the integral term in (1) to the logarithm of the intensity, i.e.,

log λ(t) = log λ₀(t) + z(t)ᵀ β + ∫₀^{t−} g(t − s) dN(s),  (2)

does not lead to an appropriate model, because such a model can easily be explosive; see Remark 1 below for an explanation. However, if we modify the integral term in (2) by restricting the contribution of past events on the current event intensity to the most recent m (< ∞) events, then the resulting model does not suffer from the explosion issue and still possesses an explicit self-excitation feature. Such a model, which we call the m-memory Cox-type self-exciting intensity (CoxSEI(m)) model, is an appropriate model for recurrent event data with temporal clustering of event times.

The rest of this paper is organized as follows. In Section 2, we present the CoxSEI(m) model and the estimation procedure. In Section 3, we present some asymptotic properties of the estimators. In Section 4, we report the results of some simulation studies and the analysis of a bladder cancer data set and an Australian stock index data set. Section 5 concludes with a discussion. Technical proofs are relegated to the Appendix. All computation was done in R (R Core Team, 2014) with the aid of the package coxsei written by the authors.
The CoxSEI(m) Model and the Estimation Procedure

Consider a point process N(t) = Σ_{j=1}^∞ 1{T_j ≤ t}, with t ∈ [0, ∞) and T_j denoting the j-th event time. As a CoxSEI(m) point process, N(·) has a conditional intensity process given by

λ(t) = μ(t) exp{Z(t)ᵀ β + φ(t)},  (3)

where μ(t) is an unspecified baseline intensity, Z(t) is a possibly time-varying p-vector of covariates, β is a p-vector of regression coefficients which measures the effects of the covariates on the intensity on the log scale, and φ(t) is a self-exciting term depending on past events of the process,

φ(t) = α Σ_{j∈M(t)} g(t − T_j, γ),  (4)

where M(t) denotes the set of indexes of the most recent m events in the past. The excitation function g is specified up to a parameter γ. Normally g is a positive decaying function, and the parameter γ regulates the decay rate. The decay of g implies that the more recent events have stronger direct effects on the current event intensity than the events in the more remote past. Typical examples of g include the exponential decay function g(t, γ) = exp(−γt) and the polynomial function g(t, γ) = (1 + t)^{−γ}, with γ > 0 (e.g., Errais et al., 2010; Ogata, 1988). The parameter α measures the initial magnitude of the self-exciting effect. While a positive α implies that the self-exciting effect is genuinely excitatory, a negative α would imply that the "self-exciting" effect is in fact inhibitory (Kopperschmidt & Stute, 2009).

Remark 1 We assume m to be a positive integer. If m = 0, the self-exciting component vanishes and the CoxSEI(m) model (3) reduces to a CoxPI model. If m = ∞, the CoxSEI(m) model becomes an infinite-memory Cox-type self-exciting process. In this case, the process will be explosive under fairly general conditions if α > 0. To see this, suppose the baseline intensity μ(·) is bounded away from 0 and ∞, g(t, γ) > 0 is decreasing in t, the covariate processes Z(·) are bounded, and the regression coefficients β are all finite. Write c = inf{μ(t) exp(Z(t)ᵀ β) : t ≥ 0} > 0. Let ΔT₁ = T₁ and ΔT_j = T_j − T_{j−1}, j ≥ 2, denote the durations between events. For any fixed t > 0, there exists ε > 0 such that ε Σ_{j=1}^∞ 1/j² < t. As a result,

Pr(Σ_{j=1}^∞ ΔT_j ≤ t) ≥ Pr(ΔT_j ≤ ε/j², j = 1, 2, …).  (5)

Clearly the probability on the right-hand side of (5) can be written as a product of conditional probabilities of the successive durations. For CoxSEI(m) processes with finite m, under mild regularity conditions, such as C1-C4 to be presented later, the intensity process λ(·) is bounded away from 0 and ∞ with probability one. As a result, it will not be explosive (with probability 1). We shall only consider the CoxSEI(m) model with finite m.

Remark 2 Under the CoxSEI(m) model, a certain Markov property can be derived for the process. Set T_k = 0 for k ≤ 0 for notational convenience. Let ξ_j(t) = T_{N(t−)−j+1}, 1 ≤ j ≤ m, be the times of the most recent m events before time t. Let ξ(t) = (ξ₁(t), …, ξ_m(t))ᵀ, t ≥ 0, be an m-vector continuous-time process. It can be verified that, given the covariates and ξ(t), ξ(s) and ξ(τ) with s < t < τ are conditionally independent. Therefore ξ(t) is a continuous-time Markov process of dimension m, conditional on the covariates.
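To make the model concrete, here is a minimal Python sketch of the conditional intensity (3)-(4) with the exponential excitation g(t, γ) = exp(−γt); the function name and array layout are illustrative, not the authors' coxsei package, and the example values are borrowed from the simulation study reported later in the paper.

```python
import numpy as np

def coxsei_intensity(t, event_times, z_t, beta, mu_t, alpha, gamma, m):
    """Conditional intensity (3)-(4) of a CoxSEI(m) process at time t.

    t           : evaluation time
    event_times : sorted array of past event times; only those strictly before t are used
    z_t         : covariate vector Z(t)
    beta        : regression coefficients
    mu_t        : baseline intensity mu(t) evaluated at t
    alpha,gamma : excitation magnitude and exponential decay rate
    m           : memory, i.e. how many of the most recent events contribute
    """
    past = event_times[event_times < t]
    recent = past[-m:] if m > 0 else past[:0]            # M(t): the most recent m events
    phi = alpha * np.sum(np.exp(-gamma * (t - recent)))  # phi(t) in (4)
    return mu_t * np.exp(np.dot(z_t, beta) + phi)        # lambda(t) in (3)

# Example call with the design used in the simulation study of this paper
lam = coxsei_intensity(
    t=2.0,
    event_times=np.array([0.4, 1.1, 1.9]),
    z_t=np.array([1.0, 2.0, 1.0]),
    beta=np.array([0.2, 0.4, 0.6]),
    mu_t=1 + 0.5 * np.cos(2 * np.pi * 2.0),  # mu(t) = 1 + 0.5 cos(2*pi*t)
    alpha=0.7, gamma=10.0, m=2,
)
```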
Suppose we have n independent observations of the CoxSEI(m) process N(t) and the covariate process Z(t) up to a censoring time C, which is assumed to be independent of N(t) conditional on Z(t). Denote the observations by {(Nᵢ(·), Zᵢ(·), Cᵢ), i = 1, …, n}. The estimation of the CoxSEI(m) model is along the same lines as that of the CoxPI model. The estimation of the parametric part relies on the Cox partial likelihood, and the estimation of the cumulative baseline intensity is motivated by the Breslow estimator (Breslow, 1972) as in the CoxPI model. Specifically, we note that given the history of the n subjects prior to time t and the observation that an event occurs at time t, the conditional probability that the event pertains to the i-th subject is

λᵢ(t) / Σ_{j∈R_t} λⱼ(t) = Ψᵢ(t, θ) / Σ_{j∈R_t} Ψⱼ(t, θ),

where R_t denotes the risk set of subjects still under observation at time t and Ψᵢ(t, θ) = exp{Zᵢ(t)ᵀ β + φᵢ(t)}, so that λᵢ(t) = Ψᵢ(t, θ) μ(t). The maximum partial likelihood estimator θ̂ is defined as the maximizer of the partial likelihood L(θ) over the parameter space Θ ⊂ R^{p+2}. The estimator of the cumulative baseline intensity function U(t) = ∫₀ᵗ μ(s) ds is similar to the Breslow estimator (Breslow, 1972) and is given by

Û(t) = ∫₀ᵗ dN•(s) / Σ_{i∈R_s} Ψᵢ(s, θ̂),

where N•(t) denotes the aggregate counting process of the n subjects.

Large Sample Properties of the Estimators

The following conditions are needed to prove the large sample properties of θ̂. Let θ₀ be the true value of θ in Θ. We use the symbols ∂_θ and ∂²_{θθᵀ} to denote the operators of finding first and second order partial derivatives with respect to θ.
C1. The covariate process Z(·) is bounded.
C2. The parameter space Θ is closed, bounded and connected, and contains θ₀ as an interior point.
C3. The excitation function g(t, γ) is positive, bounded, decreasing in t, and twice continuously differentiable in γ. The baseline intensity μ(·) is bounded and continuous.
C4. The matrix Σ(θ) is finite, positive definite and continuous at θ₀.

Remark 3 C1 is commonly assumed in the literature and C2 is an identifiability condition. In C3, the monotonicity of g is practical but can be relaxed. C4 is essential to the asymptotic normality of θ̂. Unlike in the classical Cox model, the global concavity of the log-partial likelihood does not automatically hold for the CoxSEI(m) process.

The large sample properties of θ̂ and Û(·) are given in the following two propositions.

Proposition 4 Assume C1-C4 hold. Then θ̂ is consistent for θ₀ and √n(θ̂ − θ₀) is asymptotically normal with mean zero and variance Σ⁻¹, where Σ = Σ(θ₀), which can be consistently estimated by I(θ̂).

Remark 5 Similar to the efficiency of the Cox partial likelihood estimator in the proportional hazards model, it can be verified that Σ is the Fisher information matrix for θ and that, as a result, θ̂ is a semiparametrically efficient estimator of θ.
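As a hedged illustration of the estimation step, the following sketch evaluates the negative log Cox partial likelihood of the CoxSEI(m) model over the pooled event times; the data layout and names are ours, and in practice the authors' coxsei R package should be used instead.

```python
import numpy as np

def psi(t, events, z, beta, alpha, gamma, m):
    # Relative-risk factor Psi_i(t, theta) = exp{Z^T beta + phi(t)}; mu(t) cancels
    past = events[events < t]
    recent = past[-m:]
    phi = alpha * np.sum(np.exp(-gamma * (t - recent)))
    return np.exp(np.dot(z, beta) + phi)

def neg_log_partial_likelihood(theta, data, m, p):
    """data: list of (event_times, z, censor_time) per subject (static covariates
    assumed for simplicity); theta = (beta_1, ..., beta_p, alpha, gamma)."""
    beta, alpha, gamma = theta[:p], theta[p], theta[p + 1]
    nll = 0.0
    for ev_i, z_i, _ in data:
        for t in ev_i:
            # Risk set R_t: subjects still under observation at time t
            denom = sum(
                psi(t, ev_j, z_j, beta, alpha, gamma, m)
                for ev_j, z_j, c_j in data if c_j >= t
            )
            nll -= np.log(psi(t, ev_i, z_i, beta, alpha, gamma, m) / denom)
    return nll

# The function can be handed to a generic optimiser with some initial value theta0, e.g.
# from scipy.optimize import minimize
# fit = minimize(neg_log_partial_likelihood, theta0, args=(data, 2, 3))
```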
Proposition 6 Let μ₀(·) and U₀(·) be the true baseline intensity and baseline cumulative intensity functions, respectively. Let r⁽ᵐ⁾(t, θ), R⁽ᵐ⁾(t, θ), m = 0, 1, 2, and I(θ) be as defined by (A.1)-(A.7) in the Appendix. Assume C1-C4 hold. Then the process √n{Û(·) − U₀(·)} converges weakly to a Gaussian process with mean zero whose covariance function can be estimated uniformly consistently by the variance estimator (7).

Remark 7 The large sample distribution of θ̂ is approximately normal with mean θ₀ and variance I(θ̂)⁻¹, and the distribution of Û(t) is approximately normal with mean U₀(t) and variance given by (7). If the baseline intensity function μ(·) rather than its integral is of interest, then we can estimate it using one of the many nonparametric methods available, such as kernel smoothing (Ramlau-Hansen, 1983) or the local polynomial method (Chen et al., 2011). To this end, we first note that the intensity process of the aggregate process N•(t) has the multiplicative form {Σ_{i∈R_t} Ψᵢ(t, θ)} μ(t). Since the nonparametric estimator of μ(t) with the exposure process Σ_{i∈R_t} Ψᵢ(t, θ) fully known typically has a rate of convergence slower than √n, while the plug-in estimator of the exposure process Σ_{i∈R_t} Ψᵢ(t, θ̂) has a √n rate, we can simply estimate μ(t) by treating the estimated exposure process as if it were the true exposure process.

The proof of Proposition 4 is given in the Appendix. The proof of Proposition 6 is essentially the same as that of Theorem 3.4 and Corollary 3.5 in Andersen and Gill (1982), and is omitted.

Simulation

This section reports the results of a simulation study. The simulation model is CoxSEI(2) with baseline intensity μ(t) = 1 + 0.5 cos(2πt) and excitation term α g(t, γ) = α exp(−γt) = 0.7 e^{−10t}. The covariate process has three static components Zᵢ, i = 1, 2, 3.
Their design distributions are Uniform[0.5, 1.5], Uniform[1.5, 2.5] and Bernoulli(0.5), respectively. The regression coefficients associated with the Zᵢ are β₁ = 0.2, β₂ = 0.4, β₃ = 0.6. The censoring variable is independently generated, following a lognormal(0, 0.1) distribution. The sample size is n = 100, and the simulation was repeated 100 times. The results are summarized in Table 1. It is seen that the estimates of the parameters βᵢ, α and γ appear unbiased, and the estimates of the standard errors are close to the empirical ones. The empirical distributions of all estimates are very close to normal distributions, with the two-sided Kolmogorov-Smirnov tests of normality all having p-values much greater than 0.05. The estimates of the cumulative baseline intensity function are shown in the left panel of Figure 1 and are close to the true cumulative intensity function. The standard error estimates calculated from the variance estimator (7) are shown in the right panel of Figure 1 together with the empirical standard errors, from which we note that the variance estimator (7) for the cumulative intensity estimator is slightly biased upward, but not by much. We therefore conclude that the proposed estimation procedure works well and conforms with the theory. To evaluate the effects of neglecting self-excitation on the estimation of the covariate effects, we fitted the ordinary CoxPI model to the data generated from the CoxSEI(2) model. The results are shown in Table 2. The estimated covariate effects are clearly inflated and the standard errors are generally underestimated. This implies that application of the CoxPI model to recurrent event data without accounting for the potential self-exciting effect may lead to erroneous inference about the covariate effects.

Analysis of a Bladder Cancer Dataset

We illustrate the CoxSEI(m) model with two real-life examples. The first is a bladder cancer study reported by Byar (1980) and frequently used to illustrate event history data analysis methods (e.g. Wei et al., 1989; Therneau & Hamilton, 1997; Wellner & Zhang, 2007). A total of 118 patients with superficial bladder tumors were admitted to the study between November 1971 and August 1976. The tumors were removed transurethrally and patients were randomly assigned to one of three treatment groups: placebo, pyridoxine, and thiotepa. For patients who experienced tumor recurrence, the new tumors were removed at each visit. The initial number of tumors and the size of the largest initial tumor were recorded for each patient. The censoring time was the earlier of death (due to bladder cancer or other causes) and the end of study. The follow-up times of the patients vary from 0 to 64 months, and the number of recurrences experienced by the patients varies between 0 and 9, with mean 1.6 and variance 5.3; see Figure 2. The data is available from the R package survival (Therneau and original Splus→R port by Thomas Lumley, 2011). We have made slight modifications to the data by adding 0.5 to the two 0 censoring times and to the censoring times that equal the corresponding patient's last recurrence time. These modifications caused no appreciable difference to the numerical result of fitting the CoxPI model using the coxph function from the R package survival.
We fitted the CoxSEI(m) model to the modified data with m = 0, 1, …, 9 and calculated the corresponding values of the Akaike information criterion (AIC), defined as minus twice the maximized log-partial likelihood plus twice the number of parameters involved in the partial likelihood. With m = 0 the AIC value was 1626.5, while with m ≥ 1 the AIC values were in the range [1552.9, 1571.6], with the minimum achieved by m = 2, which suggests the CoxSEI(2) model gives the best fit to the data. The results of fitting the CoxPI and the CoxSEI(2) models are shown in Table 3. It is noted that under the CoxPI model the treatment thiotepa has a statistically significant suppressing effect on the tumor recurrence intensity in the presence of the other covariates. However, in the CoxSEI(2) model, while thiotepa still seems to have a beneficial effect in the presence of the other covariates and the self-exciting effect, the beneficial effect is much less conclusive, with a p-value substantially greater than 0.05 even if a one-sided alternative is assumed. Since the estimated α parameter of the self-exciting term is positive and statistically highly significant, and the AIC suggests that CoxSEI(m) with m > 0 fits the data much better than the CoxPI model, it seems plausible to conclude that the self-exciting effect among bladder tumor recurrences is genuine.

From a biological point of view, it also seems natural to suspect that the occurrence of a tumor and the ensuing surgery to remove it could damage the bladder tissue, rendering further tumor recurrences more likely to happen. The neglect of the self-exciting effect could have been the cause of the inflated beneficial effect of thiotepa in the ordinary CoxPI model, which is similar to the false positives caused by fitting generalized linear models to overdispersed data without properly accounting for overdispersion.

Analysis of an Australian Stock Index Data Set

As an example where the baseline event intensity might also be of interest, we consider data on intra-day times of exceedance of a threshold value by the tick-by-tick return of an Australian stock index, the All Ordinaries Index. Our consideration of the index return exceedance process is motivated by Embrechts et al. (2011). During the period from 1 January 1996 to 3 June 2011 GMT, there were roughly 4,000,000 price moves of the All Ordinaries Index. The corresponding tick-by-tick log-returns varied in the range [−1.114, 1.103] × 10⁻¹, with the maximum and minimum returns attained at 10:13:33.614 and 10:14:01.295, respectively, on 28 June 2010. The 99th percentile of the returns was q(0.99) = 4.39 × 10⁻⁴. For the purpose of illustrating the CoxSEI(m) model, we only considered the intra-day times in year 2010 at which the returns exceeded q(0.99). There were 3,131 such exceedances on 254 trading dates in 2010. We filtered out the data on 24 and 31 December 2010, as the stock exchange closed early at 14:10 on these two days and the baseline event intensity near 14:10 on these days would be substantially higher than on regular trading days, when the market closes at 16:10. Since the market dynamics of after-hours trading is expected to differ from that of normal-hours trading, we also excluded the data outside the normal trading hours, 10:00-16:10. This left us with 3,030 exceedances on 252 trading days. The daily number of exceedances varied between 0 and 66, with mean 12.02 and variance 111.14.
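The data preparation just described can be sketched in Python as follows; the file name and column name are hypothetical, and this is only an illustration of the filtering steps, not the authors' code (who worked in R).

```python
import numpy as np
import pandas as pd

# Hypothetical tick file with a datetime index and an index-level column 'price'
ticks = pd.read_csv("aord_ticks.csv", index_col=0, parse_dates=True)

log_ret = np.log(ticks["price"]).diff().dropna()   # tick-by-tick log-returns
q99 = log_ret.quantile(0.99)                       # the threshold q(0.99)

exceed = log_ret[log_ret > q99].loc["2010"]        # exceedance ticks in 2010
exceed = exceed.between_time("10:00", "16:10")     # normal trading hours only

# Drop the two early-closing days, 24 and 31 December 2010
short_days = pd.to_datetime(["2010-12-24", "2010-12-31"])
exceed = exceed[~exceed.index.normalize().isin(short_days)]

# Intra-day event times measured in hours, grouped per trading day when fitting
idx = exceed.index
event_hours = idx.hour + idx.minute / 60 + idx.second / 3600
```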
To apply the CoxSEI(m) model, we need the assumption that the return exceedance processes on different days are conditionally independent. A time series plot of the daily number of return exceedances showed quite strong serial correlation, even after weekday and month of year were accounted for using a Poisson regression. However, if we fit an order-1 autoregressive time series model with weekday and month as external categorical covariates, then the Ljung-Box tests reveal no significant serial correlation among the residuals, with p-values > 0.05 up to lag 14 and > 0.01 up to lag 20. Therefore we assumed that the daily exceedance processes were conditionally independent given weekday, month and the number of exceedances on the previous trading day. We fitted CoxSEI(m) models with exponential excitation term α g(t, γ) = α exp(−γt) and different m values to the data, and then selected the value of m using the AIC. The unit used in measuring time is the hour.

The AIC value was 31766.2 when m = 0, and in the range [31435.5, 31634.7] when m ≥ 1, with the minimal value 31435.5 attained by m = 1. The parametric part of the results of fitting the CoxSEI(1) model is shown in Table 4, from which we note that the number of exceedances on the previous day (yesterday) has a highly significant positive effect on the current day's exceedance intensity. This could be interpreted as an inter-day exciting effect among the return exceedances on the All Ordinaries Index. The month effect is significant, with February, June, July and August seeing more, and April fewer, exceedances than January. In the presence of other variables, the differences between March, May, September, October, November, December and January were not significant. The weekday effects do not seem to be individually significant. The parameter α is highly significant with a positive value, suggesting the existence of an intra-day exciting effect among the return exceedances. The parameter γ is also highly significantly different from 0, indicating that the self-exciting effect decays over time. The month effect we have observed on the return exceedance intensity is reminiscent of the January effect in financial returns observed in the US financial market. In view of the common theory which relates the January effect to the end of the fiscal year in the US, we might also speculate that Australia's end of the fiscal year in June has contributed to increased market volatility, which is reflected by the increased intensity of the return exceedance process. Considering the potentially erroneous inference about the covariate effects that could be caused by neglecting the self-exciting effects, it seems warranted to develop formal statistical tests to detect the existence of the self-exciting/inhibitory effect. While the likelihood ratio test seems a natural candidate test, the asymptotic null distribution of the likelihood ratio statistic is non-standard. The reason is that under the null hypothesis of no self-exciting effect, or equivalently α = 0 in the examples considered in this paper, the parameter γ is unidentifiable and the asymptotic normality of γ̂ fails; therefore, the asymptotic distribution of the likelihood ratio statistic under the null fails to be χ². In an unreported simulation study, we found that the empirical distribution of the likelihood ratio statistic deviates substantially from the χ²₁ and χ²₂ distributions. Continuing work concerning the asymptotic null distribution of the likelihood ratio test or concerning
other tests is desirable.

In constructing the self-excitation term (4) in the CoxSEI(m) model, we have parametrized the effects of the most recent m events on the current event intensity in the form α g(t − T_{N(t−)+1−j}, γ) rather than using m unstructured coefficients corresponding to the m events, respectively. The consideration behind this choice is interpretability. With a decreasing function g, the individual excitation effects on the current event intensity associated with recent events are monotone, with more recent events having more significant effects, which tends to agree with our intuition. In contrast, the unstructured-coefficients approach could give rise to estimated coefficients with erratic patterns which are hard to interpret.

The CoxSEI(m) model considered in this work may appear to be a special case of the CoxPI model with a time-dependent covariate Σ_{j∈M(t)} g(t − T_j, γ). However, this is generally not the case because of the nonlinear dependence of g on the unknown parameter γ.

In the real data examples, our choice of the parametric form of the excitation function is essentially arbitrary, and we have not considered how to select the excitation function using any data-driven procedures. The main reason is that, for the correct estimation of the covariate effects and the baseline intensity function, the specific choice of the excitation function is much less important than the inclusion of the self-excitation term in the model. However, further work concerning formal specification tests for the excitation function is clearly desirable.

From the viewpoint of explicitly accounting for potential self-exciting effects in intensity-based regression analysis of recurrent event data, one can also consider the combination of the Hawkes self-exciting point process model with the Aalen additive intensity regression model (Aalen, 1980). Although care is needed in fitting such a model to guarantee the positivity of the intensity process, and the accommodation of self-inhibitory effects might not be as easy, this additive model is arguably more intuitive and easier to interpret in specific contexts. Therefore such a model also deserves investigation, and shall be considered elsewhere.

Another advantage of the CoxSEI(m) model is that it can be applied to the analysis of data collected by the cost-effective case-cohort design (Prentice, 1986), with inference based on a pseudo-likelihood approach; for details, see the companion paper by F. Chen and K. Chen (2014).

Figure 1. The estimates of the cumulative baseline intensity function (left) and of the standard errors (right) based on the simulated data.
Figure 2. Bladder tumor recurrence (solid points) and censoring (end of line) times of the 118 bladder cancer patients.
In Figure 3 we show the estimated cumulative baseline intensity function and a local linear estimate of the baseline intensity function using the method discussed in Remark 7. From the figure we note that the baseline intensity of return exceedance at market open is substantially higher than during the rest of the trading hours, and that the intensity during the morning hours is generally higher than in the afternoon hours. The very high return exceedance intensity at market open is to be expected, considering that the occurrence of large and sudden price changes of the constituents of the index is likely due to the availability of price-impacting information accumulated overnight, when the local exchange is closed but many overseas exchanges are still running. The relatively high intensity during the rest of the morning hours could be linked with the opening of Asian stock exchanges, such as the Malaysian and Singapore stock exchanges at 11:00 AEST (Australian Eastern Standard Time) and the Hong Kong and Mainland China exchanges at 11:30 AEST. These opening times are postponed by an hour when Australia observes Daylight Saving Time, from the first Sunday of October to the first Saturday of April.

Figure 3. Estimated cumulative baseline intensity (left) and baseline intensity (right) of the All Ordinaries Index return exceedance process, with point-wise 95% confidence limits.

5. Discussion

In this paper we have considered an extension of the CoxPI model, called the CoxSEI(m) model, for the analysis of recurrent event data that have the feature of temporal clustering of events experienced by the same individual.

Table 1. Results of the simulation: fitting the correct model (p-values of the K-S test of normality: 0.976, 0.937, 0.833, 0.761, 0.622).
Table 2. Results of the simulation: fitting the CoxPI model to data generated by CoxSEI(2) models.
Table 3. Results of fitting the CoxPI and CoxSEI(2) models to the bladder cancer data.
Table 4. Parametric part of the results of fitting the CoxSEI(1) model to the All Ordinaries Index data.
6,497.4
2014-07-28T00:00:00.000
[ "Mathematics" ]
MTDDI: a graph convolutional network framework for predicting Multi-Type Drug-Drug Interactions

Although polypharmacy has both higher therapeutic efficacy and less drug resistance in combating complex diseases, drug-drug interactions (DDIs) may trigger unexpected pharmacological effects, such as side effects, adverse reactions, or even serious toxicity. Thus, it is crucial to identify DDIs and explore their underlying mechanism (e.g., DDI types) for polypharmacy safety. However, the detection of DDIs in assays is still time-consuming and costly, due to the need for experimental search over a large drug combinational space. Machine learning methods have been proved a promising and efficient approach for preliminary DDI screening. Most shallow learning-based predictive methods focus on whether a drug interacts with another or not. Although deep learning (DL)-based predictive methods address a more realistic screening task of identifying the DDI types, they only predict the DDI types of known DDIs, ignoring the structural relationship between DDI entries, and they also cannot reveal knowledge about the dependence between DDI types. Thus, here we propose a novel end-to-end deep learning-based predictive method (called MTDDI) to predict DDIs as well as their types, exploring the underlying mechanism of DDIs. MTDDI designs an encoder derived from enhanced deep relational graph convolutional networks to capture the structural relationship between multi-type DDI entries, and adopts a tensor-like decoder to uniformly model both single-fold interactions and multi-fold interactions to reflect the relation between DDI types. The results …

Introduction

Polypharmacy, also termed drug combination, has become a promising strategy for treating complex diseases (e.g., diabetes and cancer) in recent years [1]. When two or more drugs are taken together, they may trigger unexpected side effects, adverse reactions, and even serious toxicity [2]. The pharmacological effects triggered by multiple drugs in a treatment are named drug-drug interactions (DDIs). DDIs can be divided into two cases. One case is that a pair of drugs triggers only one pharmacological effect; the other is that a pair of drugs causes two or more related pharmacological effects. We call the former a single-fold interaction and the latter a multi-fold interaction. For example, the interaction between Sucralfate and Metoclopramide reads "Sucralfate may decrease the excretion rate of Metoclopramide, resulting in a higher serum level". Apparently, this pair of drugs may trigger two related pharmacokinetic effects, namely Excretion and Serum Concentration.

Therefore, it is crucial to identify DDIs and unravel their underlying mechanisms for polypharmacy safety. However, it is still both time-consuming and costly to detect DDIs among a large scale of drug pairs in assays. Over the past decade, the build-up of experimentally determined DDI entries has boosted the application of computational methods to find potential DDIs [3], especially machine learning-based methods.
In recent years, other deep learning (DL)-based predictive methods [11, 12, 22] have been developed to address another screening task, that of identifying the pharmacological effects caused by known DDIs, i.e., predicting multi-type DDIs. For example, DeepDDI [11] designs a nine-layer deep neural network to predict 86 types of DDIs by using the structural information of drug pairs as inputs. Lee et al. [22] predict the pharmacological effects of DDIs by using three drug similarity profiles, including structural similarity profiles, Gene Ontology term similarity profiles and target gene similarity profiles of known drug pairs, to train a three-layer autoencoder and an eight-layer deep feed-forward network. DDIMDL [12] predicts DDI events by using the drug similarity features computed from chemical substructures, targets, enzymes and pathways to separately train three-layer deep neural networks (DNNs), and then averages (sums up) the individual predictions of those trained DNNs as the final prediction.

Despite these efforts on identifying multi-type DDIs, there is still the following room for improvement of DL-based methods. i) Existing DL-based methods require known DDIs as input, while the interactions of most drug pairs are unknown; therefore, it is necessary to develop new algorithms to identify whether an unknown drug pair has one or more pharmacological effects. ii) DDIs form an interaction network that can help improve a predictor's performance; however, existing DL-based methods treat drug pairs as independent samples, ignoring the structural relationship between DDI entries. iii) Existing DL-based methods cannot reveal knowledge (e.g., that the excretion of a drug slows down due to its increasing serum concentration caused by a DDI) about the relation between DDI types. To address the above issues, we propose a novel predictive method (called MTDDI) to identify whether an unknown drug pair results in one or more pharmacological effects.

The main contributions of our work are as follows: i) MTDDI leverages an encoder based on an enhanced relational graph convolutional network (R-GCN) to capture the structural relationship between multi-type DDI entries. ii) MTDDI employs a tensor-like decoder to uniformly model both single-fold interactions and multi-fold interactions for identifying whether an unlabeled type-specific drug pair results in one or more pharmacological effects. iii) MTDDI adopts a set of type-specific feature importance matrices (i.e., a tensor) in the decoder to reveal the dependency between DDI types by calculating their correlations.
Datasets

We built the multi-type DDI dataset by collecting DDI entries from DrugBank (July 16, 2020) [23] in the following steps. First, we downloaded the complete XML-formatted database (including the comprehensive profiles of 11,440 drugs), from which we selected 2,926 small-molecule drugs together with their chemical structures and binding proteins. After extracting all descriptive sentences of DDIs from the XML file, we collected 859,662 interaction entries among the 2,926 drugs in total. Furthermore, we obtained 274 different interaction patterns by parsing these sentences. According to the pharmacological effects triggered by DDIs [24], we finally grouped these patterns into 11 types of DDIs: Absorption, Metabolism, Serum Concentration, Excretion, Synergy Activity, Antagonism Activity, Toxicity Activity, Adverse Effect, Antagonism Effect, Synergy Effect, and PD triggered by PK [25].

The task of multi-type DDI prediction directly discriminates whether an unknown drug pair results in one or more pharmacological effects of interest (Fig. 1-C). It learns a set of functions mapping drug pairs to the interaction types of interest. This work focuses on the task of multi-type DDI prediction, since the second task is just its degraded version. Referring to DDI-triggered pharmacological effects as interaction types, we represent a set of multi-type DDIs as a multi-relation complex network G = (V, E), where the vertices are drugs and the edges between vertices are multi-type interactions (Fig. 2-A). Let V = {v₁, v₂, …, v_n} be the vertex set, R = {r₁, r₂, …, r_m} be the interaction type set, and (vᵢ, r, vⱼ) be the interaction of type r caused by the pair of drugs vᵢ and vⱼ. Furthermore, G is decomposed into m sliced sub-networks {G₁, G₂, …, G_m} with respect to the interaction types (Fig. 2-A).

Feature extraction

In addition to interaction entries, we extracted drug chemical structures, represented by SMILES strings, as well as drug binding proteins (DBPs), including targets, enzymes, transporters, and carriers. Drug chemical structures were encoded into feature vectors by Extended Connectivity Fingerprints (ECFPs) [26] and MACCSkeys Fingerprints [26], respectively. ECFPs represent a molecular structure through circular atom neighborhoods as a 1024-dimensional binary vector, where each element denotes the presence or absence of a specific functional substructure. In contrast, the MACCSkeys Fingerprints represent a molecular structure as a 166-dimensional binary vector w.r.t. a set of pre-defined substructures. These two fingerprints are computed with the rdkit package in Python, and the radius of the ECFP neighborhood is set to 4. Moreover, we consider DBPs (targets, transporters, enzymes, and carrier proteins) as the third type of drug features, because they are crucial factors when a DDI occurs. Each drug is then represented as a 3334-dimensional binary vector in which each element indicates whether the drug binds to a specific protein. Finally, by calculating Tanimoto coefficients between drug feature vectors, we obtained three drug similarities derived from ECFP_4, MACCSkeys and DBPs, respectively.

Model construction

Based on the above representation of multi-type DDIs, we cast the task of multi-type DDI prediction as multi-relational link prediction, and design an end-to-end Multiple-Type Predictor for Drug-Drug Interactions (MTDDI) to address this task. MTDDI contains an encoder ℱe and a decoder ℱd.
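Before turning to the encoder and decoder, the feature extraction step described above can be illustrated with a minimal rdkit sketch; the two SMILES strings (aspirin and caffeine) are illustrative stand-ins, not drugs from the paper's dataset, and the radius of 4 simply mirrors the value stated in the text.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, MACCSkeys

# Two illustrative molecules (aspirin and caffeine)
mol_a = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
mol_b = Chem.MolFromSmiles("Cn1cnc2c1c(=O)n(C)c(=O)n2C")

# ECFP: circular (Morgan) fingerprint, 1024-bit, with the paper's stated radius of 4
ecfp_a = AllChem.GetMorganFingerprintAsBitVect(mol_a, 4, nBits=1024)
ecfp_b = AllChem.GetMorganFingerprintAsBitVect(mol_b, 4, nBits=1024)

# MACCS keys fingerprint over a fixed set of pre-defined substructure patterns
maccs_a = MACCSkeys.GenMACCSKeys(mol_a)
maccs_b = MACCSkeys.GenMACCSKeys(mol_b)

# Tanimoto coefficients used as the drug-drug similarities
sim_ecfp = DataStructs.TanimotoSimilarity(ecfp_a, ecfp_b)
sim_maccs = DataStructs.TanimotoSimilarity(maccs_a, maccs_b)
```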
Model construction

Based on the above representation of multi-type DDIs, we cast the task of multi-type DDI prediction as multi-relational link prediction and design an end-to-end Multiple-Type Predictor for Drug-Drug Interactions (MTDDI) to address it. MTDDI contains an encoder ℱ_e and a decoder ℱ_d.

Derived from the multi-relation GCN (R-GCN) [27-30], we construct a multi-layer R-GCN in which the encoder ℱ_e extracts a global latent feature matrix Z of size n × d (d ≪ n) by capturing the topological features of all drugs across the sliced sub-networks {G_r}. However, a plain multi-layer GCN suffers from the over-smoothing issue, which makes all the nodes in a network have highly similar feature values. To relax the over-smoothing issue, ℱ_e does not use the output embedding of its final layer alone, but sums the embedding representations (named residuals) of its hidden layers together as its final embedding feature matrix Z. In addition, considering possible missing interactions in the network, ℱ_e utilizes a pre-defined drug similarity matrix to constrain similar drugs to be closer in the embedding space.

Since the original decoder in the plain GCN [30] is just an inner product ZZ^T between drug embedding vectors, it cannot reflect the essence of multi-type interactions. R-GCN instead employs RESCAL [31], which utilizes m additional type-specific feature association matrices {R_r} to capture the essence of multi-type interactions. Inspired by the literature [27,32], we suppose that feature importance varies across interaction types, and we also assume that interaction types are not completely independent of each other. Therefore, our decoder ℱ_d adopts a tensor factorization-like matrix operation that integrates the embedding feature matrix Z, m type-specific feature importance matrices, and an average feature association matrix to reconstruct the multi-type DDI network G. Finally, MTDDI trains ℱ_e and ℱ_d simultaneously to obtain an end-to-end model for multi-type DDI prediction.
Encoder: multi-relation graph convolutional network

We employed the extension of GCN (i.e., R-GCN) to extract the node embeddings in the multi-type DDI network. First, the network is decomposed into m sliced sub-networks {G_1, G_2, …, G_m}, in which each slice accounts for a specific interaction type (Fig. 2-A). Then, both the feature vector h_i^(0) of drug d_i (or node v_i) and those of its neighbors in G_r are aggregated by a graph convolutional operation. After that, similar aggregations across all the sliced sub-networks are summed up to generate the updated feature vector h_i^(1) of drug d_i. Such a single layer of R-GCN integrates the topological neighborhood of drug d_i across the interaction types in which it is involved. For any layer in a multi-layer R-GCN, the general propagation rule can be written as

h_i^(k+1) = σ( Σ_{r∈R} Σ_{j∈N_i^r} (1 / c_{i,r}) W_r^(k) h_j^(k) + W_0^(k) h_i^(k) ),

where c_{i,r} = |N_i^r| is a normalization constant, N_i^r denotes the set of v_i's neighbors in G_r, h_i^(k) is the input feature vector and W_r^(k) is the trainable weight matrix of the k-th layer of the R-GCN, and σ is a non-linear element-wise activation function (i.e., ReLU). Last, the aggregation process is propagated through p layers of R-GCN to obtain the final embedding feature vector h_i^(p) of drug d_i.

Such a multi-layer propagation of R-GCN enables the extraction of higher-order topological features of the multi-type DDI network [33]. However, it usually causes the 'over-smoothing' issue inherited from GCN [33], where the features of neighboring drugs, or even of all drugs in the case of many layers, become extremely similar. As a result, a typical GCN contains only a few hidden layers (e.g., two or fewer) [28-30]. To enhance the representational ability of the network, a residual strategy is adopted to relax the 'over-smoothing' issue for the multi-layer R-GCN. Let the final embedding features output by the encoder ℱ_e be Z. For a p-layer R-GCN, we set

Z = H^(2) + H^(3) + … + H^(p),

where H^(k) is the embedding matrix produced by the k-th hidden layer. Notably, this sum requires that the dimensions of the different layers be the same. Because the first hidden layer performs the dimension reduction of the high-dimensional one-hot input features, the residual sum starts from the second hidden layer.

Moreover, it is anticipated that two interacting drugs are close in the embedding space generated by ℱ_e; thus, possible interactions can be deduced among close drugs according to their embedding features [30]. However, an existing but unlabeled interaction between two drugs may leave them distant in the network, and such missing interactions would aggravate the learning of ℱ_e. Therefore, under the consideration that similar drugs tend to interact in terms of chemical structures [2] or binding proteins [34], pre-defined drug similarities, used as a regularization term s_{i,j} · ‖z_i − z_j‖²₂, are employed to constrain similar drugs to be as close as possible in the embedding space. Refer to the Loss function subsection for details.
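The propagation rule and the residual strategy above can be sketched in a few lines of PyTorch. This is a toy dense-matrix illustration, not the authors' implementation: the random adjacency tensor, layer sizes, and initialization are assumptions made for the example.

```python
# A minimal dense-matrix sketch of the R-GCN propagation rule, assuming PyTorch.
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_rel: int):
        super().__init__()
        # One trainable weight matrix W_r per interaction type, plus a
        # self-loop weight for the node's own features.
        self.rel_weights = nn.Parameter(torch.randn(num_rel, in_dim, out_dim) * 0.01)
        self.self_weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (n, in_dim) node features; adj: (num_rel, n, n) sliced sub-networks.
        out = self.self_weight(h)
        for r in range(adj.shape[0]):
            deg = adj[r].sum(dim=1, keepdim=True).clamp(min=1.0)  # c_{i,r} = |N_i^r|
            out = out + (adj[r] / deg) @ h @ self.rel_weights[r]
        return torch.relu(out)

# Residual strategy: sum the hidden-layer embeddings from the 2nd layer on.
n, m = 100, 11                            # toy sizes (the paper uses n = 2926)
adj = (torch.rand(m, n, n) < 0.02).float()
x = torch.eye(n)                          # one-hot input features
layers = [RGCNLayer(n, 128, m)] + [RGCNLayer(128, 128, m) for _ in range(3)]
h, residuals = x, []
for k, layer in enumerate(layers):
    h = layer(h, adj)
    if k >= 1:                            # skip the dimension-reducing first layer
        residuals.append(h)
Z = torch.stack(residuals).sum(dim=0)     # final embedding feature matrix Z
```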
Decoder

Once the encoder ℱ_e generates the drug embedding features {z_i}, which integrate topological information across interaction types, the decoder ℱ_d employs {z_i} to reconstruct the multi-type DDI network Ĝ. In the case of binary DDI prediction, the inner product z_i z_j^T indicates how likely drug d_i interacts with drug d_j. In order to reflect the difference between interaction types, R-GCN employs z_i R_r z_j^T to calculate the likelihood of a type-specific interaction, where {R_r} are m type-specific association matrices. Inspired by the literature [35], we suppose that feature importance varies across interaction types, and we also assume that interaction types are not completely independent of each other. Therefore, our decoder ℱ_d adopts a tensor factorization-like matrix operation to calculate the type-specific interaction likelihood. Thus, how likely the pair of drug d_i and drug d_j triggers an r-type pharmacological effect can be formally defined by the scoring function

p_ij^r = σ( z_i D_r R D_r z_j^T ),

where z_i and z_j are the 1 × d embedding vectors of drug nodes v_i and v_j, respectively, D_r is a d × d diagonal feature importance matrix concerning type r, R is a d × d feature association matrix shared across interaction types, and σ(·) is the sigmoid function that converts the confidence score of an r-type interaction into a probability value in [0,1].

Loss function

The encoder ℱ_e and the decoder ℱ_d are trained together as an end-to-end model for multi-type DDI prediction. The loss function of MTDDI is composed of two components. The first measures the difference between the original multi-type interaction network G and the reconstructed network Ĝ. The second is a regularization term, which keeps similar drugs as close as possible in the embedding space.

Let y_ij^r be the true label of a triplet (v_i, r, v_j) for the pair of drug d_i and drug d_j in the r-th sliced network G_r, and p_ij^r be the predicted probability of an interaction of type r. For the r-th sliced network G_r, the loss is defined by the binary cross-entropy

L_r = − Σ_(i,j) [ y_ij^r log p_ij^r + (1 − y_ij^r) log(1 − p_ij^r) ].

The positive samples are the interactions in G_r, while the negative samples are randomly sampled among its unlabeled drug pairs; the number of negative samples equals that of positive samples. Over all the sliced networks, the global prediction loss is defined as L_pred = Σ_{r=1}^{m} L_r.

Let S = (s_{i,j}) ∈ [0,1], i, j = 1, 2, …, n, be the drug similarity matrix. The regularization term is defined as

L_reg = Σ_{i,j} s_{i,j} · ‖z_i − z_j‖²₂,

where z_i and z_j are the embedding representations of drug d_i and drug d_j generated by the encoder, respectively. It can be written in an elegant matrix form as

L_reg = 2 tr(Z^T L Z),

where Z is the n × d feature matrix stacked from the embedding vectors, L = D − S is a Laplacian matrix, and D is an n × n diagonal matrix derived from S with elements d_ii = Σ_j s_{i,j}. This regularization term utilizes pre-defined drug similarities to constrain similar drugs to be as close as possible in the embedding space; the idea is similar to that in the literature [35].

Therefore, the final loss of MTDDI is

L = L_pred + α · L_reg,

where α is a hyperparameter adjusting the weight of the similarity constraint in the training phase.
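A compact sketch of the DEDICOM-style scoring function and the combined loss is given below, assuming PyTorch; the tensor shapes follow the notation above, while the toy inputs are placeholders.

```python
# Sketch of the decoder scoring function and the MTDDI loss, assuming PyTorch.
import torch

def score(z_i, z_j, D_r, R):
    """p^r_ij = sigmoid(z_i D_r R D_r z_j^T) for one drug pair and type r."""
    return torch.sigmoid(z_i @ D_r @ R @ D_r @ z_j)

d = 128
z_i, z_j = torch.randn(d), torch.randn(d)
R = torch.randn(d, d) * 0.01             # shared feature association matrix
D_r = torch.diag(torch.rand(d))          # type-specific importance (diagonal)
p = score(z_i, z_j, D_r, R)

def total_loss(preds, labels, Z, S, alpha=0.02):
    """Binary cross-entropy over sampled pairs plus the Laplacian
    similarity regularization tr(Z^T L Z), weighted by alpha."""
    bce = torch.nn.functional.binary_cross_entropy(preds, labels)
    L = torch.diag(S.sum(dim=1)) - S     # Laplacian of the similarity matrix
    reg = torch.trace(Z.T @ L @ Z)
    return bce + alpha * reg

# Toy call with random embeddings, a symmetric similarity matrix, and labels.
Z = torch.randn(20, d)
S = torch.rand(20, 20); S = (S + S.T) / 2
preds, labels = torch.rand(8), torch.randint(0, 2, (8,)).float()
loss = total_loss(preds, labels, Z, S)
```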
Assessment

To measure the performance of MTDDI, the whole DDI dataset is randomly split into a training set, a validation set, and a testing set. The training set is used to train the learning model, and the validation set is used to tune the model to ensure optimal predictive performance. The testing set is used to measure the generalization performance of the model on unlabeled data. In each experiment, we use 75% of the samples of the DDI dataset as the training set, 5% as the validation set, and the remaining 20% as the testing set. The splitting process is repeated many times (e.g., 20 times) with different random seeds, and the average performance over these repetitions is reported as the final performance.

Since our task is a multi-type prediction problem, a group of metrics is used to measure the prediction, including the area under the receiver operating characteristic curve (AUC), the area under the precision-recall curve (AUPR), Accuracy, Recall, Precision, and F1-score. Notably, Recall, Precision, and F1-score each have macro and micro versions. Macro metrics reflect the average performance across different interaction types; for example, Macro Precision is defined as the average of the Precision values of the different interaction types. In contrast, Micro metrics are analogous to the corresponding metrics in binary classification, obtained by summing the numbers of true positive, false positive, true negative, and false negative samples across all interaction types. Their definitions are as follows:

Macro Precision = (1/m) Σ_{i=1}^{m} TP_i / (TP_i + FP_i),  Macro Recall = (1/m) Σ_{i=1}^{m} TP_i / (TP_i + FN_i),  Macro F1 = (1/m) Σ_{i=1}^{m} F1_i,
Micro Precision = Σ_i TP_i / Σ_i (TP_i + FP_i),  Micro Recall = Σ_i TP_i / Σ_i (TP_i + FN_i),  Micro F1 = 2 · Micro Precision · Micro Recall / (Micro Precision + Micro Recall),

where TP_i, TN_i, FP_i, and FN_i represent the number of true positive, true negative, false positive, and false negative samples in the i-type DDI prediction, respectively, F1_i is the F1-score of the i-th type, and m is the number of DDI interaction types. In addition, AP@50 is employed to measure the Macro Precision over the top-50 predicted DDIs of each DDI type on average. For all of the above metrics, greater values indicate better predictions.
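The macro and micro metrics above reduce to a few array operations once per-type confusion counts are available; the following sketch (with toy counts) illustrates them.

```python
# Minimal sketch of the macro- and micro-averaged metrics described above.
import numpy as np

def macro_micro_metrics(tp, fp, fn):
    """tp, fp, fn: per-type true positive, false positive, false negative counts."""
    tp, fp, fn = map(np.asarray, (tp, fp, fn))
    prec_i = tp / (tp + fp)
    rec_i = tp / (tp + fn)
    f1_i = 2 * prec_i * rec_i / (prec_i + rec_i)
    macro = dict(precision=prec_i.mean(), recall=rec_i.mean(), f1=f1_i.mean())
    micro_p = tp.sum() / (tp.sum() + fp.sum())
    micro_r = tp.sum() / (tp.sum() + fn.sum())
    micro = dict(precision=micro_p, recall=micro_r,
                 f1=2 * micro_p * micro_r / (micro_p + micro_r))
    return macro, micro

# Toy counts for 3 interaction types:
macro, micro = macro_micro_metrics(tp=[80, 50, 10], fp=[10, 20, 5], fn=[15, 10, 8])
print(macro, micro)
```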
Results and Discussion

We designed experiments to address the following questions: 1) Does MTDDI improve multi-type DDI classification? 2) Can MTDDI achieve good predictive performance in multi-type DDI prediction? 3) How do the residual strategy and the similarity regularization in the encoder help the prediction? 4) How do the feature importance matrices in the decoder help find the dependency between DDI types?

Parameter settings

To learn a good model for multi-type DDI prediction, we first determined the architecture of the encoder as follows. The one-hot encodings of the 2,926 nodes in the multi-type DDI network were adopted as the input features of MTDDI. The encoder is composed of four hidden layers, in which the number of neurons was determined empirically. Besides, to accommodate the residual strategy, the second, third, and fourth hidden layers contain the same number of neurons. Thus, the numbers of neurons in the input and four hidden layers are 2926, 1024, 128, 128, and 128, respectively.

With this encoder architecture and the tensor factorization-like decoder, we performed a grid search with the Adam optimizer [36] to tune the major parameters of MTDDI, including the epoch, learning rate, batch size, and the hyperparameter for the similarity regularization. The epoch, referring to the number of training iterations, was tuned over the values {5, 10, 20, 30, 40, 50, 60, 70}. The learning rate, which determines whether and when the objective function converges to the optimum, was empirically investigated over {0.0001, 0.001, 0.005, 0.01, 0.05, 0.1}. A mini-batch strategy, sampling a fixed number of drug pairs in each batch, was tuned over {50, 100, 200, 400, 600, 1000, 2000}. The hyperparameter α, adjusting the weight of the similarity constraint, was examined over {0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5}. Finally, we experimentally determined a well-trained MTDDI by setting the epoch to 40, the learning rate to 0.001, the batch size to 400, and the hyperparameter α to 0.02.
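A minimal sketch of the grid search just described is given below; train_and_validate is a hypothetical stand-in for the actual training and validation routine, which is not part of the original code, and the stub returns a random score only so that the sketch runs.

```python
# Sketch of the hyperparameter grid search over the value lists stated above.
import random
from itertools import product

def train_and_validate(epochs, lr, batch_size, alpha):
    # Placeholder for the real "train on the training set, score validation AUPR"
    # routine; replaced by a random stub so the example is self-contained.
    return random.random()

grid = {
    "epochs": [5, 10, 20, 30, 40, 50, 60, 70],
    "lr": [0.0001, 0.001, 0.005, 0.01, 0.05, 0.1],
    "batch_size": [50, 100, 200, 400, 600, 1000, 2000],
    "alpha": [0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5],
}

best_score, best_cfg = -1.0, None
for epochs, lr, bs, alpha in product(*grid.values()):
    score = train_and_validate(epochs, lr, bs, alpha)
    if score > best_score:
        best_score, best_cfg = score, (epochs, lr, bs, alpha)
# The paper reports epochs=40, lr=0.001, batch_size=400, alpha=0.02 as the outcome.
print(best_cfg, best_score)
```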
DDI classification

In order to answer the first question, we compared MTDDI with three state-of-the-art multi-class classification models: DeepDDI [11], Lee's model [22], and DDIMDL [12]. We focus only on these deep learning-based models because they have demonstrated superior performance over regular shallow models. In common, these methods first treat the rows of a drug similarity matrix as the corresponding drug feature vectors, then set the concatenation of two feature vectors as the feature vector of a DDI, and finally train a multi-layer DNN on the feature vectors and DDI types as the classifier. In terms of model architecture, they differ: DeepDDI is a model for a homogeneous interaction feature (i.e., chemical structure), whereas both Lee's model and DDIMDL accommodate heterogeneous DDI features (e.g., pathways, GO terms, and binding proteins). Moreover, to cope with the high dimension of DDI features, they utilized various tricks to enhance their models. DeepDDI [11] employed Principal Component Analysis (PCA) to reduce the feature dimension before training its nine-layer DNN. Lee et al. [22] first utilized three three-layer autoencoders for three sources of raw DDI features, respectively, and then concatenated the three sources of dimension-reduced features as the training features of an eight-layer DNN. DDIMDL [12] trained four three-layer DNNs for four sources of DDI features, respectively, and averaged the individual predictions of the trained DNNs as the final prediction. These methods are designed for the classification of multi-type DDIs, determining the pharmacological effect type of a given DDI, while our MTDDI goes beyond this task with the direct discrimination of whether an unknown drug pair results in one or more pharmacological effects of interest. Thus, MTDDI was adapted to the multi-class classification task. In detail, all DDIs were divided into training samples and test samples; for each type, the DDIs belonging to this type were considered positive samples, and the DDIs not belonging to it were considered negative samples. We implemented DeepDDI, Lee's model, and DDIMDL with their published source codes and default parameters. The comparison results are reported in Table 1.

However, these classifiers are not able to handle predicting multi-fold interactions. In this sense, we ran a multi-type DDI prediction to demonstrate the predictive performance of MTDDI. The results in Table 2 show that MTDDI can effectively predict both single-fold and multi-fold DDIs.

Table 2. Performance of MTDDI for multi-type DDI prediction on both single-fold and multi-fold DDIs

To further verify the ability of MTDDI to predict new DDIs and their interaction types, this inspiring prediction impelled us to perform a transductive inference of potential DDIs among all drug pairs and their interaction types. Such an inference validates the performance of MTDDI in practice. To accomplish this task, we first used the whole dataset of known DDIs to train MTDDI, then employed the trained MTDDI to infer how likely unlabeled drug pairs trigger specific pharmacological effects among the 11 interaction types. After that, we ranked these unlabeled drug pairs in each interaction type according to their type-specific prediction scores. Finally, we picked the top-20 type-specific candidates in each interaction type and validated them against both the latest version of DrugBank (version 5.1.8, January 18, 2021) and the online Drug Interaction Checker tool (Drugs.com). The validation was performed for single-fold interactions and multi-fold interactions, respectively. In the prediction results of single-fold interactions, 40 out of 220 predicted DDI candidates (18.2%) were confirmed. The average rank of the 40 verified DDIs is 7.75, indicating that MTDDI can effectively detect potential DDIs as well as different types of DDIs. The detailed results can be found in Table S1 of the supplementary file. We further picked several validated DDI candidates (Cases 31, 34, 15, and 16) to show how DDI prediction contributes to synergistic drug combination and drug contraindication. For example, when Pregabalin and Benmoxin are combined, the therapeutic efficacy of Benmoxin can be increased (Case 31). In addition, the therapeutic efficacy of Mebanazine can be increased when it is used in combination with Pregabalin (Case 34). In contrast, the risk or severity of QTc prolongation can be increased when Quinidine is combined with Promethazine (Case 15). Besides, the risk or severity of serotonin syndrome can be increased when Linezolid is combined with Ergotamine (Case 16). These results show that MTDDI can provide a preliminary screening for synergistic drug combinations and drug contraindications.

In the prediction results of multi-fold interactions, 17 out of 50 two-fold predicted candidates and 8 out of 60 three-fold predicted candidates were confirmed, respectively. The detailed results are listed in Table S2 of the supplementary file. As an illustration, we picked a two-fold interaction case (Case 8) and a three-fold interaction case (Case 18) to show how MTDDI contributes to finding multi-fold interaction cases. For the two-fold interaction, DrugBank states "Acebutolol may increase the arrhythmogenic activities of Digoxin.", while the DDI Checker states "Using Acebutolol together with Digoxin may slow your heart rate and lead to increased side effects." (Case 8). Both statements show that the pair of Digoxin and Acebutolol triggers a PK antagonistic activity and further results in a PD adverse effect. For the three-fold interaction, two statements were similarly found, containing three pharmacological effects: "Voriconazole may increase the blood levels and effects of Trazodone" and "The risk or severity of QTc prolongation can be increased when Trazodone is combined with Voriconazole" (Case 18). The pair of Voriconazole and Trazodone increases both the PK serum concentration and the PD synergy of Trazodone, and also increases the risk of adverse effects. In total, these newly predicted multi-type DDIs demonstrate the potential of MTDDI in practice.

Influence of hidden layers, residual strategy and similarity regularization in encoder

In this section, we investigated how three factors in the encoder (the number of hidden layers, the similarity regularization, and the residual strategy) affect the performance of MTDDI. First, after removing the similarity regularization and the residual strategy from MTDDI, we built two variants, namely MTDDI with 2 hidden layers (denoted as MTDDI-2) and with 4 hidden layers (denoted as MTDDI-4), as well as a variant with 4 hidden layers and the residual strategy (denoted as MTDDI-4-R). The results are shown in Table 3, from which we can draw the following three crucial points. (1) MTDDI-4 is worse than MTDDI-2 on all measured metrics. Obviously, increasing the number of hidden layers decreases the predictive performance because of the "over-smoothing" issue inherited from GCN. (2) Compared with MTDDI-2 and MTDDI-4, MTDDI-4-R, owing to the residual strategy, achieves a significant improvement. Thus, the residual strategy can relax the "over-smoothing" issue in the case of a deeper GCN architecture. (3) Compared with these variants, the full architecture of MTDDI, having the additional similarity regularization, further improves the prediction. Thus, the similarity regularization helps constrain similar drugs to be as close as possible in the embedding space, coping with the issue that a missing interaction label between similar drugs causes their remoteness in the network. In summary, with the help of the residual strategy, MTDDI can accommodate a deep GCN architecture (e.g., containing more than 2 layers). Also, its similarity regularization further helps capture missing interactions.

Table 3. Performance of the similarity constraint and residual strategy in MTDDI

Influence of different implementations of decoder

Since the decoder in MTDDI is loosely coupled with the encoder, we can adopt various decoder models. In this section, we compared three implementations of the decoder: the inner product in the traditional GCN, the type-specific association in R-GCN, and our type-specific importance association z_i D_r R D_r z_j^T. According to their original algorithms, these three implementations are denoted as InnerProd, RESCAL, and DEDICOM, respectively (see the Decoder subsection for details). The comparison results in Table 4 show that InnerProd is the worst and DEDICOM is the best. A potential reason why DEDICOM significantly outperforms the other two models is as follows. The inner product only indicates how likely drug d_i interacts with drug d_j, but it cannot model interaction types. In contrast, RESCAL reflects the difference between interaction types and models the likelihood of a type-specific interaction by m additional type-specific feature association matrices {R_r}. Compared with RESCAL, to indicate how likely the pair of drug d_i and drug d_j triggers an r-type pharmacological effect, DEDICOM employs a global feature association matrix R as well as m additional type-specific diagonal matrices {D_r}, which reflects that feature importance varies across interaction types.

To further validate whether the feature importance matrices capture the dependency between DDI types, we calculated the pairwise correlations among the matrices {D_r}. First, we calculated their correlations using the diagonal vectors of the matrices {D_r}, since these matrices are diagonal (Figure 3). Then, we categorized the DDI types into a pharmacokinetic group (PK) and a pharmacodynamic group (PD) in terms of their pharmacological behaviors; the PK group contains the first 7 types, while the remaining types belong to the PD group. After that, we calculated the average values of the absolute correlations within PK and within PD (denoted as C_PK and C_PD) and the average value of the absolute correlations between PK and PD (denoted as C_B), respectively. The results reveal that C_PK (0.264) is significantly greater than C_PD (0.086), and C_B (0.344) is the greatest. Similarly, we calculated the average values (in Figure 3) of the absolute correlations involving the individual DDI types and found that Absorption has the maximum (0.301) and Antagonism Effect the minimum (0.074).
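The correlation analysis above can be reproduced schematically as follows; the random diagonal vectors stand in for the learned matrices {D_r}, so the printed values are illustrative only.

```python
# Sketch of the pairwise correlation analysis between the diagonal vectors
# of the type-specific importance matrices, and the PK/PD group averages.
import numpy as np

m, d = 11, 128
diag_vectors = np.random.rand(m, d)        # stand-in for diag(D_1)...diag(D_m)
corr = np.corrcoef(diag_vectors)           # (m, m) correlation matrix

pk = np.arange(0, 7)                       # first 7 types: pharmacokinetic group
pd_ = np.arange(7, 11)                     # remaining types: pharmacodynamic group

def mean_abs_corr(rows, cols, same_group):
    block = np.abs(corr[np.ix_(rows, cols)])
    if same_group:                         # exclude the diagonal (self-correlation)
        n = len(rows)
        return (block.sum() - n) / (n * (n - 1))
    return block.mean()

C_PK = mean_abs_corr(pk, pk, True)
C_PD = mean_abs_corr(pd_, pd_, True)
C_B = mean_abs_corr(pk, pd_, False)
print(f"C_PK={C_PK:.3f}, C_PD={C_PD:.3f}, C_B={C_B:.3f}")
```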
Moreover, we enumerated the correlations between individual DDI types. For example, Absorption is significantly correlated with Serum Concentration (ρ = −0.55) and Toxicity Activity (ρ = −0.45), respectively; Synergy Activity is significantly correlated with Synergy Effect (ρ = −0.53); Antagonism Effect is independent of Synergy Effect (ρ = −0.005). All the p-values of the correlation entries are significantly less than 0.0001. Overall, the results in Figure 3 demonstrate that DDI types are not independent of each other, and some of them show significant correlations. Thus, the feature importance matrices in the decoder can capture the dependency relations of DDI types to some extent, and they may contribute to uncovering the forming mechanism of DDIs as well as finding potential synergistic drug combinations with the aid of additional medical knowledge.

Figure 3. Heat map of correlation analysis for different DDI types.

Figure 2. Overall framework of MTDDI and multi-type DDI statistics. (A) Decomposition of the multi-type DDI network. The multi-type DDI network is decomposed into m (i.e., the number of types) sliced sub-networks, which are represented by m adjacency matrices and are taken as the input of the encoder. (B) The encoder of MTDDI. It constructs a p-layer multi-relation GCN (R-GCN) to encode the drugs in the multi-type DDI network into embedding vectors (i.e., rows in the colorful matrix) by capturing their complex topological properties. A residual strategy (i.e., the black arrow) is added from the second hidden layer to the last hidden layer. Meanwhile, a drug similarity matrix is employed to constrain similar drugs to be as close as possible in the embedding space (i.e., the purple matrix). (C) The decoder of MTDDI. It is a tensor factorization-like matrix operation, which integrates the embedding feature matrix, the type-specific feature importance matrices {D_r}, and an average feature association matrix to reconstruct the multi-type DDI network. (D) An example illustrating one layer of R-GCN in the encoder. A node of interest (i.e., the blue node) aggregates both the features of its first-order neighbor nodes (i.e., orange) and its own in each of the m sliced networks to update its features (i.e., the green bar). Then, all the updated features are accumulated and passed through a ReLU activation function to produce its final embedding (i.e., the colorful vector). The whole multi-type DDI network is propagated by a p-layer R-GCN to capture the information of its p-th order neighbors. (E) Statistics on the different pharmacological effects caused by DDIs. From left to right, the interaction types are: Absorption, Metabolism, Serum Concentration, Excretion, Synergy Activity, Antagonism Activity, Toxicity Activity, Adverse Effect, Antagonism Effect, Synergy Effect, and PD triggered by PK; the Y-axis indicates their occurrence numbers. (F) Proportional distribution of the number of single-fold and multi-fold DDIs: 79.6% of DDIs are single-fold, 19.36% are two-fold, and 1.04% are three-fold.
Table S2. Investigation of novel multi-fold DDIs predicted by MTDDI. Representative entries: "Acebutolol may increase the arrhythmogenic activities of Digoxin." (DrugBank); "Using acebutolol together with digoxin may slow your heart rate and lead to increased side effects." (Drugs.com); "Sufentanil may have additive effects in lowering your blood pressure. You may experience headache, dizziness, lightheadedness, fainting, and/or changes in pulse or heart rate."; "The metabolism of Fluvastatin can be decreased when combined with Glyburide." (DrugBank).
Multilevel Regulation of Membrane Proteins in Response to Metal and Metalloid Stress: A Lesson from Yeast

In the face of flourishing industrialization and global trade, heavy metal and metalloid contamination of the environment is a growing concern throughout the world. The widespread presence of highly toxic compounds of arsenic, antimony, and cadmium in nature poses a particular threat to human health. Prolonged exposure to these toxins has been associated with severe human diseases, including cancer, diabetes, and neurodegenerative disorders. These toxins are known to induce analogous cellular stresses, such as DNA damage, disturbance of redox homeostasis, and proteotoxicity. To overcome these threats and improve or devise treatment methods, it is crucial to understand the mechanisms of cellular detoxification under metal and metalloid stress. Membrane proteins are key cellular components involved in the uptake, vacuolar/lysosomal sequestration, and efflux of these compounds; thus, deciphering the multilevel regulation of these proteins is of the utmost importance. In this review, we summarize data on the mechanisms of arsenic, antimony, and cadmium detoxification in the context of the membrane proteome. We used the yeast Saccharomyces cerevisiae as a eukaryotic model to elucidate the complex mechanisms of the production, regulation, and degradation of selected membrane transporters under metal(loid)-induced stress conditions. Additionally, we present data on orthologous membrane proteins involved in metal(loid)-associated diseases in humans.

Introduction

Heavy metals and metalloids are ubiquitous in environmental compartments such as the Earth's crust, water, air, and sediments. In nature, they can be found at volcanic sites, in geothermally active areas, and in natural deposits [1-4]. However, contamination with these elements also originates from anthropogenic sources, such as the metallurgic industry, mining, fossil fuel extraction, and global transport [5-7] (Figure 1). They have also found use in various industrial applications, such as the production of batteries, alloys, and ceramics, as well as potential drugs in therapies against human diseases [7-9]. Nevertheless, prolonged exposure to toxic elements such as arsenic, antimony, and cadmium has repeatedly been shown to cause a multitude of serious human diseases and disorders (Figure 1).
The compounds of highly toxic elements such as cadmium, arsenic, and antimony share some common mechanisms of cellular toxicity. For instance, they can interact with various cellular macromolecules, such as DNA, proteins, and membranes. As a result, they induce serious cellular disturbances, including DNA damage [10-12], oxidative stress [13-15], and proteotoxicity [16,17]. To cope with these challenges, in the course of evolution, cells have developed distinct mechanisms of detoxification and adaptation involving the regulation of specific transporters and enzymes involved in the uptake, sequestration, metabolism, or export of these toxic compounds. Cells need to adjust their membrane protein composition, as membranes function as a contact site between the extracellular and intracellular environments (plasma membrane, PM), a synthesis hub for newly produced membrane proteins (endoplasmic reticulum, ER), or a sequestration site of toxic compounds (vacuolar membrane, VM) (Figure 2). The production, activity, and stability of membrane proteins are tightly regulated at multiple levels, including transcription (through the induction of specific activators and repressors as well as translational machinery), but also at the protein level by various post-translational modifications (e.g., phosphorylation and ubiquitylation) (Figure 2). These modifications can affect the localization, conformation, interactions, and degradation rate of membrane proteins, thus modulating their function and efficiency in the uptake, metabolism, and efflux of toxic compounds.

Figure 2. The main pathways and mechanisms of multilevel regulation of membrane proteins at distinct cellular compartments. ER-endoplasmic reticulum; VM-vacuolar membrane; PM-plasma membrane; Pi-phosphate group; Ub-ubiquitin; GSH-reduced glutathione; DXE-ER exit signal. Membrane transporters are depicted as colored rounded rectangles: vacuolar transporters in grayscale, cadmium plasma membrane transporters in green shades, arsenic and antimony plasma membrane transporters in reddish/pink shades, and the bidirectional Fps1 channel in blue. The presented regulation steps involve expression changes, protein production, sorting, and trafficking, as well as post-translational modifications, ubiquitylation, and vacuolar degradation. Black arrows indicate metal(loid) form changes and transport, and dashed gray arrows indicate interactions or regulatory aspects of
factors impacting the activity of the selected transporters. Thick green and red arrows represent the up- and downregulation of protein level/function. A more elaborate explanation of all the depicted regulation pathways can be found in the text.

Considering these facts, knowledge of the precise control of transporters and of the proteins regulating them at all stages - from transcription control through post-translational modifications to degradation - is of paramount importance for devising and improving future therapies against metal(loid)-associated diseases and metal(loid)-based drugs. In this work, we summarize the knowledge regarding the molecular mechanisms responsible for the regulation of membrane proteins involved in the detoxification of arsenic, antimony, and cadmium, as well as the importance of these proteins for cell functioning and, consequently, the impact of disorders of these processes and proteins on human health. We focus on the data obtained in studies carried out using the yeast Saccharomyces cerevisiae, which is a powerful model organism for inquiring into the molecular basis of the response to cellular stress. Yeast shares many basic cellular processes, mechanisms, and proteins with humans [18]. Additionally, the mechanisms of metal and metalloid detoxification in yeast are relatively well described in comparison to other eukaryotic organisms, including humans. Yeast cells are also somewhat resistant to toxic agents, surviving comparably deadly dosages of arsenic or antimony in the millimolar range [10,19]. These dissimilarities in sensitivity may result from various differences, including the presence of the cell wall and simpler metabolic requirements, as well as additional systems, such as specific transporters, that are not present in mammalian cells. Therefore, this organism is a useful tool for investigating the processes occurring in the cellular stress response, as well as their further implications for human health. Thus, the intention of this review is to highlight and recapitulate the available literature on several selected yet crucial yeast transporters of arsenic, antimony, and cadmium and on the protein factors influencing their expression, post-translational regulation, and degradation. Additionally, it links disorders of the transporter orthologues with human diseases.
The Toxicity of Arsenic Compounds

Arsenic (As) naturally occurs in four oxidation states: −3, 0, +3, and +5. The compounds in the +3 and +5 oxidation states constitute the vast majority of arsenic forms in the environment and in living organisms. The forms in the +3 oxidation state [As(III)] prevail in oxygen-free conditions with reducing characteristics (such as rocks or river mud), while the forms in the +5 oxidation state [As(V)] dominate oxygen-rich environments, such as surface waters or soil [20]. In general, inorganic trivalent arsenic species are more toxic than inorganic pentavalent ones. At the same time, organoarsenic compounds are considered even more potent toxins, and aromatic arsenicals are considered the most toxic arsenic species [21]. Inorganic arsenic compounds can be biotransformed into organic forms in cells by forming complexes with glutathione or by methylation [3,22]. Moreover, they can bind biologically active molecules and form arsenic derivatives (e.g., arsenocholine, arsenobetaine, arsenolipids), which can be integrated into distinct macromolecules or compete with crucial substrates in metabolic pathways, disrupting or completely blocking their functioning [22,23]. For instance, As(V) structurally resembles the phosphate ion (PO4 3−); thus, it can inhibit enzymes that involve phosphate, such as glyceraldehyde-3-phosphate dehydrogenase [24].

Arsenic is known to induce a variety of cellular stresses. The high affinity of trivalent arsenic species for sulfhydryl groups allows their binding to cysteine residues in proteins, which may result in protein misfolding or the inhibition of enzymatic activity [16,25]. Because thiol binding is stronger at sites with adjacent cysteine residues, this results in particular in the displacement of zinc ion cofactors from zinc finger domains, mainly of the C3H1 and C4 zinc finger types [26-28]. Therefore, many crucial DNA-binding proteins, including members of the DNA repair systems, may undergo direct inhibition of function upon arsenic exposure, especially under conditions of zinc deficiency [29,30]. Moreover, arsenicals have been found to negatively affect the cytoskeleton [31] and to strongly induce oxidative stress [31,32]. Although the genotoxic activity of arsenic was initially thought to depend on ROS-induced DNA damage, Litwin et al. demonstrated that arsenic also induces oxidative stress- and transcription-independent DNA breakage [33]. Inorganic as well as organic methylated arsenic forms have also been found to affect growth signaling in epidermal cells, which may further increase the risk of skin cancer development [34,35].

The Toxicity of Antimony Compounds

Similarly to arsenic, antimony (Sb) is a metalloid that belongs to group 15 of the periodic table and therefore presents the same range of oxidation states, from −3 to +5. Analogously to arsenic, the most common form of antimony in oxic environments is +5, while in anoxic environments it is +3 [36]. It can also form organoantimonials (e.g., methylated species), which are considered the least toxic, while antimonites (Sb(III)) are the most toxic [3].
Although the toxic properties of antimony are not fully understood, there is evidence that antimonials affect cells similarly to arsenic species (Figure 1). It has already been demonstrated that antimony can bind proteins [17] and induce oxidative stress [14], as well as direct and indirect DNA damage [11]. Like arsenic, it is similarly potent in altering signaling pathways in epidermal cells [37,38]. These observations indeed confirm the cytotoxic properties of antimony species; nevertheless, further studies are required to fully uncover the mechanisms of antimony toxicity.

The Toxicity of Cadmium Compounds

Cadmium (Cd) is a heavy metal that naturally occurs in the Earth's crust and atmosphere; however, similarly to the metalloids arsenic and antimony, no biological function of cadmium in higher organisms has been found [39], while its toxic traits are widely known [40]. Cadmium contamination comes from the burning of fossil fuels and from mining [41]. This metal serves as a model toxicant for triggering cellular stress responses specific to heavy metals. It is known to induce heat-shock and oxidative stress responses, e.g., inducing heat-shock-related proteins like HSP70 or heme oxygenase (HMOX1) [42,43]. While most of cadmium's biological effects are related to its ability to alter the cellular redox state, some effects may be due to structural similarities between cadmium and calcium or zinc [1].

Cadmium-induced cellular oxidative stress is likely caused by the disruption of redox homeostasis related to the mishandling of redox-active metals. This leads to lipid and protein oxidation and oxidative DNA damage. Cd has only one oxidation state (Cd2+); therefore, it cannot directly generate free radicals. However, it has been reported that cadmium can cause the indirect generation of various radicals, including the superoxide radical, the hydroxyl radical, and nitric oxide [44,45]. In addition to its direct effects on cellular redox balance, cadmium competes with calcium for ion channels in cell membranes and for calcium-regulated proteins inside cells due to its molecular mimicry of calcium ions. Cadmium not only competes with calcium channels for entry but also stimulates them, leading to an excessive accumulation of calcium in different cellular compartments [46]. As a result, it may induce apoptotic cell death [46,47].

Moreover, studies indicate that many structural components of ribosomes, translation initiation factors, and elongation factors are downregulated by Cd(II) treatment. However, some proteins involved in ubiquitin-dependent protein metabolism and 19S proteasome regulatory subunits are upregulated. These results suggest that the regulation of translation and protein degradation is also a crucial part of the cellular response to cadmium stress [48]. Furthermore, cadmium can alter the structure of membrane proteins; it has the most significant effect on newly formed, partially folded membrane proteins that are prone to misfolding and aggregation [1,49].
Transcriptional Regulation of Membrane Protein Genes in Response to Metal and Metalloid Stress

Cell survival requires rapid adaptation to constant environmental changes. Although stress conditions impose rapid changes in the existing protein composition, cells also have to tightly regulate their transcription profiles (for a summary of the transporters and their regulators described below, see Table 1). In the presence of toxic compounds, it is crucial to simultaneously block the transcription of genes coding for undesirable proteins (e.g., transporters involved in the uptake of the stressor) and upregulate those essential for survival (e.g., enzymes involved in the metabolism of the toxin) (Figure 2).

The key cellular components controlling the selective binding of RNA polymerases to DNA are transcription factors (TFs) [50]. Defined groups of TFs cooperate in response to given conditions, acting as activators or repressors for various groups of genes. In yeast, the response to heavy metal and metalloid stress involves the activity of two major groups of TFs: the representatives of the yeast activator protein (YAP) family and the activator proteins involved in pleiotropic drug resistance (PDR) [51,52].

The proteins of the YAP family are transcription activators resembling the activation protein (AP)-1 factors in humans [53]. They are stress response-related transcription factors of the JUN, FOS, MAF, and ATF protein families [54]. The YAP proteins are basic leucine-zipper (bZIP) domain-containing transcription factors, which bind specific DNA sequence motifs in the form of homo- or heterodimers. Moreover, all representatives of the family involved in metal/metalloid detoxification, including Yap1p, Yap2p, and Yap8p, harbor two unique Cysteine-Rich Domains (n-CRD and c-CRD), which contain several evolutionarily conserved cysteine residues essential for the function of these proteins [55,56]. Together, these TFs condition the expression of a multitude of membrane proteins involved in arsenic, antimony, and cadmium detoxification.

Transcription of Genes Coding for Membrane Proteins Related to Arsenic and Antimony Stress

Arsenic and antimony compounds are potent toxins that heavily affect cellular transcription profiles. For instance, arsenic stress has been demonstrated to negatively regulate the Sfp1 transcription activator involved in the transcription of ribosomal genes [88]. On the other hand, it stimulates the general stress-responsive transcription activators Msn2 and Msn4 [88]. Other transcription regulators that are significantly upregulated under arsenic stress include Rpn4p, Fhl1p, Yap1p, Yap2p, Pre1p, Hsf1p, and Met31p [89]. There are few data concerning the differences in the regulation of transcription between arsenic and antimony in yeast. The available transcriptomic data mostly come from studies on the transcription profiles of the Leishmania parasites responsible for leishmaniasis in humans, which is treated mostly with antimony-based drugs [90]. Indeed, it has been demonstrated that antimony-resistant Leishmania strains display significant changes in their transcriptomic profiles in comparison to control strains, including pronounced changes in the expression of multiple membrane protein-coding genes [91].
In yeast, the key membrane protein involved in the detoxification of arsenicals is the PM transporter Acr3 (Figure 2). It is a member of the bile/arsenite/riboflavin transporter (BART) superfamily and a founding member of the arsenic compound resistance (Acr)-3 family, which is ubiquitously present in prokaryotes, fungi, and plants [92-94]. Acr3p acts as an antiporter, which utilizes the proton gradient generated by the cell membrane H+-ATPase to extrude As(III) out of the cell (Figure 2) [95]. It also cooperates with the ACR2 gene, which encodes an arsenate reductase that catalyzes the reduction of As(V) to As(III) [96]. Acr3p is predominantly regulated at the transcription level. Under normal conditions, the transcription of the ACR3 gene is turned off; however, exposure to arsenic has been demonstrated to strikingly induce ACR3 transcription [95]. Importantly, the ACR3 and ACR2 genes are localized close together on yeast chromosome XVI and share the same bidirectional promoter sequence [58,97]. The transcription of both ACR3 and ACR2 depends on a single TF, Yap8p [58]. It has been demonstrated that Yap8 activation depends on the Hog1 kinase, which is a homolog of the mammalian mitogen-activated protein kinase (MAPK) p38 [98]. Consistently, the deletion of the HOG1 gene results in reduced transcription of the ACR3 gene [99]. Recently, a zinc finger domain-containing protein, Etp1, has been found to affect the transcription of ACR3 [100]. Although Etp1 has been found to interact with Yap8p, the Etp1-dependent regulation of ACR3 has been demonstrated to occur independently of Yap8p [100]. Strikingly, Yap8p has also been observed to be post-translationally regulated by arsenic itself. In the absence of arsenic, the protein is continuously degraded by ubiquitin-dependent proteolysis [101]. However, Yap8p directly binds As(III) independently of any other yeast protein and is stabilized as a result, thus acting as an arsenic sensor [102].

The aquaglyceroporin Fps1 of the major intrinsic protein (MIP) family is a membrane protein crucial for arsenic and antimony transport across the PM (Figure 2) [103]. Chromatin immunoprecipitation (ChIP) data indicate that under normal conditions, the transcription rate of the FPS1 gene may be influenced by the transcription regulators Fhl1p, Gcn5p, Med2p, and Stp1p [60,76]. Arsenic exposure, however, causes a rapid inhibition of FPS1 transcription [19]. Moreover, it has been proposed that the uORF present in the 5′ UTR of the FPS1 transcript may limit its translation rate, preventing the deleterious effects of unregulated FPS1 expression under normal conditions [104].
As trivalent arsenic can adopt a ring-like structure mimicking glucose, the hexose transporter (HXT) family of PM permeases has been found to serve as another uptake pathway for arsenicals in yeast (Figure 2) [105]. Similarly, homologous transporters of the glucose transporter (GLUT) family in humans have been demonstrated to transport As(III) as well [106]. Under normal conditions, the transcription of the yeast HXT genes is regulated by glucose availability. It has been proposed that in the absence of glucose, the HXT1-3 genes coding for low-affinity hexose transporters are repressed by the Rgt1 transcription factor, while the HXT6-7 genes coding for high-affinity glucose transporters remain unrepressed. Following an increase in glucose availability, Rgt1p dissociates from the HXT gene promoters, also allowing the transcription of the low-affinity glucose transporters. High levels of glucose, on the other hand, inhibit the transcription of the HXT2 and HXT6-7 genes through the Mig1-dependent repression mechanism. Under these conditions, the transcription of HXT3 remains active, while the transcription of HXT1 is induced by Rgt1p, which acts as a transcription activator in this context [63]. Arsenic treatment seems not to affect the transcription of at least several HXT genes, as the mRNA levels of HXT2 and HXT6-7 change insignificantly in the presence of arsenic [107]. As the protein levels of these transporters rapidly decrease under these conditions, the HXT transporters seem to be regulated at the protein level instead [107].

Due to its structure resembling inorganic phosphate, pentavalent arsenic enters cells through several phosphate (PHO) transporters (Figure 2). The main PHO transporter involved in this process is the high-affinity transporter Pho84p [70,108,109]. Normally, the transcription of PHO84 is regulated by the basic helix-loop-helix (bHLH) TF Pho4, which belongs to the myc family [110]. Moreover, it has been proposed that the transcription of PHO84 is also coupled with nutrient-sensing signaling pathways [111]. Additionally, the expression of PHO84 was suggested to be repressed by antisense transcription of the gene [112]. Recently, however, it has been proposed that the 3′UTR region of the transcript, rather than the antisense RNA, is responsible for the downregulation of the transporter [112,113].

Arsenate treatment has also been demonstrated to affect the high-affinity iron permease Ftr1 and the ferroxidase Fet3 (Figure 2) at both the transcriptional and protein levels. Interestingly, it does so in an opposing manner. Although As(V) induces the degradation of Ftr1p/Fet3p, it also strongly stimulates the Aft1-2 (Activator of Ferrous Transport 1 and 2) TFs, activators of the Fe regulon. The high expression of FTR1 and FET3 has been proposed to be a cellular response to iron deficiency and does not translate into high protein levels. The obtained data indicate that the FTR1 and FET3 mRNAs are rapidly degraded by Xrn1p, a nuclease involved in mRNA decay. As the strain devoid of the FTR1 and FET3 genes displays a high-tolerance phenotype toward arsenate, this high-affinity uptake system has been speculated to participate in arsenate uptake [114], and the observed phenomena may represent a cellular strategy to reduce arsenate toxicity.
Given the structural and biochemical similarity between trivalent arsenic and antimony, most of the membrane proteins associated with the antimony response overlap with the arsenic-related ones. For instance, Sb(III) treatment has been shown to stimulate the transcription of ACR3 in yeast, although this phenomenon is much less pronounced than in the case of arsenic treatment [115]. The main TF involved in the antimony response in yeast is Yap1p. Comprehensive microarray studies of strains overexpressing Yap1p indicated 17 genes with at least three-fold increased expression [53]. Yap1p regulates the transcription rate of three important yeast ABC transporters, Ycf1p, Snq2p, and Pdr5p [83], as well as two transporters of the MFS family, Atr1p and Flr1p [116], and various genes involved in the biosynthesis of thioredoxin and glutathione [117]. The most important antimony resistance-related transporter gene regulated by Yap1p is YCF1, which encodes a vacuolar C-type ABC (ABCC) transporter homologous to the human MRP and CFTR transporters [118,119]. Ycf1p actively transports glutathione-antimony conjugates (Sb(GS)3) into the vacuole, strongly contributing to yeast tolerance to antimony (Figure 2) [19,58,118,120-122]. The regulation of YCF1 expression is not well understood. For instance, YCF1 is transcribed under normal as well as stress conditions. The ycf1∆ strain displays a hypersensitivity phenotype toward antimony, and overexpression of Yap1p has been found to promote the transcription of the YCF1 gene. However, metalloid treatment seems not to affect the expression of YCF1; thus, the complex mechanism of the regulation of YCF1 transcription remains elusive [58,123,124].

Transcription of Genes Coding for Membrane Proteins Related to Cadmium Stress

Similarly to arsenic and antimony, exposure to cadmium significantly affects the transcriptomic profile of yeast. A high-throughput RNAseq analysis revealed that plasma membrane protein-encoding cadmium-responsive genes were strongly affected by cadmium treatment [125]. Another RNAseq analysis revealed that the Gpp2, Tec1, and Sfg1 TFs, as well as the PM transporters Hxt5, Yct1, and Ptr2, might be regulated by the Hog1p signaling pathway in response to cadmium treatment [125,126]. Moreover, it was proposed that the TFs Hot1, Msn2, and Msn4 might negatively regulate the expression of the PM cysteine permease gene YCT1 [126]. One of the ABC transporters involved in the detoxification of cadmium, Vmr1p (Figure 2), is also regulated at the transcriptional level by Msn2p and Msn4p, as well as by the starvation-responsive Gcn4p [87].
The YAP family members are also important regulators of the cell response to cadmium stress. Yap1p regulates the transcription of the gene coding for the Ycf1 transporter, which is responsible for the vacuolar sequestration not only of Sb(GS)3 but also of Cd(GS)2 and As(GS)3 [127,128]. The Yap2 TF, otherwise known as Cad1p (cadmium resistance 1), shares the highest homology with Yap1p. However, regardless of the similarities in their function and their overlapping gene targets, Yap1p and Yap2p are not redundant TFs [129]. The overexpression of Yap2p has been found to confer resistance to cadmium [129]. This corresponds well with the fact that Yap2p can directly interact with cadmium, which stimulates the transactivating potential of the factor [130]. It has also been demonstrated that Yap2p is regulated by the MAPKAP (MAPK-activated protein) kinase Rck1, which negatively regulates the protein's half-life and nuclear accumulation [131]. Together, Yap1p and Yap2p cooperate in the cellular response to cadmium, providing the expression of genes necessary for survival under cadmium and cadmium-related oxidative stress [130].

The PM ABCC transporter Yor1 (Figure 2), which is essential for the cellular response to cadmium in yeast, is related to the MRP/CFTR group of ABC transporters, such as the human MRP1 and yeast Ycf1 proteins [132]. It is a PM glutathione-conjugate transporter facilitating the export of Cd(GS)2 out of the cell [132,133]. The expression of Yor1 is regulated mostly by the PDR factors Pdr1-3 [134-137].

The P-type ATPase Pca1 is another crucial cadmium exporter (Figure 2) [138,139]. The available data indicate that the transcription of PCA1 may be influenced by several factors, including Spt10p [78], Msn2p [76], and Gcn4p [79]. However, post-translational stabilization, rather than increased transcription, was found to be the key mechanism regulating the activity of the protein [80].

The PM high-affinity zinc importer Zrt1, on the other hand, has been found to facilitate cadmium uptake in yeast (Figure 2). The transcription of ZRT1 is controlled by the Zap1 transcription factor in response to changing intracellular zinc levels [140]. Zap1p binds to the zinc-responsive elements (ZREs) in the ZRT1 promoter in response to low zinc availability [85]. Although the mechanism regulating the expression of ZRT1 under cadmium stress is ambiguous, it is known to be repressed by the negative regulators Mot3p and Rox1p in response to osmotic stress [141].

General Effects of Metal and Metalloid Exposure on the Regulation of Membrane Proteins

Apart from the transcriptional regulatory alterations previously discussed, proper post-translational modifications are frequently essential for the proper functioning of a particular transporter, from its synthesis and sorting to its ultimate removal from the membrane. A growing amount of research has demonstrated that functional changes in cells, which go beyond variations in protein abundance, are linked to modifications in the state and structure of key protein regulatory factors as well as of the transport proteins themselves. These changes arise from disruptions in their regulatory or post-translational processing.
The most common examples of regulatory processes are phosphorylation-induced changes and their consequences for the cellular proteome, which occur in all organisms, from yeasts to humans [142,143]. Due to the complexity of these processes, many of them, as well as their molecular basis in yeast, have not yet been fully described and require additional study. For example, by comparing the yeast transcriptome, proteome, and phosphoproteome, and by examining phosphorylation states, it was possible to decipher how genetic effects alter signaling networks. It was shown that phosphorylation properties relate more closely to cell physiological parameters, such as chemical resistance or cell morphology, than transcript or protein abundance does [144]. Further evidence came from the description of several specific phosphorylation states and sites, which were correlated with several stress resistance traits in the context of a novel, high-quality quantitative trait loci (QTL) multiomics approach in yeast. For the first time, this demonstrated the central importance of protein phosphorylation in the adaptation of stress responses in living organisms (Figure 2) [144]. It is also important to acknowledge that altered phosphorylation levels translate into disorders of post-translational processes in cancer, with an impact on molecular pathways and cellular processes [145]. Therefore, protein regulation and its post-translational modifications are an important element of survival under stress conditions in various species [146,147]. However, little is known in detail about protein state changes in response to toxic agents.

Exposure to toxic metals and metalloids affects the transcriptional and post-translational modifications not only of the stress-related membrane proteins but also of their regulators themselves. An example of this is the upregulation of autophagy pathways, which, in addition to the proteasome, play an important role in controlling the number of proteins in the cell. Proteomic studies demonstrated a broad upregulation of autophagy components at the protein level. One of the proteins whose level increased the most in response to arsenic stress was Atg8p, a crucial autophagy regulator required for the production of autophagosomes. On the other hand, the downregulation of components of the ribosomal machinery in the presence of arsenic has also been reported: a significant decrease in the amount of more than half of the subunits was observed after arsenic treatment [148]. Similarly, many important proteins undergo quantitative changes in the presence of factors such as cadmium. A significant number of ribosome structural components (Rpl702p, Rpl3001p, Rps2p), translation initiation factors (Tif33p, Tif211p), and elongation factors (Tef5p) are downregulated upon Cd(II) treatment. Importantly, in the case of the proteome regulation machinery, an increase in the abundance of some of the proteins involved in ubiquitylation (Ubx4p) or proteasomal degradation (the 19S proteasome regulatory subunits Rpn502p, Rpn11p, Rpt6p, and Mts4p) was observed. This indicates the importance of degradation pathways in the presence of cadmium in the cell. It has also been reported that pathways critical for Cd(II) tolerance in S.
pombe are subject to multilayered regulation by Spc1p and Zip1p. Zip1p is believed to be essential for the primary regulation of important sulfur metabolism-related enzymes and is required for cadmium detoxification, whereas Spc1p is crucial for acute reactions to cadmium stress [48].

Regulation and Post-Translational Modifications of Arsenic and Antimony Transporters

One of the best-described regulated transporters involved in arsenic transport in yeast is Fps1p. This transporter contains a mitogen-activated protein kinase (MAPK) phosphorylation site (Thr231) in its long cytosol-facing N-terminal tail, which is crucial for gating [149-151]. The deletion of this residue or of the entire N-terminal domain results in increased sensitivity to As(III) and Sb(III) due to high levels of unregulated metalloid influx [19,151]. It has been demonstrated that the MAPK Hog1 mediates this phosphorylation: Hog1p directly and negatively controls Fps1p-mediated transport by phosphorylating Thr231. Since As(III) and Sb(III) activate Hog1 kinase, cells lacking Hog1p (hog1∆) are particularly sensitive to both metalloids and exhibit higher rates of Fps1p-dependent As(III) uptake [103,151]. Additionally, besides the regulation of the Hog1p phosphorylation level, two other positive regulators of Fps1p activity have been identified, Rgc1p and Rgc2p/Ask10p, which are pleckstrin homology (PH) domain proteins. As(III) tolerance increases when Fps1p is inactivated by deletion of RGC1 or RGC2 [152]. Fps1p forms a homotetramer, and the redundant pair of regulators Rgc1p and Rgc2p govern the activity of this channel. Rgc1p and Rgc2p bind to the C-terminal cytoplasmic domain of Fps1 to keep it in the open-channel state. Hog1p phosphorylates Rgc1p and Rgc2p, which displaces these regulators from Fps1p and ultimately closes the transporter channel. Rgc1p and Rgc2p have been demonstrated to form both homodimers and heterodimers with each other. The N-terminal domain of Rgc2p mediates dimer formation, and mutations that inhibit Rgc2p dimerization impede its capacity to open Fps1 [61,62]. Furthermore, it has been demonstrated that methylated arsenite, MAs(III), is a strong inhibitor of the protein tyrosine phosphatases (Ptp2p and Ptp3p) that normally maintain the inactive state of Hog1p. Inhibition of Ptp2p and Ptp3p by MAs(III) leads to increased Hog1p phosphorylation without the activation of the protein kinases that act upstream of stress-activated MAPKs (SAPKs). In contrast to As(III), arsenate [As(V)], a pentavalent form of arsenic, also activates Hog1p, but it does so through the MAP/ERK kinase (MEK) Pbs2 [153].
Unlike Fps1p, yeast Acr3p is, in terms of arsenic and antimony transport, regulated mainly at the transcriptional level. Nevertheless, a comprehensive investigation of the domains and regions responsible for proper metalloid transport by Acr3p has been carried out, alongside the identification of the structures governing its removal from the membrane. It was determined that the mobile transport domain consists of two transmembrane (TM) regions, TM3-5 and TM8-10, while the scaffolding domain, on which the transport domain glides, is made up of the TM1-2 and TM6-7 regions. Conserved areas of TM4, TM5, TM9, and TM10 were characterized, including the G353 residue of TM9, which might assist with substrate binding and is required for Acr3 transport activity, and the C151 residue of TM4, which may act as a metalloid-binding site during translocation and is necessary for As(III) and Sb(III) antiport by Acr3 in yeast. Furthermore, the V173A and E353D Acr3 mutants were shown to be unable to export Sb(III) and As(III) out of the cell, respectively [154].

As for the Hxt1-7 transporters involved in arsenic transport, their regulation is controlled through multiple pathways. Hxt1p has been shown to be phosphorylated in vitro by the Atg1 kinase [67], in vivo by the Cdk1 kinase [67,68], and by the Npr1 kinase as part of TORC1-dependent feedback control [69]. In the case of Hxt2p, mass spectrometry showed that its phosphopeptide fraction was enriched more than 25-fold upon deletion of the Rad53 kinase [155,156]. The same approach that uncovered the Hxt1p phosphorylation site also proposed Npr1 kinase-dependent phosphorylation of Hxt3p [69]. Regarding the Hxt5 transporter, in addition to several studies describing novel phosphorylation sites in global proteomic surveys [68,157,158], another post-translational regulatory modification, succinylation of a lysine residue, has been reported [159]. In the case of Hxt7p, phosphorylated residues were identified by high-throughput proteomic techniques; importantly, the same studies also contributed to the discovery of phosphorylation sites in the other hexose transporter family members [68,158,160]. Interestingly, the least characterized protein of this group is the Hxt6 transporter, whose regulators have not yet been described.

Similarly, for other arsenic response proteins, such as Tat1, Ftr1, Fet3, and Pho84, multiple phosphorylation sites have been documented in the aforementioned genome-wide phosphoproteomic studies [68,155,157,158,160,161]. Notably, some studies may suggest the involvement of the TOR-controlled pathway in the regulation of proteins such as Tat1, although the precise mechanism and exact regulators still require thorough investigation [162]. Additionally, in the case of Tat1p, one succinylation-mediated lysine modification was also confirmed [163]. For the Pho84 transporter, according to data obtained by Holt and colleagues, phosphorylation of this protein can be driven in a Cdk1-dependent manner [68]. Fet3p also has multiple N-glycosylation sites, described through quantitative profiling and mapping studies [164,165].
Regulation and Post-Translational Modifications of Cadmium Transporters

The post-translational regulation of Ycf1p occurs at the levels of intracellular trafficking, phosphorylation, and proteolytic processing by the Pep4 protease [166-169]. Phosphorylation of residues S908 and T911 in the core ABC domain, promoted by the guanine nucleotide exchange factor Tus1, positively regulates Ycf1p activity [170]. At the same time, compared with wild-type Ycf1p, the S251A mutant shows higher cadmium resistance in vivo and increased Ycf1p-dependent [3H]estradiol-β-17-glucuronide transport in vitro; thus, S251 phosphorylation is proposed to negatively regulate Ycf1p activity. Moreover, Ycf1p function increases upon the deletion of two kinase genes, CKA1 and HAL5, which were discovered by an integrated Membrane Yeast Two-Hybrid (iMYTH) screen. These results, together with additional genetic tests, indicate that the Cka1 kinase may directly or indirectly phosphorylate S251 to control Ycf1p activity [169]. Nevertheless, altering the phosphorylated residues or eliminating TUS1 causes only mild Cd sensitivity [168].

The biogenesis of the Yor1 transporter depends on its transport from the endoplasmic reticulum (ER) via the secretory pathway, as for other proteins of the ABC transporter family. Two DXE element-like sequence motifs, commonly found in other ER exit proteins, are necessary for Yor1p to be transported from the ER to its site of function in the plasma membrane. The protein's function is lost when the N-terminal DXE fragment is eliminated. These findings highlight the significance of the signals linked to this domain in the proper control and sorting of the protein to the membrane; removal of this domain likely results in mislocalization and, ultimately, degradation of the mutant protein [171]. Using Stable Isotope Labeling by/with Amino acids in Cell culture (SILAC)-based experiments, a comprehensive phosphoproteome screen for budding yeast was presented; among over 30,000 phosphosites detected under DNA-damaging conditions and/or before arrest in various cell cycle states, six new phosphorylation sites were identified for Yor1p [158]. Moreover, Swaney et al. demonstrated four additional sites, as well as new ubiquitylation sites [160]. Although a significant number of phosphorylation sites have been identified for Yor1p, the mechanisms of its phosphorylation remain poorly understood [157,161,172]. It has been suggested that Yor1p contains one putative phosphorylation site targeted by the Hog1 kinase; however, in the absence of evidence of a physical interaction between Hog1p and Yor1p, further studies are required [173]. Notably, a study investigating phosphorylation targets of the Cdk1 kinase also identified Yor1p as a possible substrate [68].

The activity of the Zrt1 transporter is controlled through mechanisms regulating both its transcription [85,140] and its vacuolar degradation [174]. Specific glycosylation sites have been identified for this transporter using mass spectrometry-based mapping techniques [165,175]. As in the case of the Yor1 transporter, high-throughput experiments made it possible to confirm various phosphorylated residues of the Zrt1 protein [68,158,160,172].
As in the case of many of the previously mentioned transporters involved in arsenic transport, numerous proteomic datasets have confirmed multiple phosphorylation sites for Vmr1p [158], Bpt1p, and Ypk9p [68,155,157,158,160,161]. At the same time, the Atg1 and Slt2 kinases have been implicated in the phosphorylation of Bpt1p and Ypk9p, respectively [69].

Degradation of Membrane Proteins in Response to Metal and Metalloid Stress

The ability to adjust the protein composition of cellular membranes is a fundamental asset, allowing cells to survive ever-changing environmental cues. A rapid and specific response to stress conditions requires not only the synthesis of new membrane proteins necessary for survival but also the turnover of damaged, dispensable, and undesirable ones (Figure 2). In eukaryotic cells, the degradation of soluble and membrane proteins is mostly regulated by ubiquitylation, a post-translational modification consisting of the covalent binding of the small protein ubiquitin (Ub) to acceptor lysine residues in the substrate [176] in distinct combinations of polyUb chains [177]. For instance, UbK48-type chains target proteins for proteasomal degradation, whereas UbK63-type chains target membrane proteins for degradation in vacuoles/lysosomes [178].

The degradation of membrane proteins is distinctly regulated in different cellular compartments. At the ER, membrane proteins are downregulated by the endoplasmic reticulum-associated degradation (ERAD) machinery [179]. In yeast, it includes the ER membrane-embedded ubiquitin ligases Doa10 and Hrd1 (homologous to the human ligases MARCHF6 and SYNV1, respectively), which tag their substrates with K48-type polyUb chains [180,181], thus targeting them for proteolysis in the proteasome [179,182]. On the other hand, the degradation of proteins present at the PM occurs mainly through their ubiquitylation-dependent endocytosis, endosomal sorting, and subsequent vacuolar/lysosomal degradation [183]. In both yeast and animal cells, ubiquitylation acts as a signal inducing endocytosis [184-186], and the main ligases responsible for the ubiquitylation of PM proteins belong to the Rsp5/NEDD4 family [187], whose members bind their substrates in cooperation with adaptor proteins of the α-arrestin family [188]. These ligases tag their PM substrates with K63-linked polyUb chains, which are recognized by the highly conserved endosomal sorting complexes required for transport (ESCRT) [189-191]. The ESCRT machinery is crucial for the sorting of membrane proteins to the endosomal lumen, and their proteolysis occurs in the vacuolar/lysosomal lumen after the fusion of endosomes and vacuoles/lysosomes [189]. Both the Rsp5/NEDD4 ligases and the ESCRT complexes, together with the ubiquitin ligase Pib1p and the defective SREBP cleavage (Dsc) complex, are also involved in the degradation of membrane proteins at the vacuolar/lysosomal membrane [77,192].
Degradation of Membrane Proteins in Response to Arsenic and Antimony Stress

Recent proteomic studies demonstrate that exposure to arsenic induces intensive remodeling of the proteome in both yeast and human cells [148,193], including changes in the composition of the membrane proteome. As mentioned above, the proper response to arsenicals and antimonials in yeast requires their export out of the cell, mainly through the arsenic/antimony transporter Acr3 [115,154]. The degradation of this protein has recently been examined. It has been established that Acr3p is a moderately stable protein whose half-life is not related to the presence of arsenic [59]. Acr3p has been found to undergo vacuolar proteolysis dependent on the α-arrestins Art3 and Art4, which are speculated to bind the negatively charged N-terminus of the transporter and promote its Rsp5-dependent polyubiquitylation and degradation [59].

Both arsenic and antimony are substrates of the aquaglyceroporin Fps1 [19,103]. Although the adjustment of Fps1p activity is quite well understood, the regulation of its half-life under metalloid stress remains mostly uncharacterized. The activity of this channel depends on phosphorylation by the stress-response-related MAPK Hog1, which, under distinct conditions, regulates the activity and/or half-life of Fps1p. For instance, in the presence of high levels of acetic acid, Hog1p rapidly phosphorylates the T231 and S537 residues of Fps1p, effectively targeting it for endocytosis and degradation [194]. Under arsenic stress, on the other hand, Fps1p seems to be regulated by switching between the open and closed channel states rather than through protein stability [103,148]. Nevertheless, a high-throughput proteomic study demonstrated a slight decrease in the Fps1p level after prolonged exposure to 1 mM As(III) [148]. Given that Fps1p is a main uptake pathway for As(III) and Sb(III), this phenomenon may reflect a long-term strategy of cellular adjustment to metalloid stress, consistent with the observations that the lack of FPS1, or maintaining the channel in a closed state, increases yeast tolerance to As(III) and Sb(III) [19,103].

Exposure to arsenic has been found to rapidly downregulate the yeast transporters of the hexose transporter (HXT) family (the glucose transporters GLUT in humans) [107], which constitute another entry pathway for As(III) [105,106]. While the mid/high-affinity hexose transporters Hxt2/6/7 have been observed to degrade most rapidly, the protein levels of the low-affinity transporters Hxt1/3 decreased as well, whereas the level of the stress-conditions-related Hxt5 transporter did not change [107]. In yeast, arsenite acts as a competitive substrate for the HXT transporters [105]. The ability of arsenic to disturb proper glucose metabolism poses serious energetic stress for cells; hence, the downregulation of this arsenic import pathway seems to be an effective mechanism of cellular protection. The degradation of the HXT and GLUT transporters involves the Rsp5/NEDD4 ubiquitin ligases and several α-arrestin adaptor proteins. Although there is little information on the exact mechanism of arsenic-dependent degradation of these transporters, it has been demonstrated that under arsenic stress several HXT transporters are degraded in an Rsp5p- and K63-type polyUb chain-dependent manner [107].
It is worth mentioning that exposure to As(III) has also been found to trigger the degradation of a multitude of nutrient transporters in yeast, such as the arginine permease Can1, the lysine permease Lyp1, and the multi-amino-acid permease Tat1 [148]. In contrast, the methionine permease Mup1 has been shown to be upregulated in response to As(III) [148]. Moreover, the overexpression of the yeast vacuolar amino acid permease Vba3 has recently been observed to confer tolerance of yeast cells to arsenate, although the mechanism responsible for this phenomenon does not seem to be related to increased As(V) accumulation in the vacuole [72]. In eukaryotes, the TORC1 kinase complex is the master regulator of the nutrient response. When active, TORC1 orchestrates the degradation of amino acid permeases [195,196]. Surprisingly, though, arsenic has previously been demonstrated to inhibit TORC1 [88]. These observations therefore suggest a more complex mechanism of arsenic-dependent regulation of nutrient transporters, which is yet to be elucidated. Given the lack of sufficient data, fully unraveling the relationship between arsenic stress and nutrient transporters requires further studies.

Resistance to both arsenic and antimony can be acquired through their sequestration into the vacuolar lumen by ABC transporters such as Ycf1p. The regulation of the stability of this protein is scarcely characterized. A recent structural study revealed that Ycf1p requires its main phosphorylation sites (S908 and T911) to maintain structural stability [119]. These sites are located in the R-domain of Ycf1p, a functionally conserved region of interaction between ABCC transporters and various protein kinases, and their disruption has been demonstrated to cause high instability and rapid degradation of Ycf1p [119]. Several ubiquitylation sites in Ycf1 have been identified so far [160,197]. Interestingly, the K504 and K862 residues of Ycf1p have been described as modified with K63-type polyUb chains [197]. The Dsc complex subunit ubiquitin ligase Tul1 has been demonstrated to promote vacuolar degradation of Ycf1p when overexpressed, suggesting that it may be the ligase responsible for its ubiquitylation [77]. At the same time, Tul1p has been demonstrated to provide tolerance to arsenate when overproduced [72]. Nevertheless, the relationship between the phosphorylation and degradation of Ycf1p remains elusive. It is worth mentioning, however, that upon arsenic exposure the protein level of Ycf1p seems to remain virtually unchanged [148,197].
As for the influx of pentavalent arsenicals, the main proteins responsible for the process are the inorganic phosphate transporters, especially the high-affinity phosphate transporter Pho84 [72,109]. Its PM localization is regulated by the Pho86 protein, which conditions proper ER exit of Pho84p and increases the PM level of Pho84p when overexpressed [72]. Little is known, however, about the regulated degradation of Pho84p, especially in response to arsenic stress. High concentrations of inorganic phosphate have been demonstrated to trigger the degradation of Pho84p in a protein kinase A (PKA)-dependent manner [198]. Although there are no sufficient data on the degradation rate of Pho84p in the presence of arsenate, the pho84∆ strain is hypertolerant to As(V) [199], and a high-throughput proteomic analysis revealed that the protein level of Pho84p indeed rapidly decreases in response to arsenite [148]. Moreover, Pho84p has been found to physically interact with Rsp5p, which may indicate a role for Rsp5p in the degradation of Pho84p [200]. Nevertheless, the exact mechanism regulating the degradation of Pho84p in response to arsenicals remains ambiguous.

Pentavalent arsenic also downregulates the high-affinity iron uptake system involving the Ftr1 and Fet3 proteins. The FTR1 and FET3 mRNAs, as well as the Fet3 protein, are rapidly degraded upon arsenate treatment, and this phenomenon is related to a lower accumulation of arsenic in yeast cells [114]. However, As(III) treatment also causes a decrease in the protein levels of Fet3p and Ftr1p [148]. Whether this effect is caused in the same way by both As(III) and As(V) is not clear, and the cellular reduction of arsenate to trivalent arsenic cannot be excluded, especially after prolonged exposure. Nevertheless, these data strongly imply an important overlap between the arsenic and iron uptake and metabolism pathways.

Degradation of Membrane Proteins in Response to Cadmium Stress

Cadmium, a highly toxic heavy metal, has been demonstrated to cause a strong imbalance in redox and divalent-ion homeostasis [15,46,201-203]. Cd(II) is also able to affect signaling pathways in the cell. For instance, cadmium is not only a competitive substrate for calcium channels but also activates them, providing a means for the hyperaccumulation of calcium in distinct cellular compartments and dysregulating calcium-dependent signaling pathways [46,47]. Cadmium is also able to hijack other divalent-ion uptake pathways. For instance, the zinc PM transporter Zrt1 has been demonstrated to participate in Cd(II) import into the cytosol. In the presence of high concentrations of Zn and Cd, Zrt1p is removed from the cell surface to prevent the uptake of toxic Cd and excess Zn [174,204]. Zrt1p inactivation involves Rsp5p-dependent ubiquitylation, followed by endocytosis and degradation in the vacuole [86,204].

Cadmium is characterized by severe proteotoxic activity. The acute cadmium-induced proteotoxic stress is connected to Cd(II)-induced protein misfolding, as Cd(II) has been demonstrated to bind thiol groups, effectively disrupting disulfide bonds between cysteine residues [205,206]. Cadmium has been found to affect the structure not only of cytosolic [207] but also of membrane proteins, especially at the ER [208]. For instance, Cd(II) has been demonstrated to target ER proteins, and cadmium exposure strongly upregulates the ERAD pathway [209].
Similarly to arsenic and antimony, cadmium can be either sequestered into the vacuolar lumen or extruded out of the cell. The former process is facilitated mostly by the ABCC transporter Ycf1 and, to some extent, by its paralogues Bpt1p and Vmr1p [124], as well as by several transporters of other divalent metals (e.g., Ypk9p, Zrc1p) (Figure 2) [210,211]. As for the efflux of Cd(II), several proteins capable of exporting Cd(II) out of the cell have been identified. For instance, Cd(II) is one of the substrates of the PM multidrug transporter Yor1p [133], which requires the α-arrestins Art1-5/7 to be properly degraded in response to cycloheximide treatment [82,133]. However, the most interesting example of a PM Cd(II) exporter is the P-type ATPase Pca1 of S. cerevisiae (Figure 2). Intriguingly, under normal conditions the ATPase is constantly degraded in the ER through the ERAD machinery [80]. In the absence of cadmium, the ubiquitin ligase Doa10p recognizes a hydrophobic degradation-promoting sequence within the N-terminal region of Pca1p [80,212]. In effect, Doa10p ubiquitylates Pca1p in cooperation with the ubiquitin-conjugating (Ubc) enzymes Ubc6/7 and targets Pca1p for proteasomal degradation. Strikingly, upon cadmium exposure, the cysteine-rich N-terminal tail of Pca1p directly binds Cd(II), effectively preventing Doa10p from recognizing the degradation signal [212]. This results in rescue from ER degradation and subsequent transport to the PM, where Pca1p participates in Cd(II) export [80]. Altogether, this phenomenon provides a pool of immature Pca1p that can be instantly re-localized from the ER to the PM in the presence of Cd(II), allowing a rapid response to cadmium stress.

The Role of Membrane Transporters in Metal- and Metalloid-Related Human Pathologies

Heavy metals and metalloids occur ubiquitously in the environment. Hence, all organisms have developed mechanisms for their detoxification, which often involve similar membrane transporters and channels. Many yeast proteins involved in the transport of arsenic, antimony, and cadmium have counterparts in humans. Importantly, arsenic, antimony, and cadmium not only have malignant effects on human health but have also found use in medicine and various areas of industry (Figure 1). In this chapter, we present the impact of arsenic, antimony, and cadmium on human health and disease pathogenesis in the context of selected human membrane proteins (Table 2). Abbreviations: ABCA1/ABCB1, ATP-binding cassette protein A1/B1; AQP9, aquaporin 9; SLC11A2, solute carrier family 11 member 2; ATP7A, P-type ATPase 7A.
Health Implications of Impaired Functioning of Arsenic and Antimony Transporters in Humans

Arsenic is an especially dangerous contaminant, as more than 200 million people live in areas with elevated levels of this element (mostly in Asia and Latin America) [229,230]. Arsenic is a known carcinogen and neurotoxin. It leads to skin, bladder, liver, and lung cancer [229]. It is harmful to the skin, causing numerous diseases and pigmentation changes [231]. Exposure to arsenic has also been linked to diabetes mellitus due to arsenic-induced pancreatic β-cell death [232]. Additionally, arsenic exposure appears to be associated with insulin resistance and cardiovascular disorders as a result of impaired vascular response to neurotransmitters and abnormalities in vascular muscle calcium signaling [233]. A proper diet rich in vitamins and natural antioxidants has been shown to counter arsenic toxicity [234], and there are also attempts at arsenic detoxification using nutraceuticals [235]. Despite its deleteriousness, arsenic-based therapeutics can be used to treat some serious diseases, such as acute promyelocytic leukemia (APL) and lung cancer [236].

Antimony is a metalloid commonly used in the metal and plastic industries. Sb enters the human body mainly through the inhalation of contaminated air; uptake from the gastrointestinal tract is lower than 1% [237]. Sb exposure has been linked to pulmonary toxicity (pneumonitis), causing chronic inflammation, mild fibrosis, and elevated cancer risk [238]. Moreover, Sb accumulates in red blood cells due to its integration with hemoglobin [239] and leads to hemolysis [240]. Sb alters serum cytokine and immunoglobulin levels [241], lowers thyroid hormone levels [242], and induces chromosome aberrations [238]. Sb has also been shown to affect human reproduction, as it decreases sperm concentration [243]. During pregnancy, Sb disrupts blood glucose homeostasis [244] and induces hypertension [245]. Sb can also have beneficial effects on human cells: it has been shown to induce the apoptosis of acute promyelocytic leukemia cells [246], and antimony is widely used as an antileishmanial drug [9].

Several human membrane transporters have been shown to transport arsenic and its compounds. ATP-binding cassette (ABC) proteins, such as ABCA1, ABCB1, and ABCC1, are linked to acquired arsenic and drug resistance [247]. ABCB1 and ABCC1 upregulation leads to decreased arsenic accumulation and, thus, higher resistance [248]. Global deficiency of ABCA1 causes Tangier disease, whose symptoms include an almost complete loss of high-density lipoprotein (HDL) cholesterol, as well as splenomegaly, enlarged tonsils, and atherosclerosis [213]. ABCA1 is also involved in coronary heart disease (CHD), type 2 diabetes (T2D), thrombosis, age-related macular degeneration (AMD), glaucoma, viral infections, and neurological disorders, such as traumatic brain injury, Alzheimer's disease, and Parkinson's disease [214]. ABCB1 is also overexpressed in ovarian cancer [215], causing multidrug resistance, a significant difficulty encountered during chemotherapy [247].
Glucose transporter 1 (GLUT1) has been found to facilitate arsenic uptake in humans [106]. The GLUT1 permease is mainly expressed in erythrocytes and in the epithelial cells of the blood-brain barrier; its physiological role is to mediate the transport of glucose into brain cells across the blood-brain barrier [249]. Thus, GLUT1 catalyzes the majority of arsenic uptake into erythrocytes and the brain, possibly contributing to arsenic-induced cardiovascular disorders and neurotoxicity [106]. The downregulation of GLUT1 results in GLUT1 deficiency syndrome (GLUT1DS), the symptoms of which include delayed neurological development and various neurological disorders [216].

It has been shown that human aquaporin 9 (AQP9, homologous to yeast Fps1p) also contributes to arsenic uptake [250]. AQP9 is expressed mainly in the liver and in leukocytes [251]. Alterations in AQP9 expression result in various diseases, particularly liver injury and immune disorders, but also inflammation and numerous cancers, making AQP9 a promising therapeutic target and biomarker [252]. Improper AQP9 expression promotes chronic liver injury (CLI), a common condition resulting in hepatic steatosis, liver fibrosis, and eventually, if untreated, hepatocellular carcinoma (HCC) [217]. In CLI, AQP9 is overexpressed [217], while in HCC, AQP9 is downregulated [219]. Using arsenic-sensitive and arsenic-resistant liver cancer cell lines, it has been established that the phosphorylation of AQP9, controlled by p38 kinase (homologous to yeast Hog1p), regulates cellular arsenic sensitivity [253]. Such tolerance can limit arsenic-based cancer therapies; thus, explaining the mechanism underlying the resistance is an important step toward improving therapeutic strategies [254]. As AQP9 is one of the most common AQPs in immune cells, it is closely related to the regulation of the immune response [255]. Its expression increases upon lipopolysaccharide (LPS) exposure, playing an important role in the development of the early stages of LPS-induced endotoxic shock and indicating AQP9 as a promising drug target in sepsis treatment [256]. The overexpression of AQP9 is also connected with systemic inflammatory response syndrome (SIRS) [218]. Moreover, it regulates the migration of immune cells, facilitating their motility and chemosensing [218,257]. AQP9 abnormalities are also connected to male and female infertility and pregnancy complications [258]. In females, AQP9 is downregulated in polycystic ovary syndrome (PCOS) and is associated with hyperandrogenism [220]; in males, a lower expression level of AQP9 alters sperm maturation and storage [221].

There are no confirmed antimony transporters in humans, but some research indicates that membrane proteins involved in arsenic uptake (AQP9, ABC transporters) also facilitate antimony transport [259], suggesting that similar transporters, and the pathologies related to them, are involved as in the case of arsenic.
Health Implications of Impaired Functioning of Cadmium Transporters in Humans

Cadmium is not biotransformed into less toxic compounds and is ineffectively eliminated in humans. Moreover, due to its exceedingly long biological half-life, reaching 10-30 years, it accumulates easily, especially in the kidneys and liver, effectively causing organ failure [260]. Exposure to cadmium species in the environment increases the risk of lung, kidney, prostate, pancreatic, breast, and urinary system cancer. The mechanisms by which cadmium promotes carcinogenesis are primarily dependent on oxidative stress coupled with the inhibition of antioxidants [40]. It also promotes lipid peroxidation and alters DNA maintenance mechanisms [261].

It has been observed that people living in areas of Japan with Cd-contaminated soil developed "Itai-Itai" disease, a severe kidney and bone syndrome with fractures and bone deformation [262], as Cd affects vitamin D and calcium assimilability [263]. Other diseases commonly associated with Cd exposure are anemia [264], diabetes [265], and osteoporosis [263]. Cd also has deleterious effects on the male reproductive system: it alters the migration of germ cells in the testis and reduces sperm count and motility [266]. Cd also affects the cardiovascular system, the lungs, the brain, the pancreas, and the adrenal glands [267]. Prevention and treatment methods depend on cadmium-chelating agents, such as Ca-EDTA [268], meso-2,3-dimercaptosuccinic acid (DMSA), and its lipophilic alkyl monoester MiADMSA [269,270]. Moreover, the antioxidant vitamins A, C, and E, as well as selenium, zinc, and magnesium supplementation, have been shown to aid cadmium detoxification [271].

In rats, Cd has been shown to be taken up from intestinal tissues by metal transporters normally involved in Cu, Fe, and Zn uptake [272]. Divalent metal transporter 1 (DMT1) and ferroportin 1 (FPN1), whose human homologs are solute carrier family 11 member 2 (SLC11A2) and SLC40A1, respectively, are involved in iron absorption, and at low Fe levels Cd uptake increases [241,273]. Hereditary hemochromatosis (HH) is associated with SLC11A2 overexpression in the duodenum, which leads to iron accumulation [222]. The impairment of SLC40A1 function, leading to increased iron levels in specific brain regions, causes multiple neurodegenerative disorders, such as Parkinson's disease, Huntington's disease, and Alzheimer's disease, and lower SLC40A1 expression is correlated with aggressive breast cancer [226]. The overexpression of SLC11A2 has been linked to esophageal [223] and colorectal cancer [223,224]; in ovarian cancer, SLC11A2 functions as a biomarker and a possible therapeutic target [225]. The overexpression of the zinc transporter ZIP8 and its human homolog SLC39A8 has been found to correlate with Cd sensitivity. It also plays a crucial role in the development of inflammation, and its knockdown is harmful to mitochondria and increases cell death [274]. Other members of the SLC39 family, such as SLC39A4 and SLC39A14, have also been shown to be involved in Cd uptake [272]. Loss of function of SLC39A4 causes acrodermatitis enteropathica, a disease causing systemic zinc deficiency. SLC39A14 has been demonstrated to respond to interleukin 6 (IL-6) in the acute-phase reaction: it localizes to the plasma membrane of hepatocytes and mediates one of the classic acute-phase responses, namely hypozincemia in the liver [227].
The copper-transporting P-type ATPase ATP7A also transports cadmium [272]. Mutations in the ATP7A gene cause Menkes disease (MD), occipital horn syndrome (OHS), and distal motor neuropathy (DMN) [228]. MD, also called kinky hair disease, is an X-linked disorder. It results in the depletion of copper from all tissues except the liver, while in the brain copper levels are abnormally high. The symptoms of MD are seizures, psychomotor retardation, hypoglycemia, and the characteristic kinky hair with hyperelastic skin, bone fractures, and aneurysms. OHS, also known as X-linked cutis laxa or Ehlers-Danlos syndrome type 9, is a milder variant of MD; neurologic symptoms can be absent, while the most typical abnormality is the calcification of the trapezius and sternocleidomastoid muscles at their attachments to the occipital bone. In DMN, atrophy and weakness of the distal muscles of the hands and feet are observed [228].

Summary

The ubiquitous environmental presence of metalloids and heavy metals, especially arsenic, antimony, and cadmium, poses a serious threat to human health. In the course of evolution, cells developed multiple mechanisms of protection against these toxins. These mechanisms involve membrane proteins, which are crucial for the transport of stressors across biological membranes. Thus, in order to minimize the adverse activity of arsenic, antimony, and cadmium, cells rapidly, specifically, and selectively regulate their protein composition at multiple stages. These include the regulation of the transcription of crucial genes encoding membrane proteins [88,89,91,125,126], the post-translational regulation of the activity of these proteins [144-147], and the degradation of the misfolded ones or those involved in the uptake of heavy metals and metalloids [148,193,209].

Although knowledge of this topic is crucial for preventing the effects of exposure to arsenic, antimony, and cadmium, studying them in human cells presents a major challenge. A great advantage is provided by research on yeast, which is an excellent model for studying the processes occurring in the cells of higher eukaryotes [18]. The results obtained through these studies could be employed to devise and improve therapies against membrane protein-related diseases associated with exposure to arsenic, antimony, and cadmium [214,217,226]. On the other hand, the compounds of these toxic elements hold promise for use against serious human diseases, such as cancer or the parasitic disease leishmaniasis. For instance, given the glycolysis-centered metabolism of cancer cells, they are known to overproduce glucose transporters of the GLUT family, which are homologous to yeast HXT transporters and constitute a major pathway of arsenic uptake [105,106,275]. Moreover, arsenicals are known inhibitors of these transporters and interact almost irreversibly with their active centers [276-278]. Nevertheless, the future of possible therapies depends on a thorough understanding of the processes responsible for the regulation of arsenic, antimony, and cadmium transporters; thus, further research in this area is necessary.

Figure 1. The effect of arsenic, antimony, and cadmium on human health and life. Light red indicates the negative effects and diseases caused by poisoning with particular metals/metalloids; light green indicates the positive uses of metals/metalloids in industry and drug therapies.
Figure 2. Schematic representation of metal(loid) detoxification pathways in yeast and selected mechanisms of regulation of the membrane transporters in response to metal(loid) stress. The main pathways and mechanisms of multilevel regulation of membrane proteins at distinct cellular compartments. ER, endoplasmic reticulum; VM, vacuolar membrane; PM, plasma membrane; Pi, phosphate group; Ub, ubiquitin; GSH, reduced glutathione; DXE, ER exit signal. Membrane transporters are depicted as colored rounded rectangles: vacuolar transporters in grayscale, cadmium plasma membrane transporters in green shades, arsenic and antimony plasma membrane transporters in reddish/pink shades, and the bidirectional Fps1 channel in blue. The regulation steps presented involve expression changes, protein production, sorting, and trafficking, as well as post-translational modifications, ubiquitylation, and vacuolar degradation. Black arrows indicate metal(loid) form changes and transport; dashed gray arrows indicate interactions or regulatory effects of factors impacting the activity of the selected transporters. Thick green and red arrows represent the up- and downregulation of protein level/function. A more elaborate explanation of all depicted regulation pathways can be found in the text.

Table 1. Membrane proteins involved in arsenic, antimony, and cadmium response and their human homologs and regulators.

Table 2. Human metal(loid) transport-related proteins and diseases associated with their dysfunction.
Projection Matrix Design for Co-Sparse Analysis Model Based Compressive Sensing

Co-sparse analysis model based compressive sensing (CAMB-CS) has gained attention in recent years as an alternative to conventional sparse synthesis model based (SSMB) CS. The equivalent operator, the counterpart of the equivalent dictionary in SSMB-CS, is introduced in CAMB-CS as the product of the projection matrix and the transpose of the analysis dictionary. This paper proposes an algorithm for designing a suitable projection matrix for CAMB-CS by minimizing the mutual coherence of the equivalent operator based on equiangular tight frame design. The simulation results show that CAMB-CS with the proposed projection matrix outperforms SSMB-CS in terms of reconstructed signal quality.

Introduction

Compressive sensing (CS), a new paradigm in signal acquisition, has gained popularity over the last decade after it was introduced in [1,2]. CS acquires the signal directly in already compressed form by projecting it onto a well-designed projection matrix. The CS framework has been applied in many applications such as imaging, the Internet of Things, data security, and more [3,4]. Conventional CS systems work on the sparse synthesis model of a signal, in which a signal can be synthesized from a few atoms of a synthesis dictionary [5]. The alternative is the co-sparse analysis model, in which sparse analysis coefficients are obtained by multiplying the signal by an analysis dictionary (operator) [6]. Co-sparse analysis model based (CAMB) CS has attracted attention in recent years because it can outperform the synthesis model, as shown in [7,8].

The three main problems of CS are how to build a dictionary, how to design a proper projection matrix, and how to reconstruct the signal from the compressive measurements. The well-known KSVD algorithm and its extensions have commonly been used to build synthesis dictionaries [9,10]; improvements that exploit additional structure of the sparse coefficients can be found in [11,12]. The analysis version of KSVD [13] and sparsifying transform learning algorithms have been used to build operators [14,15]. Convex relaxation, greedy, and Bayesian algorithms are used for signal reconstruction in synthesis-based CS [16], with counterpart algorithms for analysis-based CS [6,17]. While the design of optimal projection matrices for sparse synthesis model based (SSMB) CS has been studied widely, e.g., in [18,19], the corresponding design problem for CAMB-CS has not received attention. This paper addresses how to design a projection matrix for CAMB-CS, uses it to perform CS on natural images, and compares the image reconstruction performance to SSMB-CS.

SSMB-CS and CAMB-CS

In SSMB-CS, the signal $x \in \mathbb{R}^N$ is synthesized from a sparse linear combination of the dictionary columns, $x = \Psi \alpha$, where $\Psi \in \mathbb{R}^{N \times L}$ is the synthesis dictionary, $\alpha \in \mathbb{R}^L$ is the sparse coefficient vector, and $k = \|\alpha\|_0$ is the number of non-zero elements in $\alpha$. CS is performed by multiplying the signal $x$ by the projection matrix $\Phi \in \mathbb{R}^{M \times N}$:

$y = \Phi x$,  (1)

where $y \in \mathbb{R}^M$ is the compressive measurement vector and $M < N$. The reconstructed signal $\hat{x} = \Psi \hat{\alpha}$ can be obtained from $y$ by solving the following constrained problem:

$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0$ subject to $y = \Phi \Psi \alpha$.  (2)

The problem in (2) is NP-hard with combinatorial complexity, but it can be solved approximately using convex relaxation, greedy, or Bayesian algorithms [16].

In CAMB-CS, the analysis coefficients $\alpha = \Omega x$ are obtained by multiplying the operator $\Omega \in \mathbb{R}^{p \times N}$ and the signal $x$, where $\|\Omega x\|_0$ is the number of non-zero elements in $\alpha$. The reconstructed signal $\hat{x}$ can be obtained from CAMB-CS by solving the following constrained problem:

$\hat{x} = \arg\min_{x} \|\Omega x\|_0$ subject to $y = \Phi x$.  (3)

The problem in (3) can be solved using the CAMB-CS counterparts of the SSMB-CS algorithms [17].
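To make the two signal models concrete, the following minimal NumPy sketch builds a toy synthesis-sparse signal, takes compressive measurements as in (1), and evaluates the analysis coefficients of the same signal under a random operator. The dimensions, the random dictionary Ψ, and the random operator Ω are illustrative assumptions only; they are not the dictionaries learned in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, p, M, k = 64, 96, 96, 20, 4  # signal, dictionary, operator, measurement, sparsity (illustrative)

# Synthesis model: x = Psi @ alpha, with alpha k-sparse
Psi = rng.standard_normal((N, L))
Psi /= np.linalg.norm(Psi, axis=0)             # unit-norm atoms (columns)
alpha = np.zeros(L)
support = rng.choice(L, size=k, replace=False)
alpha[support] = rng.standard_normal(k)
x = Psi @ alpha

# Compressive measurement, Eq. (1): y = Phi @ x, with M < N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Analysis model: alpha_a = Omega @ x; the co-sparse model relies on
# Omega @ x having many (near-)zero entries.  A generic random Omega
# yields none, which is why the operator must be learned from data.
Omega = rng.standard_normal((p, N))
alpha_a = Omega @ x
n_zero = int(np.sum(np.abs(alpha_a) < 1e-10))

print(f"||alpha||_0 = {np.count_nonzero(alpha)}, y in R^{M}, zeros in Omega@x: {n_zero}")
```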
Projection Matrix Design

In SSMB-CS, the equivalent dictionary is $D = \Phi \Psi \in \mathbb{R}^{M \times L}$. The normalized equivalent dictionary $\bar{D}$ is obtained by normalizing the columns of $D$, and its Gram matrix is $G = \bar{D}^T \bar{D}$. The mutual coherence $\mu(D)$ is the largest absolute value of the off-diagonal entries of $G$, and its lower bound $\mu_W = \sqrt{(L-M)/(M(L-1))}$ is the Welch bound [20]. The $t$-averaged mutual coherence $\mu_t(D)$ averages the absolute off-diagonal entries $|g_{ij}|$ of $G$ counted by an indicator that is one when $|g_{ij}| \ge t$ is true and otherwise is zero [20]. The common projection matrix design for SSMB-CS is based on making $\mu_t(D)$ as small as possible. It is done by making $G$ as close as possible to a target Gram matrix $G_t$ that has desired properties, such as the Gram matrix of an equiangular tight frame (ETF) [20]. The projection matrix design is performed by solving:

$\min_{\Phi} \| G_t - \bar{D}^T \bar{D} \|_F^2$,  (4)

where $\|\cdot\|_F$ denotes the Frobenius norm; (4) can be solved by a shrinkage method [18] or by alternating projection [19].

The equivalent operator in CAMB-CS is $A = \Phi \Omega^T \in \mathbb{R}^{M \times p}$. The projection matrix design for CAMB-CS is performed by solving:

$\min_{\Phi} \| G_t - \bar{A}^T \bar{A} \|_F^2$.  (5)

This paper adapts the algorithm in [19] to solve (5) using the following algorithm, denoted the OGS algorithm.

OGS Algorithm. Initialization: the operator $\Omega$, an initial projection matrix $\Phi$, and the number of iterations. Iteration: calculate (6) until (12); these steps involve an eigenvalue decomposition $G = V_e \Lambda V_e^T$, where $V_e$ is an orthonormal matrix, and a criterion that determines whether to continue the iterative procedure. End: end the algorithm and output the designed projection matrix $\Phi$.

Results and Discussion

This paper used 1000 training images from the LabelMe training data set [21,22]; 20 non-overlapping $8 \times 8$ patches were taken randomly from each image, and each patch was rearranged as a $64 \times 1$ vector. These training patches $P \in \mathbb{R}^{64 \times 20000}$ were used to build the synthesis dictionary $\Psi \in \mathbb{R}^{64 \times 96}$ using the KSVD algorithm [9] and the operator $\Omega \in \mathbb{R}^{96 \times 64}$ using the algorithm in [15]. The algorithm in [19], denoted BLH, and the OGS algorithm were used for the SSMB-CS and CAMB-CS projection matrix designs, respectively, and both used the same Gaussian random matrix as initialization. Each test image was divided into $J$ non-overlapping $8 \times 8$ patches, where $J$ is the number of patches in the test image, and CS was performed on those patches to obtain the compressive measurements. The OMP algorithm [23] and its counterpart greedy algorithm GAP [6] were used for SSMB-CS and CAMB-CS, respectively, to obtain each reconstructed patch. The reconstructed patches are arranged to obtain the reconstructed image $\hat{I}$. The Peak Signal-to-Noise Ratio (PSNR) was used to measure image reconstruction accuracy; it is defined as $\mathrm{PSNR} = 10 \log_{10}(255^2/\mathrm{MSE})$ dB, where MSE is the mean squared error between the original and reconstructed images.

From Table 1 and Figure 1, it is clear that the algorithm proposed in this paper (OGS CAMB-CS) outperforms the random projection matrix and the previous algorithm (BLH SSMB-CS). Note that the reconstruction time for CAMB-CS is comparable to that of SSMB-CS: the reconstruction times for the Barbara test image, as an example, at CR = 31.25 % are 4.22 s, 6.19 s, 4.45 s, and 6.67 s for Random SSMB-CS, Random CAMB-CS, BLH SSMB-CS, and OGS CAMB-CS, respectively.

Conclusion

In this paper, a projection matrix design algorithm for CAMB-CS was proposed to improve image reconstruction accuracy. The results show that CAMB-CS outperforms SSMB-CS in terms of the PSNR of the image reconstruction. Further improvement can be attempted in future work by designing the projection matrix and learning the operator simultaneously.

Table 1. Reconstruction comparison (in PSNR (dB)) of SSMB-CS and CAMB-CS for several standard test images at a given Compression Ratio (CR).

Figure 1. Reconstruction comparison of SSMB-CS and CAMB-CS for CR = 31.25 %.
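As a rough illustration of the quantity the OGS design targets, the sketch below forms the equivalent operator A = ΦΩᵀ, compares its mutual coherence with the Welch bound, and runs a simple alternating pass that clips the off-diagonal Gram entries toward an ETF-like target before recovering Φ by least squares. The clipping rule, the rank-M reduction, and the least-squares recovery are stand-in assumptions; the paper's actual update steps (Eqs. (6)-(12)) are not reproduced here. A small PSNR helper matching the definition above is included as well.

```python
import numpy as np

def normalize_columns(A):
    return A / np.linalg.norm(A, axis=0, keepdims=True)

def mutual_coherence(A):
    G = normalize_columns(A).T @ normalize_columns(A)
    np.fill_diagonal(G, 0.0)
    return np.abs(G).max()

def psnr(img, ref):
    """PSNR in dB for 8-bit images: 10*log10(255^2 / MSE)."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(1)
M, N, p = 20, 64, 96
Omega = rng.standard_normal((p, N))       # analysis operator (illustrative stand-in)
Phi = rng.standard_normal((M, N))         # initial Gaussian projection matrix

welch = np.sqrt((p - M) / (M * (p - 1)))  # Welch bound for an M x p frame

for _ in range(100):
    A = normalize_columns(Phi @ Omega.T)  # normalized equivalent operator
    G = A.T @ A
    Gt = np.clip(G, -welch, welch)        # push off-diagonal entries toward the ETF level
    np.fill_diagonal(Gt, 1.0)
    # Rank-M reduction of the target Gram matrix: keep the M largest eigenpairs
    w, V = np.linalg.eigh(Gt)
    idx = np.argsort(w)[::-1][:M]
    A_new = np.diag(np.sqrt(np.maximum(w[idx], 0.0))) @ V[:, idx].T  # M x p frame
    # Recover Phi by least squares from A_new = Phi @ Omega^T
    Phi = A_new @ np.linalg.pinv(Omega.T)

print(f"Welch bound: {welch:.3f}  coherence after design: "
      f"{mutual_coherence(Phi @ Omega.T):.3f}")
```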
ZNF750 Exerts Its Antitumor Action in Oral Squamous Cell Carcinoma by Regulating E2F2

The cell cycle activator E2F transcription factor 2 (E2F2) plays a key role in tumor development and metastasis. A previous RNA sequencing analysis (GSE134835) revealed that E2F2 was significantly reduced by zinc-finger protein 750 (ZNF750) in oral squamous cell carcinoma (OSCC). This study aimed to determine the involvement of E2F2 in the antitumor action of ZNF750. A nude mouse xenograft model was established by subcutaneous injection of the stable cell line CAL-27oeZNF750 or CAL-27shZNF750, and xenograft tumor volume and tumor weight were measured. The expression of E2F2, of transcriptional repressors such as enhancer of zeste 2 (Ezh2) and PHD finger protein 19 (PHF19), and of genes related to cell proliferation or metastasis was studied in vivo or in vitro. A luciferase assay was performed to investigate the regulatory effect of ZNF750 on E2F2 luciferase activity. The involvement of E2F2 in the antitumor action of ZNF750 was studied by cotransducing ZNF750 with E2F2 lentivirus. Tumor growth and metastasis were repressed by ZNF750, as manifested by reduced tumor size and tumor weight and decreased expression of genes related to cell proliferation and metastasis; all of these effects were reversed by knockdown of the ZNF750 gene. Furthermore, E2F2 luciferase activity was inhibited by ZNF750. E2F2 partly blocked the antitumor action of ZNF750, as manifested by increased self-renewal, invasion, and migration and by elevated Ezh2 and MMP13 protein expression in the ZNF750 + E2F2 groups, whereas silencing E2F2 further enhanced the antitumor action of ZNF750. ZNF750 depressed E2F2 activity and played a critical role in regulating transcriptional repressors, thereby inhibiting growth and metastasis in OSCC.

Introduction

Oral squamous cell carcinoma (OSCC) is one of the most common malignant tumors worldwide; approximately 90% of all oral malignancies are squamous cell carcinomas [1]. OSCC causes significant mortality and morbidity [2]; therefore, there is an urgent need to develop new targeted therapy strategies.

PRC2 is one of the multimeric polycomb group (PcG) protein complexes. PcG proteins are essential epigenetic repressors. They form two major protein complexes, called polycomb repressive complexes 1 and 2 (PRC1 and PRC2), whose function is to maintain transcriptional repression [9]. PRC2 mainly trimethylates lysine 27 of histone H3 (H3K27me3), and trimethylation of H3K27 correlates with transcriptionally repressed chromatin. Ezh2 functions as an H3K27 methyltransferase when incorporated into PRC2 [10]. Overexpression of Ezh2 has been reported in many cancers, and Ezh2 has been shown to promote the proliferation, migration, invasion, and metastasis of cancer cells [11]. EZH2 is an E2F-regulated gene [12]. E2F is a family of transcription factors implicated in multiple biological functions in human cancer [13]. E2F transcription factors are divided into two subfamilies: transcriptional activators (E2F1, E2F2, E2F3a) and repressors (E2F3b, E2F4, E2F5, E2F6, E2F7, and E2F8). E2F2 is a member of the activator subfamily [14]. High E2F2 expression correlates with worse overall survival [15]. Our previous RNA sequencing analysis found that ZNF750 has the ability to reduce E2F2 expression [4].
Currently, the transcriptional regulation of ZNF750 in OSCC is still unclear, and it remains to be fully determined whether the antitumor action of ZNF750 depends on its regulation of E2F2. The present study explored the mechanism by which ZNF750 inhibits cell proliferation and metastasis in the OSCC cell line CAL-27 and in a xenograft model in nude mice.

Animals, cell lines and plasmids

Five-week-old male BALB/c-nu mice weighing 16-17 g (certificate number SCXK Beijing 20160006) were obtained from the Beijing Vital River Laboratory Animal Technology Co., Ltd (Beijing, China). The mice were kept under conventional housing conditions (22 ± 1 °C, 40-70% humidity) with food and water supplied ad libitum and a 12 h light/dark cycle. The mice were acclimated for one week before the beginning of the experiment. The experimental protocol was approved by the Liaocheng People's Hospital Research Ethics Committee. All animal studies were conducted in accordance with the "Guide for the Care and Use of Laboratory Animals".

Cell cultures and treatment

The 293T and CAL-27 cells were grown in DMEM medium supplemented with 10% FBS, streptomycin (100 mg/ml), and penicillin (100 IU/ml) at 37 °C in a humidified incubator with 5% CO2. All experiments were performed with mycoplasma-free cells. CAL-27 cells growing at an exponential phase were randomly divided into four groups: the control (oe-Con and sh-Con) groups were transduced with oe-Con or sh-Con lentivirus, respectively, while the oe-ZNF750 (overexpression of the ZNF750 gene) and sh-ZNF750 (knockdown of the ZNF750 gene) groups were transduced with oe-ZNF750 or sh-ZNF750 lentivirus, respectively. To investigate whether E2F2 is involved in the antitumor action of ZNF750 in CAL-27 cells, additional groups Z+E (ZNF750+E2F2, overexpression of the ZNF750 and E2F2 genes), Z+shE (overexpression of ZNF750 with silencing of the E2F2 gene), and oe-sh-Con (co-transduced with oe-Con and sh-Con lentivirus) were included.

Lentiviral packaging and cell infection

Lentiviral packaging and infection were performed as described previously [16]. Briefly, lentiviral particles were produced in 293T cells by transfection with Lipofectamine 2000 (Thermo Fisher Scientific). Lipofectamine 2000/DNA complexes were added to 293T cells with the addition of caffeine (final concentration 4 mM) to achieve a higher lentivirus titer [17]. Lentivirus-containing supernatant was collected at 48 and 72 h post-transfection, filtered, and concentrated using SBI's one-step virus concentration solution PEG-it™ (SBI, USA). CAL-27 cells grown to 30-50% confluence were infected with the oe-Con, oe-ZNF750, sh-Con, or sh-ZNF750 lentivirus, respectively, or cotransduced with oe-ZNF750 and oe-E2F2 or sh-E2F2, or with oe-Con and sh-Con, in the presence of Polybrene (5 μg/ml, Sigma). Cells were allowed to recover for 48 h before being subjected to puromycin selection to obtain the stable cell lines CAL-27oeZNF750, CAL-27shZNF750, CAL-27oeZNF750+oeE2F2, and CAL-27oeZNF750+shE2F2. The stable cell lines CAL-27oeCon, CAL-27shCon, and CAL-27oe-sh-Con were used as controls.

Animal models and tumor xenograft growth

To evaluate the antitumor effect of ZNF750 in nude mice, 20 nude mice were randomly divided into four groups (n = 5 each): oe-Con, oe-ZNF750, sh-Con, and sh-ZNF750. The stable cell lines CAL-27oeCon and CAL-27oeZNF750, or CAL-27shCon and CAL-27shZNF750 (3 × 10⁶ cells), were inoculated into the right armpit of nude mice to construct the xenograft tumor model.
The needle was held in place for 5 sec, rotated and then withdrawn to avoid leakage of the cell suspension. The activity, diet and mental state of the nude mice were observed daily. Tumor volume was measured with a caliper every 6-7 days and calculated using the formula length × width^2 × 1/2. The mice were euthanized 40 days after inoculation. Tumors were excised and weighed to evaluate tumor growth, and then fixed or frozen in liquid nitrogen for quantitative real-time PCR (qPCR) and western blot analysis.

Immunohistochemistry

Ki67 and proliferating cell nuclear antigen (PCNA) protein expression in xenograft tumor samples was investigated by immunohistochemistry to evaluate the proliferation of the cancer cells. Briefly, xenograft tumor samples were isolated from the surrounding normal tissue. Fresh tissue samples were fixed in 10% formaldehyde and embedded in paraffin. Serial 5 µm sections were cut, deparaffinized in xylene, and hydrated through an ethanol series. Endogenous peroxidase was quenched in 3% hydrogen peroxide for 15 min at room temperature. The sections were incubated with anti-Ki67 (1:16,000) and anti-PCNA (1:1000) antibodies (all from Proteintech) at 4 °C overnight, and then incubated with the KIT-5010 MaxVision™ HRP-Polymer anti-Mouse/Rabbit IHC Kit (Maxin-Bio, Co., Fuzhou, China) for 15 min. The slides were stained using a DAB Chromogen Substrate Kit (Maxin-Bio, Co., Fuzhou, China) for 3-5 min, and the sections were counterstained with hematoxylin to identify nuclei. Images were acquired with a digital camera under the microscope (CKX71, Olympus).

Quantitative real-time PCR (qPCR)

Total RNA was extracted using TRIzol® reagent (Thermo Fisher Scientific), and RNA purity was checked on a NanoDrop 2000 (Thermo Fisher, USA). RNA (1 μg) was reverse-transcribed to cDNA using a PrimeScript® RT Kit in 20 μl reactions. The cDNA product was diluted 2-fold, aliquoted, and stored at -80 °C. qPCR was performed on an ABI 7500 Sequence Detection System (Applied Biosystems) using the SYBR® Premix Ex Taq™ II Kit (all from Takara, Dalian, China). Amplification parameters were 30 sec pre-incubation at 95 °C for one cycle, followed by 40 cycles of 95 °C for 5 sec and 60 °C for 34 sec. Table 1 summarizes the sequences of the primers (Sangon Biotech) used in this study. Fold changes of genes were normalized to the housekeeping gene GAPDH by the 2^-ΔΔCT method. Samples were analyzed in triplicate and each experiment was repeated at least three times.

Western blot

Total protein was extracted from cultured cells or from tumor tissue of the OSCC xenograft model using 100 μl of ice-cold lysis buffer containing PMSF. The cell lysates were centrifuged for 5 min at 12000 × g at 4 °C and the supernatant was collected as total protein. Protein concentrations were determined using the BCA protein assay kit (all from Beyotime, Jiangsu, China). Equal amounts of protein (15 µg) were used for western blot. The proteins were denatured in 1× SDS-PAGE sample buffer (Beyotime, Jiangsu, China) for 5 min at 95 °C, electrophoresed on a 10% sodium dodecyl sulfate-polyacrylamide gel (SDS-PAGE) and transferred to polyvinylidene difluoride membranes (Millipore, Bedford, MA) after electrophoresis.
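Two of the quantitative steps above, the caliper-based tumor volume estimate and the 2^-ΔΔCT fold-change calculation, reduce to simple arithmetic. The following is a minimal Python sketch of both, using made-up Ct values for illustration only (this is not the authors' analysis code):

```python
import numpy as np

def tumor_volume(length_mm, width_mm):
    """Caliper-based estimate used above: length x width^2 x 1/2."""
    return length_mm * width_mm ** 2 / 2

def ddct_fold_change(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    """Relative expression by the 2^-ddCt method, normalized to GAPDH
    and calibrated against the control group."""
    dct = np.asarray(ct_gene) - np.asarray(ct_gapdh)
    dct_ctrl = np.asarray(ct_gene_ctrl) - np.asarray(ct_gapdh_ctrl)
    ddct = dct - dct_ctrl.mean()
    return 2.0 ** (-ddct)

# Hypothetical triplicate Ct values (sample vs. control):
fold = ddct_fold_change([22.1, 22.3, 22.0], [18.0, 18.1, 17.9],
                        [25.0, 25.2, 24.9], [18.0, 18.2, 18.1])
print(tumor_volume(9.0, 6.0))   # volume in mm^3
print(fold.mean())              # mean fold change vs. control
```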
Nonspecific binding to the membranes was blocked with 5% (w/v) skimmed milk in TBST (Tris-buffered saline-Tween 20) at room temperature for 1 hour; the membranes were then incubated overnight at 4 °C with the appropriate primary antibodies: anti-ZNF750 and anti-E2F2 (1:1000, both from Abcam), anti-PHF19 (1:1000), anti-Ezh2 (1:2000), anti-cyclin D1 (1:5000), anti-matrix metalloproteinase (MMP) 9 (1:500, all from Proteintech), and mouse monoclonal anti-β-actin (1:10000, Proteintech). After washing with TBST three times, the membranes were incubated with species-specific horseradish peroxidase-coupled secondary antibodies for one hour. After washing, the membranes were visualized with ECL western blot detection reagents (Beyotime, Jiangsu, China). Protein bands were visualized and quantified with the AlphaView analysis system (ProteinSimple, USA). β-actin served as the protein loading control, and the values of ZNF750, PHF19, Ezh2, E2F2, cyclin D1 and MMP9 protein expression were normalized against β-actin.

Luciferase assay

To investigate the regulatory effect of ZNF750 on E2F2, the Dual Luciferase Reporter Assay Kit (E1910; Promega, Mannheim, Germany) was used to measure luciferase and Renilla signals according to the manufacturer's protocol. Cells were divided into two groups: a control group (co-transfected with pcDNA3.1A, Renilla reniformis plasmid and PGL3-E2F2-promoter) and a ZNF750wt group (co-transfected with pcDNA3.1A-ZNF750, Renilla reniformis plasmid and PGL3-E2F2-promoter). Luciferase activity of the cell lysates was analyzed 72 h after transfection using the GloMax® Navigator (Promega), and the relative luciferase activity was determined as the quotient of firefly and Renilla luciferase activity. The fold change of E2F2 luciferase activity was calculated relative to the control group. The analysis was performed in triplicate.

Evaluation of E2F2 involvement in the antitumor action of ZNF750

To investigate whether E2F2 is involved in the antitumor action of ZNF750 in CAL-27 cells, cell propagation, invasion, migration, and Ezh2 and MMP13 protein expression were assessed by cell counting kit-8 (CCK-8), tumor sphere, colony formation, transwell, western blot and flow cytometry assays, respectively. The CAL-27 cells were divided into seven groups: oe-Con, oe-ZNF750, sh-Con, sh-ZNF750, Z+E (co-transduced with ZNF750 and E2F2 virus), oe-sh-Con, and Z+shE (co-transduced with ZNF750 and sh-E2F2 virus). To evaluate tumor propagation potency and self-renewal, cell viability, tumor sphere and colony formation assays were performed. Cell viability was evaluated with the CCK-8 assay (Beyotime, Jiangsu, China) according to the manufacturer's instructions: briefly, 10 μl of CCK-8 solution was added to each well, the plates were incubated for 2 h at 37 °C, and the absorbance in each well was measured at 450 nm with a 96-well Multiskan MK3 microplate reader; experiments were repeated three times. For tumor sphere formation, cells from each group (1 cell/μl) were trypsinized, counted, and cultured in ultra-low-attachment six-well plates (Corning Costar) with serum-free DMEM supplemented with B27 serum-free supplement (1:50; Invitrogen, Thermo Fisher Scientific, Inc.), 20 ng/ml human recombinant epidermal growth factor and 20 ng/ml basic fibroblast growth factor (bFGF) (all from PeproTech, Inc., Rocky Hill, NJ, USA). Spheroid formation was imaged and counted after 7 days of culture.
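The relative luciferase activity defined above is simply the firefly/Renilla quotient, expressed as a fold change against the control group. A minimal sketch with hypothetical plate-reader values (invented numbers, not the authors' data):

```python
def relative_luciferase(firefly, renilla):
    """Firefly signal normalized to the Renilla transfection control."""
    return [f / r for f, r in zip(firefly, renilla)]

# Hypothetical triplicate readings for control and ZNF750wt lysates:
ctrl = relative_luciferase([12000, 11500, 12400], [9000, 8800, 9100])
znf = relative_luciferase([4100, 3900, 4300], [8900, 9200, 9000])

fold_change = (sum(znf) / len(znf)) / (sum(ctrl) / len(ctrl))
print(f"E2F2 reporter activity, fold of control: {fold_change:.2f}")
```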
For colony formation, cells were seeded in six-well plates at low density (500 cells/well) in triplicate and cultured for 7 days. The plates were then washed with PBS and fixed with 4% paraformaldehyde for 30 min, followed by staining with 0.5% crystal violet for 1 min. After washing with PBS, images of each well were captured and colonies were counted with AlphaView (ProteinSimple, Santa Clara, CA, USA). The test was repeated three times. The cell invasion and migration assays were performed as we described previously [5,16]. For the invasion assay, Corning transwell chambers with polycarbonate membranes (8 μm pore size) were used, with Matrigel (BD Biosciences, Franklin Lakes, NJ, USA) as the substrate for invasion. Cells that invaded to the lower surface of the membrane were fixed with 4% formaldehyde, stained with 0.1% crystal violet, and visualized under light microscopy (CKX71, Olympus). The average number of invading cells per field was assessed with Image-Pro Plus 6 software. The migration assay used the same approach without Matrigel coating. Three independent experiments were performed. The protein expression of ZNF750, E2F2 and Ezh2 was investigated by western blot as described above. MMP13-positive cells were assayed by flow cytometry: stable cells from each group (1×10^6) were washed twice with ice-cold PBS, then resuspended and incubated in BD Cytofix/Cytoperm™ solution at 1×10^6 cells/0.5 ml for 20 min on ice. After washing twice with BD Perm/Wash™ buffer (1×), the cells were stained with 5 μl of primary rabbit anti-MMP13 (Abcam) at 37 °C for 20 min and then incubated with 5 μl of secondary antibody Alexa 488 goat anti-rabbit IgG (H+L) for 20 min. Samples were analyzed with BD FACSDiva software.

Statistical analysis

Values are expressed as means ± SD. Data were analyzed by one-way ANOVA followed by a post hoc Student-Newman-Keuls (SNK-q) test using the SPSS 23.0 statistical package. When comparing two conditions, data were analyzed by Student's t-test. P < 0.05 was considered a significant difference.

E2F2 and transcriptional repressors were repressed by ZNF750 in vivo and in vitro

The expression of ZNF750 was detected by qPCR and western blot in vivo and in vitro to confirm that stable cell lines had been obtained. ZNF750 mRNA expression was increased 4505.9-fold and 55.8-fold in CAL-27oeZNF750 cells and xenograft tumor samples, respectively, while knockdown of the ZNF750 gene led to 77.8% and 60% down-regulation of ZNF750 in CAL-27shZNF750 cells and xenograft tumor samples. Furthermore, ZNF750 protein expression was increased in the oe-ZNF750 groups and decreased in the sh-ZNF750 groups in vivo (Figure 1A-C). The transcriptional repressors (PHF19, Ezh2, EED, SUZ12) and UBE2C were all down-regulated at the mRNA or protein level by ZNF750, whereas they were all up-regulated in the sh-ZNF750 groups in vivo and in vitro (Figure 1D-G). Moreover, expression of the cell cycle activator E2F2 and the cell cycle regulator cyclin D1 was repressed in the oe-ZNF750 groups but increased in the sh-ZNF750 groups compared to their matched control groups (Figure 1E, 1H).

ZNF750 inhibited cell growth in vivo

To confirm the inhibitory effect of ZNF750 on tumor growth in vivo, we subcutaneously injected the stable cell lines CAL-27oeCon, CAL-27oeZNF750, CAL-27shCon and CAL-27shZNF750 into nude mice and measured tumor volume approximately weekly for 40 days.
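The statistical workflow described above (one-way ANOVA across several groups, Student's t-test for two-condition comparisons) can be sketched in Python with SciPy. The tumor weights below are invented for illustration, and the SNK-q post hoc test used in the paper is not available in SciPy (statsmodels' Tukey HSD is a commonly substituted post hoc test):

```python
from scipy import stats

# Hypothetical tumor weights (g), n = 5 per group:
oe_con = [1.10, 0.95, 1.20, 1.05, 1.12]
oe_znf750 = [0.45, 0.52, 0.40, 0.48, 0.50]
sh_con = [1.00, 1.08, 0.97, 1.15, 1.02]
sh_znf750 = [1.60, 1.72, 1.55, 1.68, 1.66]

# One-way ANOVA across the four groups:
f_stat, p_anova = stats.f_oneway(oe_con, oe_znf750, sh_con, sh_znf750)

# Two-condition comparison by Student's t-test:
t_stat, p_t = stats.ttest_ind(oe_con, oe_znf750)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4g}")
print(f"oe-Con vs oe-ZNF750 t-test: t={t_stat:.2f}, p={p_t:.4g}")
```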
The tumor formation rate in the nude mice was 100% (20/20). Tumor volumes were smaller in the oe-ZNF750 group than in the oe-Con group at each checkpoint, but larger in the sh-ZNF750 group than in the sh-Con group (Figure 2A, 2B). Furthermore, tumor weight was reduced in the oe-ZNF750 group but increased in the sh-ZNF750 group relative to the matched controls, indicating that ZNF750 had a significant inhibitory effect on tumor growth (Figure 2C, 2D). Moreover, expression of the proliferation markers Ki67 and PCNA in xenograft tumor samples was lower in the oe-ZNF750 group than in the oe-Con group, whereas it was higher in the sh-ZNF750 group than in the sh-Con group (Figure 2E, 2F). The Ki67 and PCNA levels in the oe-ZNF750 and sh-ZNF750 groups were consistent with the observed tumor growth.

ZNF750 repressed genes related to metastasis in vivo

The expression of the metastasis-related genes MMP1, MMP3, MMP7, MMP9, MMP13 and MMP17 and of their endogenous inhibitor, tissue inhibitor of metalloproteinase-1 (TIMP1), was studied to evaluate the inhibitory effect of ZNF750 on cell metastasis in vivo. Expression of the above-mentioned matrix metalloproteinases (MMPs) was reduced by ZNF750, whereas it was increased in the sh-ZNF750 groups compared to the matched controls. In contrast, the MMP inhibitor TIMP1 was down-regulated in the sh-ZNF750 groups but up-regulated in the oe-ZNF750 groups (Figure 3A-D).

ZNF750 inhibited E2F2 luciferase activity

The luciferase reporter assay showed that the relative luciferase activity of the E2F2 reporter was significantly decreased in the ZNF750wt group compared to the control group (p<0.01), indicating that ZNF750 negatively regulates E2F2 expression (Figure 4A).

E2F2 was involved in the inhibitory action of ZNF750 on cell proliferation, invasion and migration

ZNF750 suppressed cell proliferation, invasion and migration, as manifested by reduced cell viability (approximately 1.40-fold), tumor sphere number (3.34-fold), colony formation (2.33-fold), cell invasion (2.27-fold) and migration (3.58-fold) in the oe-ZNF750 group compared to the oe-Con group, whereas all of these were increased in the sh-ZNF750 group compared to the sh-Con group. Furthermore, E2F2 partly blocked the antitumor function of ZNF750, as manifested by increased cell proliferation, invasion and migration in the Z+E group compared to the oe-ZNF750 group. In contrast, co-transduction of sh-E2F2 with ZNF750 lentivirus further enhanced the antitumor function of ZNF750 in the Z+shE group (Figure 4B-E).

E2F2 blocked the depressed expression of Ezh2 and MMP13 induced by ZNF750

Western blot showed that E2F2 partly reversed the inhibitory effect of ZNF750 on expression of the transcriptional repressor Ezh2. Ezh2 protein expression was reduced in the ZNF750 groups but elevated in the sh-ZNF750 groups compared to the matched controls (Figure 5A, 5B). Moreover, compared to the ZNF750 groups, Ezh2 protein expression was elevated in the Z+E groups but slightly reduced in the Z+shE groups (Figure 5A, 5B).
Flow cytometry analysis further confirmed that MMP13 protein expression in CAL-27 cells was repressed (from 10.03% to 1.90%) by over-expression of ZNF750 but elevated (from 9.55% to 63.60%) by knockdown of the ZNF750 gene compared to the matched controls; more importantly, co-transduction of E2F2 with ZNF750 lentivirus partly blocked the inhibitory effect of ZNF750 on MMP13 protein expression (from 1.90% to 16.42%) compared to the oe-ZNF750 group (Figure 5C). (Figure 5C legend: MMP13 protein expression detected by flow cytometry; * p<0.05, ** p<0.01; groups 1: oe-control, 2: oe-ZNF750, 3: sh-control, 4: sh-ZNF750, 5: oe-sh-control, 6: Z+E, 7: Z+shE. E2F2, E2F transcription factor 2; Ezh2, enhancer of zeste 2; ZNF750, zinc finger protein 750.)

Discussion

Our previous RNA sequencing analysis (NCBI/GEO/GSE134835) found that the potential tumor suppressor ZNF750 inhibits expression of the cell cycle activator E2F2 and of transcriptional repressors (PHF19, Ezh2) [4]. In this study, we elucidated the antitumor mechanism of ZNF750 in OSCC in vivo and in vitro. The present study revealed that knockdown of the ZNF750 gene enhanced the expression of E2F2, transcriptional repressors and UBE2C in CAL-27 cells and in the nude mouse xenograft model; conversely, increased ZNF750 expression led to decreased expression of these genes in vivo and in vitro. These observations are consistent with our previous RNA sequencing analysis showing that the transcriptional repressors PHF19 and Ezh2 are repressed by ZNF750 [4]. PHF19 (PHD finger protein 19) is a PRC2-associated factor that forms sub-complexes with PRC2 core components to modulate the enzymatic activity of PRC2 [8]. PRC2 is the major H3K27 methyltransferase and is responsible for maintaining repressed gene expression patterns throughout development [18]. Thus, PHF19 functions as a transcriptional repressor and plays an important role in regulating transcription and histone demethylation [19]. Decreasing the level of PHF19 results in reduced H3K27me3, while over-expressing PHF19 leads to an increased H3K27me3 level [20]. The PRC2 complex is composed of a trimeric core of Ezh1/2, EED and SUZ12 and catalyzes the trimethylation of histone H3 at lysine 27 (H3K27me3) [21]. PHF19 can form PRC2 with Ezh2, EED and SUZ12, and knockdown of the PHF19 gene suppressed Ezh2 phosphorylation and proliferation in glioma cells [22]. Therefore, the increase of PHF19 together with Ezh2, EED and SUZ12, which form PRC2, in the sh-ZNF750 groups may result in abnormally repressed gene expression. Ezh2 is downstream of the pRB-E2F pathway, is amplified in cancer and is strongly associated with tumor proliferation and aggressiveness [23,24]; targeting Ezh2 markedly suppressed OSCC invasion [25]. Ezh2 can interact with ubiquitin-conjugating enzyme E2C (UBE2C) [26], and UBE2C is involved in head and neck tumorigenesis through cell cycle regulation [27]. The cell cycle inhibitor p21 causes growth arrest by inhibition of cyclin D1 [28], and Ezh2 suppresses the expression of p21 through histone methylation (H3K27me3) at the p21 promoter, resulting in cell proliferation. E2F2 is a member of the E2F family of transcription factors; in addition to their well-characterized roles in cell cycle control, E2F2 plays key roles in mediating tumor development and metastasis [29].
Our previous study revealed that the cell cycle is involved in the antitumor effect of ZNF750 [4]. The current results are consistent with the above studies: over-expression of ZNF750 was accompanied by reduced E2F2, PHF19, Ezh2, EED, SUZ12, UBE2C and cyclin D1, and vice versa for knockdown of the ZNF750 gene. Thus, the increased E2F2 and transcriptional repressors PHF19, Ezh2, EED and SUZ12 in the sh-ZNF750 groups could modulate repressive transcriptional activity, resulting in enhanced cyclin D1 expression and cell growth. In line with the above studies, we showed that changes in ZNF750 expression led to changes in tumor growth, manifested by decreased tumor volume and weight and reduced Ki67 and PCNA expression in the oe-ZNF750 groups, whereas all of these were increased in the sh-ZNF750 groups. Ki67 and PCNA are well-known proliferation markers, and the diminished Ki67 and PCNA levels in the oe-ZNF750 groups and the elevated levels in the sh-ZNF750 groups were consistent with the tumor growth observations in vivo. Metastasis has been identified as the main cause of the high mortality in OSCC [30]. MMPs and their endogenous inhibitors, TIMPs, regulate the degradation and synthesis of the extracellular matrix, which is critical for cancer cell metastasis; imbalances between MMPs and TIMPs lead to the progression of various diseases, including cancer [31]. Activated MMPs mediate matrix degradation, leading to tumor cell invasion [32], and TIMP1 has been reported to strongly inhibit MMP9 [31]. In the current study, a group of MMPs (MMP1, MMP3, MMP7, MMP9, MMP13 and MMP17) were all decreased and TIMP1 was increased in the ZNF750 groups, whereas the MMPs were elevated and TIMP1 was diminished in the sh-ZNF750 groups. These observations support an inhibitory effect of ZNF750 on cell metastasis. The present study also elucidated that E2F2 is down-regulated by ZNF750 and is involved in its antitumor action. The luciferase reporter assay showed that ZNF750 reduces the relative luciferase activity of the E2F2 reporter, confirming the negative regulation of E2F2 by ZNF750. Over-expression of E2F2 partly blocked the inhibitory effect of ZNF750 on cell proliferation, invasion, migration and on Ezh2 and MMP13 protein expression. Recent work has found that knockdown of the ZNF750 gene significantly promotes cell proliferation, colony formation, migration and invasion in esophageal squamous cell carcinoma cells [33]. In parallel with these findings in other cell types, which support a potential tumor suppressor role of ZNF750, the present study showed that ZNF750 attenuated cell invasion, migration and the MMP13-positive population. Moreover, the cell viability, colony-forming activity and self-renewal capacity of each group of CAL-27 cells paralleled the cancer cell proliferation in the nude mice. Furthermore, compared to the ZNF750 groups, cell proliferation, invasion and migration were augmented in the Z+E groups. More importantly, E2F2 could block the inhibitory function of ZNF750 on the transcriptional repressor Ezh2, as manifested by higher Ezh2 protein expression in the Z+E groups than in the oe-ZNF750 groups, while Ezh2 was further reduced in the Z+shE groups. Therefore, we postulate that ZNF750 may be a crucial regulator of Ezh2, leading to reduced tumor growth and metastasis, with E2F2 involved in this regulation.
Taken together, the present study revealed a possible novel mechanism underlying the malignant biological behavior caused by loss of function of ZNF750 in OSCC in vivo and in vitro, and indicated that ZNF750, as a tumor suppressor, plays a vital role in regulating transcriptional repressors and that this function is related to the suppression of E2F2 expression by ZNF750. The detailed mechanism by which E2F2 participates in ZNF750-mediated tumor suppression in OSCC will be investigated in future work.
Moon IME: Neural-based Chinese Pinyin Aided Input Method with Customizable Association

A Chinese pinyin input method engine (IME) lets users conveniently input Chinese into a computer by typing pinyin on a common keyboard. In addition to offering high conversion quality, a modern pinyin IME is supposed to aid user input with extended association functions. However, existing solutions for such functions are roughly based on oversimplified word-level matching algorithms, and the resulting products provide only limited extensions associated with user inputs. This work presents Moon IME, a pinyin IME that integrates an attention-based neural machine translation (NMT) model and information retrieval (IR) to offer engaging and customizable association abilities. The released IME is implemented on Windows via the Text Services Framework.

Introduction

Pinyin is the official romanization of Chinese, and pinyin-to-character (P2C) conversion, which converts an input pinyin sequence into a Chinese character sequence, is the core module of all pinyin-based IMEs. Previous works in the literature focus only on pinyin-to-character conversion itself, paying less attention to the user experience of associative features, let alone predictive typing or automatic completion. However, more agile association outputs from IME prediction may lead to an incomparable typing experience, which motivates this work. Modern IMEs are supposed to extend P2C with association functions that additionally predict the next series of characters the user is attempting to enter. Such extended IME capacity generally falls into two categories: auto-completion and follow-up prediction. The former looks up all possible phrases that might match the user input even when the input is incomplete. For example, when receiving the pinyin syllable "bei", the auto-completion module will predict "北京" (beijing, Beijing) or "背景" (beijing, background) as a word-level candidate. The second scenario is when a user completes entering a set of words, in which case the IME presents appropriate collocations for the user to choose. For example, after the user selects "北京" (Beijing) from the candidate list in the above example, the IME shows a list of collocations that follow the word Beijing, such as "市" (city) or "奥运会" (Olympics). This paper presents Moon IME, a pinyin IME engine with an association cloud platform, which integrates an attention-based NMT model with diverse associations to enable a customizable and engaging typing experience. Compared to its existing counterparts, Moon IME offers the following advantages:

• It is the first attempt to adopt an attentive NMT method for P2C conversion in both IME research and engineering.
• It provides a general association cloud platform which contains follow-up prediction and machine translation modules for typing assistance.
• With an information retrieval based module, it realizes fast and effective auto-completion which helps users type sentences in a more convenient and efficient manner.
• With a powerful customizable design, the association cloud platform can be adapted to specific domains, such as law and medicine, which contain complex specialized terms.

[Figure 1: Architecture of the proposed Moon IME.]

The rest of the paper is organized as follows: Section 2 demonstrates the details of our system.
Section 3 presents the feature functions of our realized IME. Some related works are introduced in Section 4. Section 5 concludes this paper.

Figure 1 illustrates the architecture of Moon IME. Moon IME is based on the Windows Text Services Framework (TSF), a system service available as a redistributable for Windows 2000 and later versions of the Windows operating system; a TSF text service provides multilingual support and delivers text services such as keyboard processors, handwriting recognition, and speech recognition. Our Moon IME extends the open-source project PIME (https://github.com/EasyIME/PIME) with three main components: a) pinyin text segmentation, b) a P2C conversion module, and c) an IR-based association module. The core of our work is an engine that stably converts pinyin to Chinese and gives reasonable association lists.

Input Method Engine

Pinyin Segmentation. For convenient reference, hereafter a character in pinyin also refers to an independent syllable where this causes no confusion, and a word means a pinyin syllable sequence corresponding to a true Chinese word. As (Zhang et al., 2017) proves, P2C conversion in an IME may benefit from decoding longer pinyin sequences for more efficient input: when a given pinyin sequence becomes longer, the list of corresponding legal character sequences shrinks significantly. We therefore train our P2C model on segmented corpora. We used baseSeg (Zhao et al., 2006) to segment all text, and trained at both word level and character level.

NMT-based P2C module. Our P2C module is implemented with the OpenNMT toolkit, as we formulate P2C as a translation between pinyin and character sequences. Given a pinyin sequence X and a Chinese character sequence Y, the encoder of the P2C model encodes the pinyin representation at word level, and the decoder generates the target Chinese sequence that maximizes P(Y|X) using maximum likelihood training. The encoder is a bi-directional long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997). The vectorized inputs are fed to a forward LSTM and a backward LSTM to obtain the internal features of the two directions, and the output for each input is the concatenation of the two vectors from both directions: $h_k = [\overrightarrow{h_k}; \overleftarrow{h_k}]$. Our decoder is based on the global attentional model proposed by (Luong et al., 2015), which takes the hidden states of the encoder into consideration when deriving the context vector; the probability is conditioned on a distinct context vector for each target word, and the context vector is computed as a weighted sum of previous hidden states. The probability of each candidate word being the recommended one is predicted using a softmax layer over the inner product between source and candidate target characters. Our model is initially trained on two datasets, the People's Daily (PD) corpus and the Douban (DC) corpus. The former is extracted from the People's Daily from 1992 to 1998 and has word segmentation annotations by Peking University. The DC corpus was created by (Wu et al., 2017) from Chinese open-domain conversations; one sentence of the DC corpus contains one complete utterance in a continuous dialogue situation. The statistics of the two datasets are shown in Table 1.
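The bidirectional encoding described above (forward and backward LSTM states concatenated at each position) can be sketched in a few lines of PyTorch. This is my illustration of the idea only; the authors built their model with OpenNMT, and the vocabulary size here is invented:

```python
import torch
import torch.nn as nn

class PinyinEncoder(nn.Module):
    """Bi-directional LSTM encoder: each position is represented by the
    concatenation of the forward and backward hidden states."""
    def __init__(self, vocab_size, emb_dim=500, hidden=500):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # 3 layers, 500 cells and dropout 0.3 mirror the reported setup.
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=3, dropout=0.3,
                            bidirectional=True, batch_first=True)

    def forward(self, pinyin_ids):
        # (batch, seq) -> (batch, seq, 2 * hidden); the two directions
        # are already concatenated in the LSTM output.
        outputs, _ = self.lstm(self.embed(pinyin_ids))
        return outputs

encoder = PinyinEncoder(vocab_size=8000)  # hypothetical pinyin vocabulary
states = encoder(torch.randint(0, 8000, (2, 7)))
print(states.shape)  # torch.Size([2, 7, 1000])
```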
With character text available, the needed parallel corpus between pinyin and character text is automatically created following the approach proposed by (Yang et al., 2012). The hyperparameters we used are as follows: deep LSTM models with 3 layers and 500 cells; 13 epochs of training with plain SGD and a simple learning rate schedule (start with a learning rate of 1.0 and, after 9 epochs, halve the learning rate every epoch); shuffled mini-batches of size 64; and dropout of 0.3. The pre-trained pinyin embeddings and Chinese word embeddings are trained with the word2vec toolkit (Mikolov et al., 2013) on Wikipedia, and unseen words are assigned unique random vectors.

IR-based association module. We use the IR-based association module to help users type long sentences; it predicts the whole expected input according to the similarity between the user's incomplete input and the candidates in a corpus containing massive numbers of sentences. In this work, we use Term Frequency-Inverse Document Frequency (TF-IDF) as the similarity measurement, as commonly used in text classification and information retrieval. The TF (term frequency) term is simply a count of the number of times a word appears in a given context, while the IDF (inverse document frequency) term puts a penalty on how often the word appears elsewhere in the corpus. The final TF-IDF score is the product of these two terms, formulated as $\mathrm{tfidf}(w, d) = f(w, d) \cdot \log \frac{N}{|\{d' : w \in d'\}|}$, where f(w, d) indicates the number of times word w appears in context d, N is the total number of dialogues, and the denominator represents the number of dialogues in which the word w appears. In the IME scenario, TF-IDF vectors are first calculated for the input context and for each of the candidate responses from the corpus. Given a set of candidate response vectors, the one with the highest cosine similarity to the context vector is selected as the output; for Recall@k, the top k candidates are returned. In this work, we only make use of the top 1 match.

High Quality of P2C. We utilize Maximum Input Unit (MIU) accuracy (Zhang et al., 2017) to evaluate the quality of our P2C module by measuring the conversion accuracy of MIUs, where an MIU is defined as the longest uninterrupted Chinese character sequence inside a sentence. As P2C conversion aims to output a ranked list of candidate character sequences, the top-K MIU accuracy is the probability of hitting the target within the first K predicted items; we follow the definition of top-K accuracy from (Zhang et al., 2017). Our model is compared to other models in Table 2. So far, (Huang et al., 2015) and (Zhang et al., 2017) have reported the state-of-the-art results among statistical models. We list our top-5 accuracy against all baselines with their top-10 results, and the comparison indicates the noticeable advancement of our P2C model. To our surprise, the top-5 result of our P2C module on PD approaches the top-10 accuracy of Google IME. On the DC corpus, the P2C module with the best setting achieves 90.17% accuracy, surpassing all baselines. This comparison shows the high quality of our P2C conversion.

Association Cloud Platform

Follow-up Prediction. Accurate P2C conversion is only the basic requirement for an intelligent IME, which should also help users type sentences in a more convenient and efficient manner. To this end, follow-up prediction is necessary for input acceleration.
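Both auto-completion and the follow-up prediction described next rest on the TF-IDF retrieval module above: vectorize a candidate store, then rank candidates by cosine similarity to the partial input. A minimal sketch using scikit-learn over a toy candidate list (an assumed library and invented data, not the released implementation):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy sentence store standing in for the Wikipedia-scale index:
candidates = [
    "fast fourier transform",
    "fast food restaurants nearby",
    "fourier series convergence",
    "input method engine design",
]
vectorizer = TfidfVectorizer()
candidate_vectors = vectorizer.fit_transform(candidates)

def complete(prefix, top_k=1):
    """Rank stored sentences by TF-IDF cosine similarity to the prefix."""
    sims = cosine_similarity(vectorizer.transform([prefix]),
                             candidate_vectors)[0]
    best = sims.argsort()[::-1][:top_k]
    return [(candidates[i], round(float(sims[i]), 3)) for i in best]

print(complete("fast fourier"))  # -> [('fast fourier transform', ...)]
```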
Given an unfinished input, Moon IME enables follow-up prediction to help the user complete the typing. For example, given "快速傅里叶" (fast Fourier), the IME engine will provide the candidate "快速傅里叶变换" (fast Fourier transform). Specifically, we extract each sentence in the Wikipedia corpus, use the IR-based association module to retrieve from the continuously built index, and give the best-matched sentence as the prediction.

Pinyin-to-English Translation. Our Moon IME is also equipped with multi-lingual typing ability. For users of different language backgrounds, a satisfying conversation can benefit from direct translation in the IME engine. For example, if a Chinese user chatting with a native English speaker through our IME is unsure how to say "input method engine", simply typing "输入法引擎" in the mother tongue makes the IME give the translated expression. This is achieved by training a Seq2Seq model with OpenNMT on the WMT17 Chinese-English dataset (http://www.statmt.org/wmt17/translation-task.html).

Factoid Question Answering. As an instance of the IR-based association module, we make use of a question answering (QA) corpus for automatic question completion. Intuitively, if a user wants to raise a question, our IME retrieves the most closely matched question in the corpus, along with the corresponding answer, for typing reference. We use the WebQA dataset (Li et al., 2016) as our QA corpus, which contains more than 42K factoid question-answer pairs. For example, if a user inputs "吉他" (guitar) or "吉他弦" (guitar strings), the candidate "吉他有几根弦" (How many strings are there in the guitar?) is suggested.

Figure 2 shows a typical result returned by the platform when a user gives incomplete input. When the user inputs a pinyin sequence such as "zui da de ping", the P2C module returns the corresponding Chinese prefix as one candidate in the generated list and sends it to the association platform. Associative prediction is then given according to the input mode the user currently selects. Since the demands of users are quite diverse, our platform can be adapted to specific domains with complex specialized terms. We provide a demo homepage for reference, on which we display the main feature functions of our platform and provide a download link.

Related Work

There is a variety of natural language processing work (Cai et al., 2018; Cai et al., 2017a,b) relevant to IME development. Most engineering practice focuses on the matching correspondence between pinyin and Chinese characters, namely pinyin-to-character conversion with the highest accuracy. (Chen, 2003) introduced a conditional maximum entropy model with syllabification for grapheme-to-phoneme conversion. (Zhang et al., 2006) presented a rule-based error correction approach to improve the conversion rate. (Lin and Zhang, 2008) presented a statistical model that associates a word with supporting context to offer a better solution to Chinese input. (Jiang et al., 2007) put forward a P2C framework based on support vector machines. (Okuno and Mori, 2012) introduced an ensemble of word-based and character-based models for Japanese and Chinese IMEs. (Yang et al., 2012; Wang et al., 2018; Pang et al., 2016; Zhao, 2013, 2014) regarded P2C conversion as a transformation between two languages and solved it within a statistical machine translation framework. (Chen et al., 2015) first used a neural machine translation method to convert pinyin to Chinese.
(Zhang et al., 2017) introduced an online algorithm to construct an appropriate dictionary for an IME. The recent trend in state-of-the-art techniques for Chinese input methods falls into two lines: speech-to-text input, as in iFly IM (Saon et al., 2014), and aided input methods capable of generating candidate sentences for users to choose from to complete input tasks, meaning that users can yield coherent text with fewer keystrokes. The challenge is that the input pinyin sequences are too imperfect to support sufficient training. Most existing commercial input methods offer auto-completion as well as extended association functions to aid user input; however, the association functions of existing commercial IMEs remain unsatisfactory in terms of relevance.

Conclusion

To the best of our knowledge, this work makes the first attempt at establishing a general cloud platform to provide customizable association services for Chinese pinyin IMEs. We present Moon IME, a pinyin IME that contains a high-quality P2C module and an extended information retrieval based module: the former is based on an attention-based NMT model, and the latter contains follow-up prediction and machine translation modules for typing assistance. With a powerful customizable design, the association cloud platform can be adapted to specific domains, including those with complex specialized terms. Usability analysis shows that the core engine achieves conversion quality comparable to state-of-the-art research models, and that the association function is stable and can be well adopted by a broad range of users. It is particularly convenient for predicting complete, additional and even corrected character outputs, especially when the user input is incomplete or incorrect.
Limit periodic linear difference systems with coefficient matrices from commutative groups

In this paper, limit periodic and almost periodic homogeneous linear difference systems are studied. The coefficient matrices of the considered systems belong to a given commutative group. We find a condition on the group under which the systems whose fundamental matrices are not almost periodic form an everywhere dense subset in the space of all considered systems. The treated problem is discussed for elements of the coefficient matrices from an arbitrary infinite field with an absolute value. Nevertheless, the presented results are new even for the field of complex numbers.

Introduction

For a given commutative group X, we intend to analyse the homogeneous linear difference systems

x_{k+1} = A_k • x_k, k ∈ Z, {A_k}_{k∈Z} ⊆ X. (1.1)

We will consider limit periodic and almost periodic systems (1.1), which means that the sequence of the matrices A_k will be limit periodic or almost periodic. The basic motivation of this paper comes from [29,35]. In [29] (see also [26]), systems (1.1) are studied for X being the unitary group, and it is proved that, in any neighbourhood of an almost periodic system (1.1), there exist almost periodic systems (1.1) whose fundamental matrices are not almost periodic. The corresponding results about orthogonal difference systems and about skew-Hermitian and skew-symmetric differential systems can be found in [30], [32], and [34] (see also [27]), respectively. For results concerning almost periodic solutions, we refer to [16,17,28,30], where unitary, orthogonal, skew-Hermitian, and skew-symmetric systems are analysed. In our previous works [13,33], the above mentioned result of [29] is improved for a general (weakly) transformable group X. We remark that the process from [29] cannot be applied to the commutative groups of coefficient matrices treated in this paper.

In [35], the study of non-almost periodic solutions of limit periodic systems (1.1) was initiated and the so-called property P was introduced. The concept of groups with property P leads to results of the same type as the main results of [13,33]. It should be noted that only bounded groups of matrices are treated in [35]. The goal of this paper is to prove, for other groups of matrices, that in any neighbourhood of a system (1.1) there exist systems (1.1) which have at least one non-almost periodic solution. Moreover, we deal with the corresponding Cauchy problems. For this purpose, we generalize the notion of property P (we introduce property P with respect to a given non-trivial vector) and use this generalization to obtain the announced results for groups which can be unbounded. Especially, for the used modification of property P, it holds that any group which contains a group with the generalized property has this property as well.

The fundamental properties of limit periodic and almost periodic sequences and functions can be found in many monographs (see, e.g., [4,10,18,24]). Almost periodic solutions of almost periodic linear difference equations are studied in the articles [6,7,8,12,14,37]. Properties of complex almost periodic systems (1.1) are discussed, e.g., in [3,15,23]. In the situation when the index k attains only positive values, linear almost periodic equations are treated, e.g., in [1,25]. To the best of our knowledge, the first result about non-almost periodic solutions of homogeneous linear difference equations was obtained in [11].
We prove the announced results using constructions of limit periodic sequences. This approach is motivated by the continuous case (special constructions of homogeneous linear differential systems with almost periodic coefficients are used, e.g., in [19,20,21,22,32,34]). Note that the process applied in this paper is substantially different from the ones in all the above mentioned papers. Hence, we obtain new results even for almost periodic systems and bounded groups of coefficient matrices.

This paper is organized as follows. In the next section, we mention the notation which is used throughout the whole paper. Then, in Section 3, we define limit periodic, almost periodic, and asymptotically almost periodic sequences and state those of their properties which we will need later. In Section 4, we treat the considered homogeneous linear difference systems, where we recall the definitions and results which motivate our recent research and which give the necessary background of the studied problems. In the final section, we formulate and prove our results, which are commented on in several remarks.

Preliminaries

At first, we mention the used notation, which is similar to the one from [35]. For arbitrary p ∈ N, we put pN := {pj : j ∈ N}. Let (F, ⊕, ⊙) be an infinite field and let | · | : F → R be an absolute value on F; i.e., let |f| ≥ 0 for all f ∈ F with |f| = 0 if and only if f is the zero element, |f ⊙ g| = |f| · |g|, and |f ⊕ g| ≤ |f| + |g| for all f, g ∈ F.

Theorem 3.6. Let {ϕ_k}_{k∈Z} ⊆ S be given. The sequence {ϕ_k} is almost periodic if and only if any sequence {l_n}_{n∈N} ⊆ Z has a subsequence {l̄_n}_{n∈N} ⊆ {l_n} such that, for any ε > 0, there exists K(ε) ∈ N satisfying ϱ(ϕ_{k+l̄_i}, ϕ_{k+l̄_j}) < ε for all i, j ≥ K(ε) and k ∈ Z, where ϱ denotes the metric on S.

Proof. See, e.g., [31, Theorem 2.3].

Corollary 3.7. Let p ∈ N be arbitrarily given and let {ϕ_k}_{k∈Z} ⊆ S be almost periodic. For any ε > 0, the set of all ε-translation numbers l ∈ pN of {ϕ_k} is infinite.

Proof. It suffices to apply Theorem 3.6 for l_n := pn, n ∈ N. Indeed, it holds that the differences l̄_i − l̄_j ∈ pN for i > j ≥ K(ε) are ε-translation numbers of {ϕ_k}.

Using Theorem 3.6 n times, we also obtain the following result.

Corollary 3.8. A sequence {(ϕ^1_k, …, ϕ^n_k)}_{k∈Z} is almost periodic if and only if all the sequences {ϕ^1_k}, …, {ϕ^n_k} are almost periodic.

Definition 3.9. We say that a sequence {ϕ_k}_{k∈Z} ⊆ S is asymptotically almost periodic if, for every ε > 0, there exist r(ε), R(ε) ∈ N such that any set consisting of r(ε) consecutive integers contains at least one number l satisfying ϱ(ϕ_{k+l}, ϕ_k) < ε for all k ≥ R(ε).

Remark 3.10. Considering Theorem 3.5, we know that any limit periodic sequence is almost periodic. In addition, any almost periodic sequence is evidently asymptotically almost periodic. Note that, in Banach spaces, a sequence is asymptotically almost periodic if and only if it can be expressed as the sum of an almost periodic sequence and a sequence vanishing at infinity (see, e.g., [36, Chapter 5]).
Homogeneous linear difference systems over a field

In this section, we describe the studied systems in more detail. Let X ⊂ Mat(F, m) be an arbitrarily given group. We recall that we will analyse the homogeneous linear difference systems (1.1). Let LP(X) denote the set of all systems (1.1) for which the sequence of matrices A_k is limit periodic. Analogously, the set of all almost periodic systems (1.1) will be denoted by AP(X). Especially, we can identify the sequence {A_k} with the system in the form (1.1) which is determined by {A_k}. In AP(X), we introduce the metric

σ({A_k}, {B_k}) := sup_{k∈Z} ‖A_k − B_k‖, {A_k}, {B_k} ∈ AP(X).

Henceforth, the symbol O^σ_ε({A_k}) will denote the ε-neighbourhood of {A_k} in AP(X). Now we recall a definition from [35] which is used in the formulations of the below given Theorems 4.2 and 4.3 (for their proofs, see [35]). We point out that Theorems 4.2 and 4.3 are the basic motivation for our current research.

Definition 4.1. We say that X has property P if there exists ζ > 0 and if, for all δ > 0, there exists l ∈ N such that, for any vector u ∈ F^m satisfying ‖u‖ ≥ 1, one can find matrices N_1, N_2, …, N_l ∈ X with the property that

Theorem 4.2. Let X be bounded and have property P. For any {A_k} ∈ LP(X) and ε > 0, there exists a system {B_k} ∈ O^σ_ε({A_k}) ∩ LP(X) which does not have any non-zero asymptotically almost periodic solution.

Theorem 4.3. Let X be bounded and have property P. For any {A_k} ∈ AP(X) and ε > 0, there exists a system {B_k} ∈ O^σ_ε({A_k}) which does not have any non-zero asymptotically almost periodic solution.

In this paper, we intend to improve the above theorems. To show how the presented results improve Theorems 4.2 and 4.3, we need to reformulate Definition 4.1 for bounded groups applying the next two lemmas (which we will need later as well).

Lemma 4.4. Let p ∈ N be given. The multiplication of p matrices is continuous in the Lipschitz sense on any bounded subset of Mat(F, m).

Proof. Let K > 0 be given. Since the addition and the multiplication have the Lipschitz property on the set of f ∈ F satisfying |f| < K, the statement of the lemma is true.

Lemma 4.5. Let a bounded group X ⊆ Mat(F, m) be given. There exists L > 1 such that

‖M^{−1} − N^{−1}‖ ≤ L · ‖M − N‖, M, N ∈ X. (4.1)

Proof. We know that the inequality ‖M‖ < K, M ∈ X, holds for some K > 0. The map f ↦ −f, the multiplication, and the addition have the Lipschitz property on the set of all f ∈ F satisfying |f| < K. In addition, for any M ∈ X, the value det M is obtained from the entries of M by finitely many additions and multiplications. Hence, the map M ↦ det M has the Lipschitz property as well. Let a matrix M ∈ X be given. If we use the expression m^{−1}_{i,j} = M_{j,i} ⊙ (det M)^{−1}, where m^{−1}_{i,j} are the elements of M^{−1} ∈ X and M_{j,i} are the algebraic complements of the elements m_{j,i} of M, then it is seen that the map M ↦ M^{−1} is continuous in the Lipschitz sense on X. Evidently, Lemma 4.4 and the Lipschitz continuity of M ↦ M^{−1} on X imply the existence of L > 1 for which (4.1) is valid.

Using Lemmas 4.4 and 4.5 for bounded X, we can rewrite Definition 4.1 as follows.

Definition 4.6. A bounded group X ⊂ Mat(F, m) has property P if there exists ζ > 0 and if, for all δ > 0, there exists l ∈ N such that, for any vector u ∈ F^m satisfying ‖u‖ ≥ 1, one can find matrices
M_1, M_2, …, M_l ∈ X with the property that

To formulate the obtained results in a simple and consistent form, we introduce the following direct generalization of Definition 4.6.

Definition 4.7. Let a non-zero vector u ∈ F^m be given. We say that X has property P with respect to u if there exists ζ > 0 such that, for all δ > 0, one can find matrices

Remark 4.8. Since a group with property P has property P with respect to any non-zero vector u (compare Definitions 4.6 and 4.7), we can refer to the many examples of matrix groups with property P mentioned in our previous paper [35]. In [35], the following implication is also proved: if a complex transformable matrix group contains a matrix M satisfying Mu ≠ u for a vector u ∈ C^m, then the group has property P with respect to u. Thus, concerning examples of groups having property P with respect to a given vector, we can also refer to our articles [13,33], where (weakly) transformable groups are studied. Furthermore, we point out that any group which contains a subgroup having property P with respect to a vector u has property P with respect to u as well.

Results

Henceforth, we will assume that X is commutative. To prove the announced result (the below given Theorem 5.3), we use Lemmas 5.1 and 5.2.

Lemma 5.1. Let {A_k} ∈ LP(X) and ε > 0 be arbitrarily given. Let {δ_n}_{n∈N} ⊂ R be a decreasing sequence satisfying

lim_{n→∞} δ_n = 0. (5.1)

Proof. Condition (5.3) means that, for any k ∈ Z, there exists i ∈ N such that B_k = A_k • B^i_k. Especially, the definition of {B_k}_{k∈Z} is correct. We show that {B_k} is limit periodic. Since {A_k} is limit periodic and A_k ∈ X, k ∈ Z, there exist periodic sequences {C^n_k}_{k∈Z} ⊂ X for n ∈ N with the property that they approximate {A_k}. Hence (see (5.2), (5.3), (5.6)), we obtain the corresponding estimates for all k ∈ Z, n ∈ N. Considering (5.1), we get that {B_k} is the uniform limit of the sequence of periodic sequences for some i ∈ N and for all k ∈ Z. Thus (see (5.4)), we obtain (5.7).
Lemma 5.2. If, for any δ > 0 and K > 0, there exist matrices M_1, M_2, …, M_l ∈ X satisfying the corresponding condition, then, for any {A_k} ∈ LP(X) and ε > 0, there exists a system {B_k} ∈ O^σ_ε({A_k}) ∩ LP(X) whose fundamental matrix is not almost periodic.

Proof. We can assume that all solutions of {A_k} are almost periodic. Especially (consider Corollary 3.8), for any ϑ > 0, there exist infinitely many positive integers p with the property stated in (5.8). Let {δ_n}_{n∈N} ⊂ R be a decreasing sequence satisfying (5.1) and (5.4). For δ_n and K_n := n, n ∈ N, we consider the corresponding matrices. Let a sequence of positive numbers ϑ_n, n ∈ N, be given. Let us consider p^1_1, p^1_2 ∈ N such that p^1_2 − p^1_1 > 2l_1 and that (5.11) is valid (see (5.8)). In addition, let p^1_1 and p^1_2 be even (consider Corollary 3.7). We define the periodic sequence {B^1_k}_{k∈Z} with period p^1_2.

Again, we can assume that, for any ϑ > 0, there exist infinitely many positive integers p with the property stated in (5.12). Otherwise, we obtain the system {B_k} ≡ {B̃^1_k} with a non-almost periodic solution. Indeed, it suffices to consider Lemma 5.1 (see (5.12)). Especially, for all k ∈ Z, there exists i ∈ {1, 2} such that B_k = A_k • B^i_k.

We continue in the same manner. Let us assume that all of the obtained systems {B̃^j_k}_{k∈Z} have only almost periodic solutions. Thus, for every ϑ > 0 and j ∈ N, one can find infinitely many suitable positive integers. In the n-th step, we consider {B̃^{n−1}_k} and (5.15). Finally, we put together the sequence {B_k}. From the construction, we obtain that, for any k ∈ Z, there exists i ∈ N such that B_k = A_k • B^i_k. It means that (5.3) is satisfied. Since (5.2) follows from the construction and from (5.9), we can use Lemma 5.1, which guarantees that {B_k} ∈ O^σ_ε({A_k}) ∩ LP(X). It remains to prove that the fundamental matrix of {B_k} is not almost periodic. On the contrary, let us assume its almost periodicity. Then, the fundamental matrix is bounded (see Remark 3.4); i.e., there exists K_0 > 0 with the property stated in (5.16). Let us choose n ∈ N for which n ≥ K_0 + 1. We repeat that the multiplication of matrices is continuous (see also Lemma 4.4). Hence, for the given matrix, (5.17) holds. We can assume that ϑ_n = θ_n in (5.14) (see also (5.11), (5.13)). We construct the corresponding sequences and obtain (5.18). This contradiction (cf. (5.16) and (5.18)) completes the proof.

Theorem 5.3. Let X have property P with respect to a vector u. For any {A_k} ∈ LP(X) and ε > 0, there exists a system {B_k} ∈ O^σ_ε({A_k}) ∩ LP(X) whose fundamental matrix is not almost periodic.

Proof. Let us consider the solution {x^0_k}_{k∈Z} of the Cauchy problem x_{k+1} = A_k • x_k, k ∈ Z, x_0 = u. If {x^0_k} is not almost periodic, then the statement of the theorem is true for B_k := A_k, k ∈ Z. Hence, we assume that {x^0_k} is almost periodic. We proceed step by step for n ∈ N. Before the n-th step, we define {B̃^{n−1}_k}. Again, we consider the case that the sequence {x^{n−1}_k} is almost periodic; otherwise, we can put B_k := B̃^{n−1}_k, k ∈ Z. Especially, for all p ∈ N, there exist infinitely many numbers j ∈ pN with the property stated in (5.37). Let us consider an integer j^{(1,n−1)} ∈ p_n N satisfying (5.37) and j^{(1,n−1)} ≥ q_{n−1} (5.40). We define {B^{(1,n)}_k}: in the first case, we put B^{(1,n)}_k := I, k ∈ Z; in the other case, we put a suitable non-trivial value and consider the solution {x^{(1,n)}_k}. Again, we assume that {x^{(1,n)}_k} is almost periodic. Especially, for all i ≠ j, i, j ∈ N, there exists l ∈ Z such that (5.50) holds. This contradiction (consider (5.50) for 2ξ ≤ ϑ) proves that {x_k} is not almost periodic.

Remark 5.4. It is seen that the statement of Theorem 5.3 does not change if we replace the system {A_k} ∈ LP(X) by a periodic one. Indeed, this follows directly from Definition 3.1.
Remark 5.5. To illustrate Theorem 5.3, let us consider an arbitrary periodic system {M_k} in the complex case (i.e., for F = C with the usual absolute value). It means that we have the system

x_{k+1} = M_{k mod p} • x_k, k ∈ Z,

for a positive integer p and arbitrarily given non-singular complex matrices M_0, M_1, …, M_{p−1}. It is well known that a solution of {M_k} is almost periodic if and only if it is bounded (see, e.g., [33, Corollary 3.9] or [35, Theorem 5]). The fundamental matrix Φ(k, 0) of {M_k} satisfying Φ(0, 0) = I is given by Φ(k, 0) = M_{k−1} • M_{k−2} • ⋯ • M_0, k ∈ N. Thus, to describe the structure of the almost periodic solutions, it suffices to consider the values at the multiples of p and, in fact, the constant system given by the matrix M := M_{p−1} • ⋯ • M_1 • M_0. For any constant system given by a non-singular complex matrix M, one can easily find a commutative matrix group X containing M and having property P with respect to a vector (e.g., one can consider the group generated by the matrices cM for all complex numbers c = sin l + i cos l, l ∈ Z). Applying Theorem 5.3, we know that, in any neighbourhood of the considered system, there exists a limit periodic system whose coefficient matrices are from the group and whose fundamental matrix is not almost periodic. In addition, such a limit periodic system can be found for any commutative group X which contains M and which has property P with respect to at least one vector.

Remark 5.6. We repeat that the basic motivation of this paper comes from [35], where non-asymptotically almost periodic solutions of limit periodic systems are considered. Of course, only systems with coefficient matrices from bounded groups are analysed in [35]. For general groups, it is not possible to prove the main results of [35], i.e., Theorems 4.2 and 4.3. It suffices to consider the constant system given by the matrix I/2 in the complex case. Any solution {x_k}_{k∈Z} of this system has the property that lim_{k→∞} ‖x_k‖ = 0. Thus, there exists a neighbourhood of the system such that, for any solution {y_k}_{k∈Z} of an almost periodic system from the neighbourhood, we obtain lim_{k→∞} y_k = 0, which gives the asymptotic almost periodicity of {y_k} (see Remark 3.10). At the same time, in [35], it is required that the studied matrix group has property P. Since the group X is required to have property P only with respect to one vector in the statement of Theorem 5.3, we can apply this theorem to groups of matrices in a suitable block form, where X is taken from a commutative matrix group having property P with respect to a concrete vector. In this sense, Theorem 5.3 generalizes Theorem 4.2 as well.

The construction from the proof of Theorem 5.3 can be applied to the Cauchy (initial) problem. Especially, we immediately obtain the following result.

Theorem 5.7. Let a non-zero vector u ∈ F^m be given. Let X have the property that there exist ζ > 0 and K > 0 such that, for all δ > 0, one can find matrices M_1, M_2, …, M_l ∈ X satisfying (5.61). For any {A_k} ∈ LP(X) and ε > 0, there exists a system {B_k} ∈ O^σ_ε({A_k}) ∩ LP(X) for which the solution of x_{k+1} = B_k • x_k, k ∈ Z, x_0 = u is not almost periodic.

Proof. The theorem follows from the proof of Theorem 5.3, where (5.61) is satisfied (i.e., the case which is covered by Lemma 5.2 does not happen).
Similarly to Theorem 4.3, which is the almost periodic version of Theorem 4.2, we formulate the below given Theorem 5.10 as the almost periodic version of Theorem 5.3. To prove it, we need the next two lemmas.

Lemma 5.8. Let {A_k} ∈ AP(X) and ε > 0 be arbitrarily given. Let {δ_n}_{n∈N} ⊂ R be a decreasing sequence satisfying (5.1), and let {B^n_k}_{k∈Z} ⊂ X be periodic sequences for n ∈ N such that (5.2) and (5.3) are valid. Then, {B_k} ∈ AP(X) if the corresponding condition holds. In addition, {B_k} ∈ O^σ_ε({A_k}) if (5.4) is fulfilled.

Proof. The lemma can be proved analogously to Lemma 5.1.

Using the same method as in the proof of Lemma 5.2, we can prove its almost periodic counterpart; indeed, we do not use the limit periodicity of {A_k} in the proof (consider also Lemma 5.8).

Lemma 5.9. If, for any δ > 0 and K > 0, there exist matrices M_1, M_2, …, M_l ∈ X satisfying the corresponding condition, then, for any {A_k} ∈ AP(X) and ε > 0, there exists a system {B_k} ∈ O^σ_ε({A_k}) whose fundamental matrix is not almost periodic.

Theorem 5.10. Let X have property P with respect to a vector. For any {A_k} ∈ AP(X) and ε > 0, there exists a system {B_k} ∈ O^σ_ε({A_k}) whose fundamental matrix is not almost periodic.

Proof. The theorem can be proved using the same construction as Theorem 5.3. It suffices to replace Lemma 5.1 by Lemma 5.8 and Lemma 5.2 by Lemma 5.9.

Analogously, we get the following result as well.

Theorem 5.11. Let a non-zero vector u ∈ F^m be given. Let X have the property that there exist ζ > 0 and K > 0 such that, for all δ > 0, one can find matrices M_1, M_2, …, M_l ∈ X satisfying the analogue of (5.61). For any {A_k} ∈ AP(X) and ε > 0, there exists a system {B_k} ∈ O^σ_ε({A_k}) for which the solution of x_{k+1} = B_k • x_k, k ∈ Z, x_0 = u is not almost periodic.

Remark 5.12. We add that Theorems 5.10 and 5.11 do not follow from Theorems 5.3 and 5.7. Indeed, in [5], it is proved that there exist systems which are almost periodic and not limit periodic (e.g., the sequence {e^{ik}}_{k∈Z} is almost periodic and, at the same time, not limit periodic). It means that there exist almost periodic systems which have neighbourhoods without limit periodic systems.
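The example {e^{ik}} from Remark 5.12 can be checked numerically: since |e^{i(k+l)} − e^{ik}| = |e^{il} − 1| for every k, an integer l is an ε-translation number exactly when e^{il} lies within ε of 1, and such l exist for every ε > 0 because the integers are dense modulo 2π. A small NumPy illustration (my addition, not part of the paper):

```python
import numpy as np

# For phi_k = exp(i k) the shift defect does not depend on k:
# |phi_{k+l} - phi_k| = |exp(i l) - 1|.
def defect(l):
    return abs(np.exp(1j * l) - 1.0)

# Numerators of continued-fraction convergents of 2*pi give integers l
# ever closer to multiples of 2*pi, i.e. ever better translation numbers:
for l in (6, 25, 44, 333, 710):
    print(l, defect(l))

# Sanity check that the defect is indeed independent of k:
k = np.arange(1000)
print(np.abs(np.exp(1j * (k + 710)) - np.exp(1j * k)).max())  # == defect(710)
```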
Chromosome territory repositioning induced by PHA-activation of lymphocytes: a 2D and 3D appraisal

Genomes, and by extension chromosome territories (CTs), exhibit nonrandom organization within the interphase nuclei of a variety of organisms. CTs are susceptible to movement upon induction by a variety of stimuli, including cell differentiation, growth factors, genotoxic agents, proliferation status, and stimulants that induce novel transcription profiles. These findings suggest that nuclear architecture can undergo reorganization, providing support for a functional significance of CT organization. The effect of the initiation of transcription on global-scale chromatin architecture has been underexplored. This study investigates the organization of all 24 human chromosomes in resting and phytohaemagglutinin-activated lymphocytes from two individuals using 2D and 3D approaches. The radial organization of CTs in both resting and activated lymphocytes follows a gene-density pattern. However, CT organization in activated nuclei appears less constrained, exhibiting a more random organization. We report differences in the spatial relationship between homologous and heterologous CTs in activated nuclei. In addition, a reproducible radial hierarchy of CTs was identified, and evidence of CT repositioning was observed in activated nuclei using both 2D and 3D approaches. The alterations between resting and activated lymphocytes could reflect adaptation of CTs to the new transcription profile, possibly through the formation of new neighborhoods of interest or through interactions of CTs with nuclear landmarks. The increased distances between homologous and heterologous CTs in activated lymphocytes could reflect a defensive mechanism that reduces potential interactions, preventing structural chromosome abnormalities (e.g. translocations) resulting from the DNA damage that increases during lymphocyte activation.

Background

Genomes contain the blueprints of life and are organized in vivo as chromosomes. Nonetheless, the spatial organization of genomes and its function has received little attention compared with the vast body of knowledge about most other cellular structures. Pioneering visualization experiments during the 1980s using fluorescence in situ hybridization (FISH) demonstrated that chromosomes are not randomly organized in mammalian cells, occupying distinct positions known as chromosome territories (CTs) [1-6]. These CTs are roughly spherical in shape and 2-4 μm in diameter [7]. Analyses using 3C technology have confirmed the nonrandom organization of the genome, with current evidence supporting a fractal globule organization of chromatin at all levels, from CTs to chromosome arm and band domains to megabase-sized chromatin built up from a series of spatially separated 100 kbp chromatin domains [8]. Certain biophysical properties of the fractal globule (reviewed in [9]) further support this appealing model of chromatin organization. Observations in different cell types and organisms identified proximity patterns of chromosomes, leading to the proposal of two models (gene density and chromosome size) for the radial arrangement of CTs. The gene-density model proposes that gene-rich CTs and gene-dense subchromosomal regions are located toward the nuclear interior, with gene-poor regions located toward the nuclear periphery [8,10,11].
This model originated from observations in proliferating lymphoblasts and fibroblasts and can be seen in primates, Old World monkeys, rodents, birds (excluding chicken) and cattle [6, 12-14]. The chromosome size model originated from observations in flat ellipsoid fibroblasts, quiescent cells, and senescent cells, proposing localization of larger chromosomes toward the nuclear periphery and smaller chromosomes toward the nuclear interior [15-17]. It is likely that these two models are not mutually exclusive, with radial CT organization depending on the proliferating status of the cells, the chromosome itself and the neighborhood surrounding it, which could play a vital role in regulating cell-type-specific gene expression [18]. The above correlative observations of CTs and gene positioning have established the concept of nonrandom organization and highlight emphatically the question of whether there is a link between position and genome function. Evidence to support the hypothesis of a link between position and function is provided by studies of cellular differentiation processes. Examples include the repositioning of the immunoglobulin gene cluster and the Mash1 locus during neural induction [19,20], the HoxB1 gene in mouse embryos [21], the repositioning of adipogenesis genes during porcine mesenchymal stem cell adipogenesis [22] and sex chromosome movement during porcine spermatogenesis [23]. These studies correlate gene repositioning (to a more internal localization) with transcriptional activation (with the exception of the sex chromosome movement in pigs). However, internal positioning and activation of genes seem to be an oversimplification, since biallelically expressed genes, RNA polymerase II sites and heterochromatin can be found throughout the nucleus [21,24]. Further evidence of gene repositioning, relative to the respective CTs or to other nuclear structures such as transcription factories and splicing speckles, also reveals a correlation with transcriptional activity [8]. This evidence comes from studies using active and inactive genes [25], immuno-FISH approaches with the Ikaros protein [26], 3D positional experiments on adenine nucleotide translocase genes on the X chromosome [27], immediate activation of the Myc proto-oncogene in mouse B lymphocytes [28] and the distance between promyelocytic leukemia loci and nuclear bodies, which seems to correlate with transcriptional activity [29]. More recent evidence from 3C and 4C conformation experiments shows an association of actively transcribed Hox genes in a cluster, compared to silent genes that are located in a different region and form part of the active cluster once they are activated [30]. Another emerging feature of genome organization that may play a major role in the control of gene expression is the intraorganization of chromosomes within the CTs in 3D space. This refers to loops that are formed in order for regions of chromosomes to interact in cis (e.g. the locus control region of the β-globin gene, which acts as an enhancer of the β-globin genes) or in trans (e.g. in mouse erythroid cells) [7]. The most prominent example of this intraorganization of CTs occurs during X chromosome inactivation, in which the ncRNA Xist silences one of the X chromosomes in females, with only a handful of genes escaping inactivation. The X chromosome is inactivated by being condensed into a compact structure (Barr body) that is associated with the nuclear periphery.
Following silencing, a repressive nuclear compartment forms that excludes RNA polymerase II and transcription factors. Genes that are not expressed are "pulled" down into this repressive compartment, rendering themselves inaccessible to the transcriptional machinery, whereas the few genes that escape inactivation and are expressed loop out from this compartment [31]. The aforementioned examples highlight the relationship between the nonrandom organization of chromosomes and gene expression. Another emerging aspect of the complex nature of genome organization arises from experiments in which the local environment changes, by inducing differentiation or transcription through addition or removal of growth factors in cultured cells, or through in-vitro exposure to genotoxic agents. Such experiments could be important in understanding the changes in genome organization as cells divide, age or reach the end of their replicative status. Evidence from primary fibroblasts that entered quiescence (after a 7-day serum starvation) depicts alterations in the topology of CTs, including CTs 13, 18 and 10 [16], with chromosomes 13 and 18 exhibiting movement toward the nuclear interior and chromosome 10 moving from an intermediate position to a more peripheral one [32]. A more detailed look at the expression of ten genes on chromosome 10 following this movement showed that two genes were down-regulated and five were up-regulated when CT 10 was in the periphery, providing small-scale evidence that the nuclear periphery is not solely associated with gene silencing [32]. Interestingly, the same group measured the timing of the relocation of chromosomes after elimination of serum and, remarkably, CTs relocated within 15 min, highlighting the need for energy for this type of repositioning and implicating myosin 1β as the mediator of this relocation [32]. In senescent cells, evidence from dermal fibroblasts demonstrates that CTs orient themselves following a size-related, rather than gene-density-related, model [33]. These changes are another manifestation of the modification of nuclear architecture during these specific cellular stages that could lead to changes in the transcriptional status of the cells. Nuclear architecture is also subject to alterations when stimuli initiate cellular growth or transcriptional activation, or induce DNA damage in vitro. The most prominent example of the former comes from pig mesenchymal stem cells, in which adipogenic growth factors added to the culture give rise to committed pre-adipocyte cells. Six genes involved in the adipogenesis pathway repositioned to a more interior location after 14 days of treatment, which correlated with up-regulation. The GATA2 gene moved from a peripheral to an interior location (day 7, up-regulation) and back to a peripheral location (day 14, down-regulation) [32]. Recently, we reported reproducible events of CT repositioning in human lymphocytes following in-vitro exposure to the genotoxic agents hydrogen peroxide and UVB [20]. Differences in CT repositioning were also reported between the two genotoxic agents, most likely representing differences in mobility and/or decondensation of CTs as a result of differences in the DNA damage induced, the chromatin regions targeted and the different repair mechanisms [20]. With regard to stimulating a different transcriptional profile, a single study has provided evidence of CT repositioning in human lymphocytes following activation of lymphocytes using phytohaemagglutinin (PHA).
PHA is a plant mitogen that induces the proliferation of mammalian lymphocytes and creates a cascade of biochemical events that activates resting lymphocytes, resulting in large-scale decondensation of chromatin, an increase in nuclear size and a distinct transcriptional profile [34]. Branco et al. [34] investigated the positioning of 11 CTs in resting human lymphocytes (-PHA) and activated human lymphocytes (+PHA) from a single female subject [34]. The findings showed some reorganization of CTs, with chromosomes 1 and 3 moving more peripherally and chromosome 21 being more centrally located in activated cells [34]. The differences between the two states were attributed to the nuclear expansion that results from lymphocyte activation by PHA, and to the different transcriptional program [34]. Another important finding was the observation that CT intermingling was lower in activated lymphocytes. This was proposed to be a potential protective mechanism to prevent chromosome translocations or risk from DNA damage due to the controlled cell death program that occurs during T lymphocyte activation [34]. Altered transcription profiles induce changes in the topology of CTs; therefore, our study expands on the previous PHA study [34] to investigate whether the organization of all 24 CTs differs between resting (-PHA) and activated (+PHA) human lymphocytes. Lymphocytes were thus obtained from two volunteers (1 male, 1 female) and CT organization for all 24 CTs was assessed utilizing both 2D and 3D approaches. The purpose of this study was to assess the topology of all 24 chromosomes in resting and transcriptionally activated lymphocytes and, where possible, compare 2D and 3D approaches. This study specifically investigates: i) random/nonrandom CT organization (2D); ii) the spatial relationship between homologous CTs (intraprobe) and heterologous CT pairs (interprobe) (3D); iii) the hierarchical organization of CTs from the nuclear interior toward the nuclear periphery (2D and 3D); and iv) CT repositioning in PHA-activated lymphocytes (2D and 3D). Findings in this study suggest that CT organization in lymphocytes is reproducible among subjects and follows a gene density pattern, as identified by both 2D and 3D approaches. In addition, a number of alterations in CT organization were observed in activated lymphocytes compared to resting lymphocytes, including: i) a less constrained CT organization (more random organization); ii) differences in the spatial organization of homologous and heterologous CTs; iii) small differences in the radial CT hierarchy; and iv) evidence of CT repositioning for a handful of CTs. Where possible, comparisons between 2D and 3D approaches revealed largely similar results.

Results

Although both 2D and 3D approaches can be used to address the same problems, they utilize different methodologies, allowing different perspectives to be studied. Specifically, the 2D approach examines the radial distribution of the entire CT within a flattened 2D nucleus. The 3D approach utilizes the geometrical center of the CT (a single point) and performs measurements in microns to the nearest nuclear edge or to the geometrical center of another CT, providing a physical location or distance within a 3D nucleus.
2D radial chromosome territory organization in resting and activated lymphocytes

The 2D organization of CTs within lymphocytes was assessed using previously published and validated methodologies that divide the interphase nucleus into five equal areas based on DAPI fluorescence intensity, and measure the fluorescence distribution of each CT across the five equal areas [10,20,26,35]. This provides the ability to determine whether a CT is equally distributed across the nucleus (random organization) or demonstrates a preferential nuclear localization (nonrandom organization). A total of 9400 cells were captured and analyzed to assess the radial nuclear organization of all 24 human chromosomes in resting (-PHA) and activated (+PHA) lymphocytes. The CT distribution and random/nonrandom status of all 24 CTs are presented in Fig. 1 and Table 1, respectively, for both subjects and in both resting and activated lymphocytes.

[Fig. 1. Radial organization for all 24 CTs in resting and activated lymphocytes. Displays the radial distribution for all 24 CTs (chromosomes 1-22, X and Y) in both enrolled subjects for resting (-PHA) and activated (+PHA) lymphocytes. The X-axis for all histograms represents each of the five rings of equal area (1-5) moving from the nuclear interior toward the nuclear periphery (left to right). Each ring includes the data for the resting lymphocytes (dark blue and dark pink) and the activated lymphocytes (light blue and light pink) in the male and female subject, respectively. The Y-axis for all histograms represents the proportion of fluorescence (%) for each CT within each of the five rings. Error bars represent the standard error of the mean (SEM). Significant events of repositioning in activated lymphocytes compared to resting are denoted by letters (a-d) in the top right corner of each histogram; blue and pink letters correspond to repositioning events in the male and female subject, respectively. Each letter corresponds to the type of repositioning movement observed based on the radial distribution (see figure key for more details).]

In resting lymphocytes, the vast majority of CTs occupied nonrandom positions (p < 0.05): 22 CTs (91.67 %) and 21 CTs (91.30 %) in the male and female subject, respectively (Table 1). Nineteen CTs demonstrated nonrandom organization in both subjects (CT Y excluded). In activated lymphocytes a different picture of nuclear organization emerges, with CTs demonstrating a more random organization compared to resting lymphocytes. In activated lymphocytes, 14 CTs (58.33 %) and 16 CTs (69.56 %) occupied a nonrandom organization in the male and female subject, respectively (Table 1). CTs that demonstrated a random organization in resting lymphocytes in the male and female subject (CTs 3 and 13; and CTs 5 and 18, respectively) retained their random organization in activated lymphocytes. In the activated lymphocytes, an additional 8 CTs in the male subject and 5 CTs in the female subject were found to be randomly organized. Of the CTs that were randomly organized in the activated lymphocytes, seven were common to both subjects (CTs 3, 5, 7, 9, 11, 13 and 18), whereas CTs 2, 4 and Y were randomly organized only in the male subject.

Spatial relationship between CTs within the 3D nucleus

Dual color 3D FISH experiments utilizing the same CT pairs permit the spatial relationship between homologous CTs (intraprobe) and between heterologous CTs probed together (interprobe) to be investigated.
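The five-equal-area ring analysis described above lends itself to a compact implementation. The following is a minimal sketch, not the authors' ImageJ script: the names `ring_fluorescence_profile`, `dapi_mask` and `ct_signal` are hypothetical, the equal-area shells are derived here from depth quantiles rather than the published erosion procedure (assumed to be a close approximation), and the authors' additional normalization against DAPI content is omitted.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def ring_fluorescence_profile(dapi_mask: np.ndarray,
                              ct_signal: np.ndarray,
                              n_rings: int = 5) -> np.ndarray:
    """Proportion of CT fluorescence falling in each of n_rings concentric
    shells of (approximately) equal area, ordered interior -> periphery."""
    # Depth of each nuclear pixel, i.e. its distance to the nuclear edge.
    depth = distance_transform_edt(dapi_mask)
    inside = depth > 0
    # Quantiles of depth over nuclear pixels give equal-area shell boundaries.
    edges = np.quantile(depth[inside], np.linspace(0.0, 1.0, n_rings + 1))
    ring_idx = np.clip(np.digitize(depth[inside], edges[1:-1]), 0, n_rings - 1)
    totals = np.bincount(ring_idx, weights=ct_signal[inside], minlength=n_rings)
    profile = totals / totals.sum()
    # Bin 0 holds the shallowest (most peripheral) pixels; reverse so that
    # index 0 corresponds to ring 1 at the nuclear interior, as in Fig. 1.
    # (The paper additionally normalizes against DAPI content per ring to
    # compensate for projecting a 3D nucleus into 2D; omitted here.)
    return profile[::-1]

# Example: a synthetic circular nucleus with uniform CT signal should give
# roughly 20 % of the fluorescence in every ring.
yy, xx = np.mgrid[:200, :200]
mask = (yy - 100) ** 2 + (xx - 100) ** 2 < 80 ** 2
print(ring_fluorescence_profile(mask, np.ones_like(mask, dtype=float)))
```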
3D models were rendered in Imaris (V.7.6.3) and the intra- and interprobe measurements were established by measuring from the geometrical center of each CT, between homologous CTs (intraprobe) and between heterologous CTs (interprobe) (Fig. 2). In addition, the 3D software also calculates the 3D volume of each rendered nucleus (μm³). A minimum of 40 cells per CT were studied (20 per subject), with exceptions for the sex chromosomes because: i) no intraprobe measurements are available for the Y chromosome; ii) intraprobe measurements for the X chromosome are available from the female subject only (20 cells); and iii) interprobe measurements between the X and Y chromosomes are available from the male subject only (20 cells). In Table 2 we report these results: i) the CT pairs probed together; ii) intra- and interprobe measurements (μm) for each CT; iii) nucleus diameter (μm); and iv) nucleus volume (μm³) in both resting and activated lymphocytes. The emerging picture from these data demonstrates that nuclei of activated lymphocytes were larger in size than those of resting lymphocytes (diameter and volume). In accordance with the increase in nucleus size, the intra- and interprobe distances were also greater for all CTs in activated lymphocytes compared to resting lymphocytes. Prior to comparing the intra- and interprobe data in resting and activated lymphocytes, the measurements obtained were normalized against the diameter of the nucleus to account for differences in nucleus size (data not shown). Utilizing these normalized values, it is possible to order CTs based on their proximity (closest to furthest). In resting lymphocytes the intraprobe measurements between homologous CTs (closest to furthest) were as follows: 19, 21, 22, 17, 1, 15, 16, 14, X, 10, 12, 6, 13, 20, 8, 11, 9, 18, 3, 7, 2, 4, and 5. In activated lymphocytes the distances between homologous CTs (closest to furthest) were: 22, X, 19, 21, 17, 12, 9, 14, 16, 18, 15, 3, 20, 8, 13, 4, 6, 10, 11, 5, 1, 2, and 7. When the relative distances are compared between resting and activated lymphocytes, 9 CTs demonstrated the largest increases in distance in activated lymphocytes (2, 11, 15, 19, 6, 7, 21, 10, and 1, respectively). Conversely, CT X in the female subject showed a closer spatial organization in activated lymphocytes than that seen in resting lymphocytes. The normalized interprobe distances among CTs probed together in resting lymphocytes (closest to furthest) were: 17-22, 21-20, 14-16, 12-11, 8-7, X-Y, 15-3, 6-5, 10-9, 19-18, 1-13, and 2-4. In activated lymphocytes the spatial relationships between CT pairs (closest to furthest) were: 21-20, 12-11, 15-3, 19-18,

CT organization from the nuclear interior toward the nuclear periphery (2D and 3D approach)

Our 2D method of analysis allows the distribution of CT fluorescence to be transformed to provide a single number for each nucleus, which reflects the midpoint of the frequency distribution of observed fluorescence (median) across the five rings [20]. This median value can be utilized to establish the hierarchical radial order of CTs from the nuclear interior toward the nuclear periphery [20]. The midpoint of the frequency distribution of the fluorescence for each CT in resting and activated lymphocytes was determined from a minimum of 200 cells per CT (100 cells per subject), with the exception of CT Y (studied in 100 cells from the male subject) (Table 3).
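As a sketch of the two summary statistics just described, the helpers below compute an interpolated median ring of a five-ring fluorescence profile and normalize a 3D distance against nuclear size. The function names are hypothetical and the interpolation scheme is an assumption, since the paper reports only the median value itself.

```python
import numpy as np

def median_ring(profile: np.ndarray) -> float:
    """Midpoint of the fluorescence distribution across equal-area rings,
    expressed as rings traversed from the nuclear interior before reaching
    50 % of the total signal (0 = innermost boundary, 5 = periphery)."""
    p = np.asarray(profile, dtype=float)
    cum = np.cumsum(p) / p.sum()
    i = int(np.searchsorted(cum, 0.5))          # first ring reaching 50 %
    prev = cum[i - 1] if i > 0 else 0.0
    return i + (0.5 - prev) / (cum[i] - prev)   # linear interpolation in ring i

def normalized_distance(d_um: float, nucleus_size_um: float) -> float:
    """Express an intra-/interprobe distance (or edge distance) as a fraction
    of the nuclear diameter (or radius), controlling for the ~2-3 fold
    nuclear swelling that follows PHA activation."""
    return d_um / nucleus_size_um

# Example: a CT whose signal is concentrated in the inner rings has a small
# median ring value and would rank toward the nuclear interior.
print(median_ring(np.array([0.35, 0.30, 0.15, 0.12, 0.08])))  # ~1.5 rings
```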
In addition, the hierarchical radial organization of CTs in resting and activated lymphocytes was also established from the 3D data. The hierarchical order of all 24 CTs was determined by the distance in microns, measured from the geometrical center of the CT to the nearest nuclear edge. A minimum of 40 cells per CT (20 cells per subject) were analyzed, with the exception of CT Y. To account for differences in nucleus size, the measurements obtained were normalized against the radius of the nucleus (Table 3). Despite inherent differences between the 2D and 3D analyses (2D: CT median fluorescence intensity; 3D: distance from the CT geometrical center to the nearest nuclear edge), the hierarchical organization of CTs in resting lymphocytes is by and large very similar between the two methodologies (Table 3). The hierarchical organization of CTs in activated lymphocytes also demonstrates similar clustering; however, more variability in the clusters of CTs forming the core, intermediate and peripheral regions of the nucleus is observed between the 2D and 3D methods (Table 3).

CT repositioning following PHA activation of lymphocytes utilizing 2D and 3D approaches

Furthermore, we evaluated whether there was any statistically significant alteration in the localization of CTs between resting and activated lymphocytes within each of the subjects enrolled.

[Fig. 2, panel e: the same cell rotated toward the left with 50 % of the DAPI plane removed ("clipped") to better demonstrate the 3D aspect of the CTs within a 3D nucleus.]

To evaluate 2D CT repositioning, the percentage of fluorescence distribution in each shell (Fig. 1) was compared between resting and activated lymphocytes in each of the two subjects. Alterations were deemed statistically significant when the p value from the chi-squared goodness-of-fit comparison was less than 0.05. In total there were 15 significant CT repositioning events (10 in the male subject and 5 in the female subject) from resting to activated lymphocytes (Table 4). Based on the histograms produced from the radial analysis, movement of CTs was classified into the following categories: 1) interior to less interior, 2) interior to intermediate, 3) interior to periphery, 4) intermediate to periphery, 5) intermediate to interior (Fig. 1, Table 4). CT 1 was the only CT that demonstrated statistically significant repositioning in activated lymphocytes in both subjects, and it also depicted the same type of movement (interior to less interior). Interestingly, the only CT that demonstrated a significant repositioning event toward the nuclear interior was CT X in the male subject. Rather than assessing the radial distribution of the entire CT in a 2D object, 3D methodologies provide the physical localization of the center of each CT to the nearest nuclear edge in 3D models rendered in Imaris (V.7.6.3).

[Table 2. Provides an overview of all the 3D measurements obtained utilizing Imaris software (V7.6.3) for all 24 CTs in resting and activated lymphocytes. The CTs column indicates which CTs were probed together in the dual FISH experiment (e.g. 1-13 indicates chromosomes 1 and 13). PHA status indicates whether data are from resting (-PHA) or activated (+PHA) lymphocytes. The remaining columns present the average measurements in microns (μm or μm³) obtained from a minimum of 40 cells (20 per subject, with the exception of the X-Y and X-X data, with 20 cells from the male and female subject evaluated, respectively). Fig. 2 provides examples of how each distance was measured.
The standard deviation of the measurements in all studied cells is provided in parentheses. The intraprobe columns provide the measurements between homologous CTs: the first intraprobe column provides the distance for the first CT listed in the CTs column (e.g. CT 1), with the second intraprobe column providing the distance for the homologous pair of the second CT (e.g. CT 13). The two interprobe columns provide the distances between the two heterologous CTs (e.g. CT 1 and CT 13); the first interprobe column presents the distance for the two closest CTs, with the second column presenting the data for the two furthest CTs. Subsequently, both the nuclear diameter and volume are provided.]

As with the 2D analysis, the nuclear localization of all 24 CTs was established in the same subjects in resting and activated lymphocytes. For the 3D analysis, CT repositioning was established using a two-tailed, paired t-test (p < 0.05) to detect significant events of CT repositioning following activation of lymphocytes. In total 10 repositioning events were found to be statistically significant: six in the male subject (CTs 11, 14, 15, 16, 17, and 19) and four in the female subject (CTs 11, 16, 19, and X) (Table 4). All significant CT movements in activated lymphocytes were relatively more peripheral compared to resting lymphocytes. Three CTs were common between the two subjects (CTs 11, 16, and 19), whereas two CTs were common between the two methods in the male (CTs 14 and 16) and in the female (CTs 11 and 19), all demonstrating a similar peripheral movement.

Discussion

The effects of transcriptional reprogramming on nuclear organization have been underexplored. Therefore the purpose of this study was to evaluate the consequences of PHA activation on the global nuclear organization of CTs in human lymphocytes. To the best of our knowledge this is the first study to evaluate the radial organization, random/nonrandom status, intra- and interprobe spatial relationships between CTs, radial hierarchy and repositioning of all 24 chromosomes in resting and PHA-activated lymphocytes using both 2D and 3D approaches in multiple subjects. The 2D radial organization of CTs demonstrates a highly reproducible radial CT organization, with CTs displaying similar radial distributions and random/nonrandom organization between subjects in both resting and activated lymphocytes.

[Table 3. Presents the radial hierarchy for all 24 CTs in resting and activated lymphocytes using both 2D and 3D approaches from both subjects. CTs are ordered from the nuclear interior (top of the table) toward the nuclear periphery (bottom of the table). Following the CTs, in parentheses, are the numerical values used to order the CTs. In the 2D approach, CTs are ordered based on the midpoint of the distribution of the CT fluorescence across the five rings of equal area (median) (200 cells/CT). In the 3D approach, CTs are ordered based on their distance to the nearest nuclear edge following normalization against the radius of the nucleus to account for differences in nucleus size (40 cells/CT). Data for CT Y (Y*) are obtained from the male subject only (2D: 100 cells; 3D: 20 cells).]

Overall, the organization in activated lymphocytes appears to be more "relaxed" (or less defined), with more CTs (10 in the male subject and 7 in the female subject) displaying a random organization compared to resting lymphocytes (2 CTs in both subjects).
It has been established that following PHA stimulation of human lymphocytes there is a rapid transcriptional surge, which is accompanied by nuclear and nucleolar morphological changes [34]. Specifically, there is enlargement of the nuclear size, increased chromatin decondensation and activation of nucleoli [34]. In agreement with previously published studies, the current study also reports an increase in nuclear size (~2-3 fold) in activated lymphocytes compared to resting lymphocytes [34]. The increased nuclear size in activated lymphocytes could account for the more random CT organization observed. Chromatin is a dynamic entity in constant motion inside the nucleus [36,37]; thus, in a nucleus with more available space, it is feasible that there is relatively more motion, at least for some CTs, perhaps conferring a less defined organization than for other CTs. Dual color 3D FISH experiments also enabled the spatial relationship between CTs to be examined, providing information on interactions amongst homologous chromosomes and heterologous CT pairs. Differences in nuclear size are a potential confounder; therefore, to account for these differences, the distances were normalized using the nuclear diameter. The distances between homologous CTs (intraprobe) were larger in 74 % of cases (17 of the 23 homologous pairs) in activated lymphocytes (CT Y excluded), with only 6 CTs possessing shorter distances. The interprobe results demonstrate that 58 % of CTs (14 CTs) had relatively smaller distances, with 42 % of CTs (10 CTs) possessing greater distances in activated lymphocytes compared to resting lymphocytes. In activated lymphocytes the following CT pairs demonstrated the largest increases in distance compared to resting lymphocytes: 17-22, 8-7 and X-Y (male subject only). Altered spatial organization between homologous CTs could be due to different patterns of chromatin decondensation (or condensation) amongst different CTs, or a result of the new transcriptional profile that follows PHA activation [34]. The spatial relationship between CTs may also result from the formation of new CT neighborhoods following transcriptional activation, or serve as a protective mechanism that reduces potential interactions, preventing structural abnormalities (e.g. translocations) as a result of the DNA damage that increases during T lymphocyte activation [34,38]. Utilizing both 2D and 3D methodologies, the radial hierarchy of all CTs from the nuclear interior toward the nuclear periphery was established. The radial organization of CTs in both resting and activated lymphocytes in this study follows a gene-density correlation, which is in agreement with previously published literature [4,11,15,20]. This type of organization might be stimulated by chromosome-specific gene expression and the organization of transcription in the nucleus (i.e. interaction of polymerases with CT physical properties such as NORs) [16,39]. Both 2D and 3D approaches provided remarkably similar results in terms of the hierarchical order of CTs, which preferentially formed clusters in the nuclear interior, intermediate and peripheral regions. Gene-rich CTs (e.g. CTs 15, 17, 19, and 22) clustered toward the nuclear interior, whereas gene-poor CTs (e.g. CTs 2, 3, 13, and 18) localized toward the nuclear periphery in resting and activated lymphocytes.
CT 21 is the only gene-poor chromosome associated with the nuclear interior [40], a finding mirrored in a recently published study [20].

[Table 4. CTs involved in statistically significant radial repositioning between resting and activated lymphocytes in both the male and female subject, as determined by 2D and 3D approaches. 2D radial repositioning was determined using the χ² goodness-of-fit test (p < 0.05). The direction of the repositioning movement was determined by comparing the radial distribution of CTs in each subject in resting and activated lymphocytes (Fig. 1). 3D radial repositioning was determined using the two-tailed, paired t-test (p < 0.05). The direction of the repositioning movement was determined based on the distance to the nuclear edge in activated lymphocytes compared to resting lymphocytes.]

The intermediate region of the nucleus is largely composed of the following CTs: 6, 8, 10, 12, 14, 20, and X. Of these, four CTs were of intermediate gene density (CTs 6, 10, 14, and X), two were gene rich (CTs 12 and 20) and one was gene poor (CT 8) [40]. 2D and 3D approaches were also utilized to evaluate any statistically significant events of CT repositioning between resting and activated lymphocytes. Both methods identified significant repositioning events, with similarities emerging between the two approaches. These findings include more CTs repositioned in the male subject (10 events in 2D and 6 events in 3D) compared to the female subject (5 events in 2D and 4 events in 3D), and all CTs demonstrating a relatively more peripheral organization, with the exception of CT X (2D, male subject). One repositioned CT (CT 1) was shared between the subjects in the 2D results, with three repositioned CTs shared between the subjects using the 3D method (CTs 11, 16, and 19). When an intra-subject comparison was made between the two approaches (2D and 3D), CTs 14 and 16 and CTs 11 and 19 showed similar characteristics in the male and female subject, respectively. One possible explanation for the differences observed in the intra-subject comparisons, regarding the number of CT repositioning events and the CTs involved, could be variability in nuclear size between the subjects. A two-tailed, paired t-test showed that for certain CTs (CTs 2, 4, 5, 6, 9, 10, 14, 16, 17, 22, X and Y) there was a significant difference (p < 0.05) in nucleus size between the male and female subject in activated lymphocytes (data not shown). Interestingly, 8 out of the 10 significant events of CT repositioning involved these CTs. A similar t-test in resting lymphocytes revealed a significant difference in nucleus size between the two subjects only for CTs 3 and 15, depicting more comparable nuclei sizes (data not shown). The differences in the radial positions could reflect the different network interactions of chromosomes with other nuclear structures or alternative territories [34]. To date, one other study has investigated the repositioning of 11 CTs (CTs 1, 2, 3, 4, 5, 9, 11, 12, 13, 21 and 22) in resting and activated lymphocytes [34]. That study reported peripheral repositioning of CTs 1 and 3 and more internal repositioning of CT 21 [34]. The majority of repositioned CTs in the current study were not evaluated in the previous study; however, we confirm the peripheral repositioning of CT 1 but not of CT 3, and, although not significant in this study, the radial hierarchy of CT 21 is more internally localized in activated lymphocytes.
An additional important aspect of the current study is the use of both 2D and 3D approaches to study CT organization. The two approaches are synergistic: the 2D approach assesses the radial distribution of the entire CT in a 2D nucleus, whereas the 3D approach assesses the physical localization of the center of the CT to the nearest nuclear edge or to another CT. Despite the inherent differences between the two approaches, the relatively small variations observed in the hierarchical organization of CTs are reassuring, suggesting reproducibility between the 2D and 3D methods. The reproducibility between the two approaches is likely due to the fact that the 2D data were transformed to a single data point, similar to the 3D data, rather than examining the distribution of the entire CT. Greater variability was observed between the two approaches when examining CT repositioning in activated lymphocytes, most likely reflecting differences when studying localization of the entire CT (2D) versus a single point (3D). Nevertheless, some CTs demonstrated reproducible repositioning using both methods, which provides further evidence that the 2D and 3D approaches yield comparable data [32].

Conclusions

Alterations observed in CT organization in activated lymphocytes most likely depend on and influence the spatial organization of CTs [34]. Both 2D and 3D approaches revealed CT-specific differences in organization between resting and activated lymphocytes. These differences were associated with the increased nuclear size and altered chromatin condensation observed in activated cells, and may be related to the altered transcriptional profile in activated cells. Different CT interactions (changes in intra- and interprobe spatial organization) may lead to the formation of different CT neighborhoods. Furthermore, there is evidence to indicate that CTs in activated cells display less intermingling, which may serve as a mechanism to reduce the formation of chromosome aberrations [34]. These hypotheses warrant further investigation in future studies focusing on the spatial relationship and interaction between CTs. In addition, future studies should include a larger cohort and genome-wide gene expression studies to determine whether consistency is observed among individuals for CT repositioning events and whether repositioning is associated with gene expression.

Sample cohort

This research study was approved by the Florida International University Institutional Review Board (protocol numbers: IRB-121010-00, IRB-14-0163). Informed consent was provided by two individuals (one male and one female subject) to participate in this study. Both participants were 35 years old, karyotypically normal (data not shown) and completed a brief health history survey, providing lifestyle information (e.g. alcohol or tobacco use), any recent illness and any medication taken. Both participants were non-smokers, had not knowingly been in contact with any hazardous or radioactive material in their working or home environment, and were occasional social drinkers (2-8 units/week).

Conditions of Cell Culture

Peripheral blood was collected by venipuncture in heparin tubes (Greiner-BioOne, Monroe, NC, USA). Whole blood from each individual was split and cultured in the presence or absence of PHA.
All lymphocyte cultures were prepared in RPMI 1640 (Lonza, Walkersville, MD, USA) reconstituted with 10 % heat-inactivated fetal bovine serum (FBS; Sigma-Aldrich, St Louis, MO, USA), 2 % L-glutamine (Thermo-Fisher, Waltham, MA, USA) and 1 % penicillin-streptomycin solution (Thermo-Fisher, Waltham, MA, USA). All cultures had a total volume of 5 ml; those cultured with PHA were reconstituted with 100 μl of PHA (45 mg/vial) (Remel Inc, Lenexa, KS, USA). In each culture, 0.8-1.0 ml of blood was incubated for 71 h at 37 °C (5 % CO2). Following lymphocyte culture incubation (71 h), lymphocytes were prepared following standard karyotyping protocols. Proliferating cells in metaphase were arrested using 0.2 μg colcemid (Thermo-Fisher, Waltham, MA, USA) for 30 min at 37 °C, followed by standard hypotonic conditions (0.075 M KCl; Thermo-Fisher, Waltham, MA, USA) for 45 min at 37 °C to allow separation of white blood cells from anucleate erythrocytes. White blood cells were subsequently fixed in 3:1 (v/v) methanol:acetic acid solution to clean and fix the preparation. All cultures were stored at -20 °C immediately following the harvesting procedure.

Fluorescence in situ hybridization (FISH)

Cells cultured in the presence and absence of PHA were dropped onto glass slides (FisherBrand®; Thermo-Fisher, Waltham, MA, USA), allowed to adhere by ageing overnight at room temperature (RT) and subsequently washed in 1X PBS (Thermo-Fisher, Waltham, MA, USA), followed by an ethanol dehydration step (70-80-100 %, 3 min each). Air-dried cells were treated for 20 min with a 1 % pepsin solution (Thermo-Fisher, Waltham, MA, USA) in a solution, pre-warmed to 37 °C, of 49 ml double-distilled water (ddH2O) and 0.5 ml of 1 N HCl (Thermo-Fisher, Waltham, MA, USA). Cells were rinsed with ddH2O and 1X PBS at RT, and subjected to another round of fixation using 1 % paraformaldehyde/PBS (Thermo-Fisher, Waltham, MA, USA) at 4 °C for 10 min. Following this, slides were rinsed in 1X PBS and ddH2O (RT), prior to another ethanol dehydration series (2 min each), and finally air dried. A dual color FISH experiment (fluorescein isothiocyanate (FITC) and tetramethyl rhodamine isothiocyanate (TRITC) labelled FISH probes) was performed utilizing whole chromosome paints (WCPs) for all 24 chromosomes (Rainbow Scientific, Windsor, CT, USA). WCPs were co-denatured with lymphocytes for 5 min at 75 °C, followed by overnight hybridization (>16 h) at 37 °C using a Thermobrite® Statspin (Abbott Molecular, Illinois, IL, USA). A post-hybridization stringency wash was performed for 2 min in a solution of 0.7X SSC/0.3 % Tween 20 (35 ml of 20X SSC, 3 ml of Tween 20 and 965 ml of ddH2O), pre-warmed to 73 °C. After the 2 min elapsed, cells were washed in 2X SSC/0.1 % Tween 20 (100 ml of 20X SSC), followed by a brief ethanol series (1 min each). Slides were subsequently air dried and mounted with 4′,6-diamidino-2-phenylindole (DAPI; Vector Labs, Burlingame, CA, USA) under a 24 x 55 mm coverslip.

2D Image acquisition, radial chromosome positioning and statistical analysis

All images for 2D analysis were captured using an Olympus BX61 epifluorescence microscope equipped with a cooled charge-coupled device camera (Hamamatsu ORCA-R2 C10600) and a motorized ES111 Optiscan stage (Prior Scientific, UK). Three single band pass filters for FITC, TRITC, and DAPI (Chroma Technology, Bellows Falls, VT, USA) were used. All images were acquired using Smart Capture 3.0 (Digital Scientific, UK) and exported as .tiff files for further analysis.
A minimum of 100 cells were analyzed per subject, per chromosome pair, per condition (-PHA and +PHA). To evaluate the radial chromosome position, previously published methodologies were utilized [11,20]. The details have been described extensively elsewhere [20,26,28]. In brief, a customized script written for ImageJ allows the separation of each captured image into three channels (FITC, TRITC and DAPI counterstain). The DAPI fluorescence is converted into a binary mask that allows for the creation of 5 concentric rings of equal area from the nuclear interior toward the nuclear periphery. The proportion of WCP signal in each ring, for each channel (FITC and TRITC), is subsequently measured relative to the total signal for the area contained within the ring. Data were normalized against the different DNA content in the nucleus to compensate for the fact that a 3D object is observed in 2D. The chi-squared goodness-of-fit test (χ²) was utilized to evaluate whether the organization of each chromosome differed from random (p < 0.05) and to compare differences between conditions (-PHA and +PHA) in each subject.

3D Image acquisition, chromosome territory positioning and statistical analysis

The FISH protocol described above was used to prepare slides for 3D image acquisition. The images were captured using a DeltaVision (Applied Precision, WA, USA) imaging station consisting of an Olympus IX71 inverted microscope with a ×100, 1.4 NA oil-immersion lens and a
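To illustrate the statistical comparisons just described, the sketch below applies scipy's chi-squared goodness-of-fit test to a five-ring fluorescence profile, once against the uniform (random) expectation and once against the resting-state distribution. The pseudo-count bookkeeping is an assumption, as the paper does not report exactly how percentages were converted to test frequencies.

```python
import numpy as np
from scipy.stats import chisquare

def differs_from_random(ring_pct, n_obs, alpha=0.05):
    """Chi-squared goodness-of-fit of an observed ring distribution against
    the uniform expectation (20 % per equal-area ring)."""
    obs = np.asarray(ring_pct, dtype=float)
    obs = obs / obs.sum() * n_obs                  # convert % to pseudo-counts
    exp = np.full_like(obs, n_obs / obs.size)
    _, p = chisquare(f_obs=obs, f_exp=exp)
    return p < alpha, p

def repositioned(resting_pct, activated_pct, n_obs, alpha=0.05):
    """Goodness-of-fit of the +PHA ring distribution against the -PHA
    distribution treated as the expected frequencies."""
    obs = np.asarray(activated_pct, dtype=float)
    obs = obs / obs.sum() * n_obs
    exp = np.asarray(resting_pct, dtype=float)
    exp = exp / exp.sum() * n_obs
    _, p = chisquare(f_obs=obs, f_exp=exp)
    return p < alpha, p

# Example: an interior-biased CT tested against the uniform expectation.
print(differs_from_random([35, 30, 15, 12, 8], n_obs=100))
```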
‘Whose Call?’ The Conflict Between Tradition-Based and Expressivist Accounts of Calling

Research evidencing the consequences of the experience of 'calling' has multiplied in recent years. At the same time, concerns have been expressed about the conceptual coherence of the notion, as studies have posited a wide variety of senses in which both workers and scholars understand what it means for workers to be called, what they are called to do and who is doing the 'calling'. This paper makes both conceptual and empirical contributions to the field. We argue that Bellah et al.'s (Habits of the heart: Individualism and commitment in American life, University of California Press, 1996) contrast between tradition-based and expressivist understandings of 'calling' highlights a fundamental but neglected fissure in the literature. Expressivist accounts amongst both scholars and research participants require only that 'calling' be deeply felt by those who experience it. However, tradition-based accounts require an external caller. Exemplifying this, workers who attest to a divine call and scholars who write about 'calling' in the context of particular Christian traditions understand 'calling' in terms of a relationship with God. These accounts cannot but be in radical tension. We suggest that this conceptual confusion can be understood in terms of MacIntyre's notion of 'tradition-constituted rationality.' The implications of this argument for practice are evidenced in our report of a study of adherents to one such tradition: workers at a Christian organization that supports people in poverty. Through in-depth interviews with long-term volunteers, we seek to assess whether tradition-based 'calling' can be evidenced in unpaid work, since the lack of pay and career progression opportunities strongly suggests the presence of 'calling.' This study demonstrates that even in the context of work that exhibits the duty and altruism associated with expressivist accounts of 'calling,' these workers' understanding of the relationships between themselves, their clients and Jesus Christ dominates their work choices. It is the meaning derived from a divine caller, understood in terms of Christian tradition, that accounts for their decision to begin and to continue this work.

Introduction

Research evidencing the consequences of the experience of 'calling' has multiplied in recent years. At the same time, concerns have been expressed about the conceptual coherence of the notion (Bunderson & Thompson, 2019), and empirical studies have found a wide variety of senses in which workers understand what it means to be called, what they are called to do and who is doing the 'calling.' Guillen et al. (2015, p. 803) argue that "spiritual motivations" are "neglected" within this discourse and within the literature on business ethics more broadly. This paper seeks to address this gap by contributing to the literature on spiritually motivated 'calling.' We report a study of the experience of spiritual 'calling' in unpaid work by providing empirical evidence from an 'extreme case' featuring volunteers working for a Christian charity, where there are strong theoretical reasons to suggest that volunteers may be motivated by 'calling'. The workers attested to a relational account of being called to work with people in poverty by a divine caller.
Interviews with long-term volunteers demonstrate that even in the context of work that exhibits the features of pro-social impact, challenge and duty, which have been associated with 'calling,' their understanding of the relationships between themselves, their clients and Jesus Christ dominates their work choices. It is the meaning derived from a divine call that accounts for their decision to begin and to continue this work. In addition to novel empirics, this paper makes a conceptual contribution to aid analysis of the research literature itself. Bellah et al.'s (1996) foundational conceptualization held that those with a 'calling' participated in some version of tradition-constituted rationality (MacIntyre, 1988; Reames, 1998). Despite adopting Bellah et al.'s (1996) work orientation framework, which distinguishes the 'calling' orientation from job and career orientations, contemporary usage has substituted an expressivist for a tradition-based account of 'calling,' requiring only that those called find their work to be "a deep and meaningful necessity" (Cinque et al., 2020, p. 9). Expressivists understand 'calling' (alongside other normative and evaluative terms) as combining "descriptive and emotional meaning" (MacIntyre, 2016, p. 17), which requires no "authoritative standard, external to and independent of an agent's feelings, concerns, commitments and attitudes" (MacIntyre, 2016, p. 23). As recent research has demonstrated, however, employees who understand themselves as participants in a tradition account for their 'calling' in terms provided by the rationality internal to that tradition, e.g. Buddhist managers (Burton & Vu, 2020) and Quaker businesses (Burton & Sinnicks, 2021), whereas for the expressivist, deeply felt meaning is the only ground for such a claim. These conceptualizations cannot but be in radical tension with one another. We proceed as follows. 'Calling-The Recent History of a Concept' outlines a series of transformations through which Bellah et al.'s (1996) precisive conceptualization of 'calling' has become the name of a minimally defined expressivist category. In 'Called to Combat Poverty', we present evidence of the centrality of a divine call that is believed to have been received by workers in a Christian organization in the United Kingdom. In 'Discussion', we argue that the contrast between conceptualizing 'calling' in expressivist terms and as a divine call can be understood through MacIntyre's notion of 'tradition-constituted rationality.' The 'Conclusion' presents the paper's key theses and recommendations for future research.

Calling-The Recent History of a Concept

In the first decades of the twenty-first century, "usage frequency [of 'work as a calling' in the academic literature] nearly doubled" and "the steepest rise appears to be in just the past decade" from 2009 to 2019, with usage frequency quadrupling (Bunderson & Thompson, 2019, p. 422). By 2011, Wrzesniewski (2011) had already claimed that "callings have stolen center stage in our imaginations as offering some sort of special gateway to fulfillment and meaning in work" (p. 45). Recent empirical work on 'calling' has been undertaken in diverse occupational settings including firefighters (Jo et al., 2018), school principals (Swen, 2020), hotel employees (Lee, 2016), theatrical artists (Cinque et al., 2020), chefs (Cain et al., 2018) and many others (Bunderson & Thompson, 2019).
The conceptual origin of this work goes back to 1985, with the publication of 'Habits of the Heart' by Robert Bellah and his colleagues. Bellah et al.'s (1996) definition of 'work as a calling' "subsumes the self into a community of disciplined practice and sound judgment whose activity has meaning and value in itself, not just in the output or profit that results from it" (p. 66). Such a definition is distinguished from "work as a job," whereby work merely becomes "a way of making money and making a living" (ibid), as well as from "work as a career," whereby "work traces one's progress through life by achievement and advancement in an occupation" (ibid). It is important at this point to note that Bellah et al. (1996) explicitly indicate that their framework for understanding 'calling' entails a MacIntyrean theory of practices and traditions that is not fully unpacked within Habits of the Heart. Instead, readers of Bellah et al.'s work who arrive at their account of 'calling' will find that they are directed to turn to MacIntyre's landmark work, After Virtue. Those familiar with that work will not miss the MacIntyrean terminology that Bellah et al. refer to in their paradigmatic example of the ballet dancer who embraces a 'calling' (ibid, p. 66). This conceptualization rules out subjective accounts that do not meet its requirements for internal goods, disciplined practice, community, and an intensity that "makes a person's work morally inseparable from his or her life" (ibid). Critical here is what Bellah et al. take to be the normative legislator of the moral objectives that are associated with one's 'calling.' Bellah et al.'s ballet dancer illustrates a tradition-based 'calling' which entails "habits and practices" that must be "handed down in a community based on a still-living tradition" (ibid). As Bellah et al. remind us, "the stories that make up a tradition contain conceptions of character, of what a good person is like, and of the virtues that define such character" (ibid, p. 153). Importantly, the habits, practices, and identity of the ballet dancer and her community are all linked to the living tradition of ballet and the ongoing narrative that it extends. In this way, the ballet dancer's reasoning about her 'calling' is tradition-based. She cannot understand her 'calling' (or even herself) apart from these resources that a key living tradition in her life provides because, as Bellah et al. say, "what we do often translates to what we are" (ibid, p. 66). While Bellah et al.'s (1996) ballet dancer appears to be formed in light of the ongoing tradition of ballet, it also seems clear that her understanding of ballet as a 'calling' is shaped by the broader tradition of civic republicanism. This is evident in the way that Bellah et al. recount the dancer's motive to remain "devoted to an ill-paid art…so that the lives of the public may be enriched" through her performance (ibid, p. 66). The performance itself, then, appears to be motivated by the civic ideal of service to the community. This ideal, we should note, is inseparable from Bellah et al.'s argument that ballet, or any other 'calling' for that matter, "can never be merely private" (ibid). It always marks "a crucial link between the individual and the public world" (ibid). Bellah et al. distinguish their precisive notion from other types of 'calling' attributions that make it "more difficult to see work as a contribution to the whole" (ibid).
They argued that the widespread and misplaced attribution of 'calling' to explain work choices was epiphenomenal of "expressive" and "utilitarian" rationalities (ibid). Expressivist claims are described by Bellah et al. as centering around the "psychic rewards" (ibid) of being called, while utilitarian claims involved a "segmental, self-interested" (ibid) commitment to "material rewards" (ibid). By contrast, Bellah et al. (1996) understood 'calling' to require allegiance to the moral and deliberative resources of a specific tradition for its justification, a condition which rules out expressivist accounts that deny any special status to such resources. On the expressivist account, we are called inasmuch as we strongly believe ourselves to have been (Cinque et al., 2020). The distinction between the expressivist teacher or doctor who believes themselves to be called to their practice and the adherent to a religious or civic tradition who believes themselves to be called to teaching or medicine by God or the nation is the type of justification that is required to attribute the term 'call' and its cognates. In considering the late twentieth century American context, Bellah et al. (1996) identify two traditions that contribute to contemporary moral language: civic republicanism, which we have already seen applied, and the biblical tradition. Both traditions provide resources for, and an ongoing discourse about, the nature of the 'caller' and what it means to be called. The biblical tradition considers normative demands as appropriate responses to God's goodness, and the civic republican tradition vests normative demands in allegiance to the goods of community. The former attests to the "widely shared" element of "belief in God," perceived as a divine caller who may call believers to particular forms of work or service (ibid, p. 333). The latter operates as a call from the community for "public participation as a form of moral education and sees its purposes as the attainment of justice and the public good" (ibid, p. 335). In both cases, traditions supply resources that enable argument as to why we might be called to particular, good work, and to deliberate with others about its requirements. In the case of the ballet dancer, the requirements of public service that are integral to civic republicanism justify a commitment to an art form that enriches "the lives of the public" (ibid, p. 66). This type of justification requires a framework within which practitioners can evaluate and debate the merit of particular projects. Cinque et al.'s (2020) study of Italian actors in poorly paid, marginal environments distinguishes those whose 'calling' involves what they describe as therapeutic identity work from those who emphasize religious and political commitments. Whilst the therapeutic self-understanding involves authenticity and self-knowledge, something that only they can determine, those with a political or religious orientation spoke in terms of responsibility to ongoing projects of political emancipation or service. This illustrates an important distinction: adherents of a tradition can use its resources to deliberate in an action-guiding way about particular projects in a way that expressivists cannot. Just over a decade following the publication of 'Habits of the Heart,' empirical support for the differentiating effects of 'job', 'career' and 'calling' began with Wrzesniewski et al.'s (1997) study, "Jobs, Careers, and Callings: People's Relations to Their Work."
For this project, Wrzesniewski led a research team that set out to: present evidence suggesting that most people see their work as either a Job (focus on financial rewards and necessity rather than pleasure or fulfillment; not a major positive part of life), a Career (focus on advancement), or a Calling (focus on enjoyment of fulfilling, socially useful work). (Wrzesniewski et al., 1997, p. 21) The familiarity of these names for work is intentional. After all, as Wrzesniewski acknowledges, "the inspiration for our approach came from Habits of the Heart" (ibid, p. 22). Being so inspired, her team developed three hypothetical 'work orientations': A, B, and C. Narratives about each hypothetical person's motivations for working accompanied their title, and research participants were instructed to indicate which of the three narratives they most identified with. Wrzesniewski et al. designed A to align with the 'Job' orientation, B the 'Career' orientation, and C the 'Calling' orientation. Conclusions from the study have been widely cited and indicated that research participants have no problem identifying which of the three hypothetical workers best describes their own workplace motivations (ibid). Wrzesniewski has maintained that her understanding of the 'calling' orientation, as distinct from a 'job' or a 'career,' is aligned with Bellah et al.'s account of 'calling,' and that, like the work orientation of Messrs. A, B, and C, these "three categories represent three different work orientations, which guide individuals' basic goals for working, capture beliefs about the role of work in life, and are reflected in work-related feelings and behaviors" (Wrzesniewski, 2011, p. 47).

Mr. C's work is one of the most important parts of his life. He is very pleased that he is in this line of work. Because what he does for a living is a vital part of who he is, it is one of the first things he tells people about himself. He tends to take his work home with him and on vacations, too. The majority of his friends are from his place of employment, and he belongs to several organizations and clubs relating to his work. Mr. C feels good about his work because he loves it, and because he thinks it makes the world a better place. He would encourage his friends and children to enter his line of work. Mr. C would be pretty upset if he were forced to stop working, and he is not particularly looking forward to retirement. (Wrzesniewski et al., 1997, p. 24)

Mr. C understands his 'calling' in reference to his 'personal interests' and not in reference to tradition-based norms that mark his primary values or goals, as the ballet dancer does in Bellah et al.'s illustration. Mr. C's 'calling' is understood subjectively rather than communally and exhibits no clear relationship to a caller. Furthermore, Mr. C's meaningful relationships do not appear to involve relationships with other adherents of a tradition, another point of departure from Bellah et al.'s example of the ballet dancer, who is bound together in meaningful relation with others by a mutual adherence to the living tradition of ballet, construed pro-socially in light of the broader civic tradition. Instead, Mr. C appears to merely be pleased by his relationships with other agreeable persons in the office.
In other areas of her research, Wrzesniewski admits that more work is needed to understand how ideas about 'work as a calling' exhibit the competing priorities of "helping others" versus "helping oneself," but what seems clear here is that Wrzesniewski maintains at least some shell of Bellah et al.'s (1996) account. Wrzesniewski et al.'s (1997) account obscures the idea that one's 'calling' can never merely be motivated by private interests; rather, it must always link the individual's good with the common good under Bellah et al.'s (1996) view. Nonetheless, Wrzesniewski's work continues to provoke more research and further case studies on Bellah et al.'s concept of 'work as a calling.' The growth in publications and interest in the concept of 'calling' required other researchers to take up Wrzesniewski et al.'s (1997) call for further research. Vocational psychologists Dik and Duffy are the most regularly cited scholars within the interdisciplinary discourse on 'calling,' with citations exceeding 7000 per a recent Google Scholar report. They formally define 'calling' as "[a] a transcendent summons, experienced as originating beyond the self, [b] to approach a particular life role in a manner oriented toward demonstrating or deriving a sense of purpose or meaningfulness and that [c] holds other-oriented values and goals as primary sources of motivation" (Dik & Duffy, 2012, p. 11). Unlike Wrzesniewski et al. (1997), Dik and Duffy's (2012) notion of 'summons' requires a relationship to an authoritative caller to whom recipients must respond; this also implies both that not all are summoned and that the summons may come at any time. Both of these conditions indicate that our own identity be understood in relation to what the summoner is and what they may require of us. To acknowledge that we may be summoned makes sense only to those whose self-understanding includes this relationship, not as an incidental or possible feature of our lives, but as a permanent and definitive feature of it. Unlike Wrzesniewski's secular definition, Dik and Duffy's inclusion of a 'transcendent summons' more closely resembles Bellah et al.'s understanding. However, their argument that this definition "intentionally leaves open the content of the perceived source or sources of callings, which may range from God to the needs of society to serendipitous fate" (Dik & Duffy, 2009, p. 427) contrasts markedly with Bellah et al.'s (1996) precisive definition. To base a claim for 'calling' on serendipity represents a variety of that very expressivism that Bellah et al.'s (1996) definition excludes. Serendipity requires a degree of reflexivity but does not require the acceptance of normative demands originating beyond the self. In a more recent paper, Dik and Duffy again emphasize that "identifying the source of a summons externally is often now thought of as the exception rather than the rule" (Duffy et al., 2014, p. 564). It is notable that when writing for a Christian audience, Dik adheres to the Biblical notion of 'transcendent summons' and makes no mention of serendipity. In his 'Redeeming Work: A Guide to Discovering God's Calling for Your Career', Dik much more narrowly speaks of a 'transcendent summons' as a divine call from God to particular work, arguing that the book itself "walks you through how you can discern and live God's calling within your career path" (2020, p. 24).
In work drawing on the resources of a specific tradition, concepts need not be expanded to enable application beyond that tradition, for example to a notion of serendipity that has no place in the Biblical worldview. Bellah et al.'s (1996) insistence that the attribution of 'calling' requires the resources of a specific tradition for its justification finds further support in Bunderson and Thompson's highly cited research. Bunderson and Thompson join Dik and Duffy in their critique of the modern, predominantly self-seeking notions of 'work as a calling.' They argue:

Whereas classical views of 'calling' may have emphasized destiny, duty, and discovery, modern conceptualizations, in line with our modern emphasis on expressive individualism, reflect an emphasis on self-expression and self-fulfillment. Under this view, 'callings' are expressions of internal passions and interests and are pursued for the enjoyment and fulfillment they can bring and not out of any sense of societal duty or obligation. (Bunderson & Thompson, 2019, p. 430)

Bunderson and Thompson suggest that the increasingly popular 'work as a calling' literature grounds the meaning of one's 'calling' in one's passions and interests, or in one's preferences. When 'callings' are reduced to 'internal' passions, then there is no 'external' moral source to point to, which implies a divorce between the caller and the called. Consequently, meaning is not understood in light of some 'transcendent summons' to a specific 'calling' that contributes to the good of individual lives and communities, but rather in light of what pleases workers, or in light of what workers think will make them happy. To resolve this conundrum, Bunderson and Thompson propose that one's 'calling' "is a conviction-often felt as a sense of destiny or fit-that a particular domain of work leverages one's particular gifts and consuming passions in service of a cause or purpose beyond self-interest" (2019, p. 432), a description which "integrates outer requiredness (as per neoclassical definitions) with inner requiredness (as per modern definitions)" (ibid) in the hope of reaching a "solution to the definitional stalemate in the calling literature" (ibid). The very possibility of such a definition sets them at odds with Bellah et al.'s (1996) understanding of the tradition-dependence of attempts to justify any claim that an agent may make to have responded to a 'calling.' Insofar as modern notions of passion root a sense of meaning in the therapeutic fulfillment of one's preferences, these notions are antithetical to Bunderson and Thompson's vision. This becomes evident when, alongside Dik, we note that Bunderson and Thompson's work on 'calling' presupposes distinctly Biblical premises when addressing a Christian audience. Their latest book 'The Zookeeper's Secret: Finding Your Calling In Life' (Bunderson & Thompson, 2018) is thoroughly contextualized within the Mormon tradition. They argue here that their understanding is:

a product of our study of scriptural teachings and gospel principles as they relate to work and its place in a disciple's life. We have found that the restored gospel of Jesus Christ has a great deal to teach about finding your calling in life. In fact, just as we believe that the greatest wellsprings of family happiness flow through those who center their lives on the Savior's teachings, we testify that the greatest fulfillment from work is only available when you build your career path on Jesus Christ's gospel. (Bunderson & Thompson, 2018, p. 127)
Unlike their 2019 argument for a definition of 'calling' that reconciles neoclassical and modern understandings, their 2018 work sits firmly within the former. The worker responds to a divine caller not because their call stimulates a response to inner passions but rather because the identification of the call with a divine caller requires no other response. Bellah et al. (1996) presented a precisive account of 'calling' which explicitly excluded expressivist and utilitarian notions. With rare exceptions such as McPherson (2012), Burton and Vu (2020), and Vu (2020), who contextualized 'calling' and meaningful work within specific moral and religious traditions, most subsequent researchers have extended the concept so that it requires only the experience of being called (Cinque et al., 2020), thereby admitting expressivist interpretations. By contrast, Bellah et al. (1996) argued that only a precisive definition could be coherent with distinct traditions that both inform the notion of 'calling' and require their adherents to respond. Once the caller is identified (God in the Biblical tradition, the Nation in civic republicanism), the call requires a response. It is the relationship between the caller and the called that is necessary. This relational notion is conceptually and theoretically distinct from the expressivist understanding in which 'calling' describes an experience, for an experience, whether of serendipity or of inner passion, does not carry with it the moral urgency of responding to a call from God or the nation, nor is it available for the type of dialogue that adherents to a tradition might engage in to evaluate projects in light of their 'calling.' Such was Bellah et al.'s (1996) argument and conceptualization. This literature review argues for the first time that later scholars have misconstrued Bellah et al.'s (1996) 'calling' framework, showing that there is dissonance between the modern, expressivist conceptions of 'calling', such as that proposed by Wrzesniewski et al. (1997), and what Bellah et al. (1996) originally meant by the notion. The expressivist understanding detaches 'calling' from tradition and is evident in countless studies, even in those which speak of a call from God, such as the work of Dik and Duffy or Bunderson and Thompson, as we have shown. In order to illustrate the distinctiveness of 'calling,' understood as Bellah and his colleagues understood it, Sect. 2 demonstrates what the tradition-based notion of 'calling' can look like in practice by providing evidence from a context in which work that combines many of the elements commonly found to be associated with 'calling' (autonomy, challenge, prosocial outcomes and feedback) is undertaken by research participants who are practicing Christians. If we wish to better grasp the experience of a divine call for the person of faith, we should especially consider those cases where it seems clear that the integral relationship between the divine caller and the called has not been broken. It is toward the establishment of a method for carrying out research on this experience and its deeply motivating and meaningful nature that we now turn.

Called to Combat Poverty

As mentioned, this research demonstrates what the tradition-based notion of 'calling', as understood by Bellah et al. (1996), can look like in practice by investigating whether spiritual 'calling', that is, being called by God, can be evidenced in the context of unpaid work.
We therefore contribute towards two gaps in the 'work as a calling' literature. Firstly, we present evidence of spiritual 'calling' from the context of a specific tradition, as Bellah et al. intended. Secondly, our research focuses on unpaid rather than paid workers, whereas most of the empirical work surrounding 'work as a calling' concerns the latter. If, as Bellah et al. (1996) argued, moral justification employing the resources of some distinct tradition is an ineliminable feature of 'calling,' its examination requires both an appropriate context and an interpretivist (Pulla & Carter, 2018) approach to the analysis of qualitative data. Our data collection involved four semi-structured interviews with long-standing volunteers in the context of a Christian charity combatting poverty in the United Kingdom. 5 Rather than producing generalizable findings, it is important to provide a rich account, for which a small number of key informants is sufficient (Saunders & Townsend, 2016), for the purpose of this paper is to provide an example of where workers' sense of 'calling' is clearly tradition-based to illustrate our theoretical arguments, not to form arguments based on the empirical data alone. The first-named author, who had worked in the organization though not with the research participants in this study, supplemented a consistent set of interview questions with active listening to prompt participants when she judged necessary and to elicit their rationales for joining and continuing to serve. In particular, she sought to explore the relationship between faith and their directedness towards work. At the time of data collection (February 2018), the participants had occupied their roles for between two and eight years. All volunteers happened to be female; two were aged 65+, one was in the 51-64 category, and the fourth was 41-50. Participants were chosen on the basis that they were Debt Help Managers, Debt Help Coaches or Group Work Managers (for these positions involve greater responsibility than Group Work Coaches), had conducted their role for a minimum of one year, thus demonstrating long-term commitment to the work, and worked voluntarily. Identifying participants for these semi-structured interviews required assistance from a gatekeeper known to the first-named author. Participants were initially approached via email, and interviews took place for around one hour in a location of the participant's choice to encourage receptivity. Interviews were recorded for accuracy and later transcribed, facilitating analysis. Semi-structured interviews boast flexibility, permitting the interview to follow the natural flow of conversation, while also allowing for detailed responses that help to develop a holistic picture (O'Leary, 2017). While initial questions are intended to frame the conversation, this opportunity to probe also allows researchers to deepen the most relevant responses (Fisher & Kirby, 2014; Rowley, 2012). Semi-structured interviews help to develop a rich picture of the workers' structures of consciousness, beliefs, and intentions. Interview questions (see Table 1) were crafted based on ideas from the relevant literature.
For example, the question of what the participants would change about their work relates to Wrzesniewski and Dutton's (2001) theory of job crafting, and questioning what would cause the volunteers to leave their role emulates Bunderson and Thompson's (2009) findings that the zookeepers who viewed their work as a 'calling' would continue their work unpaid if the situation dictated, signifying low intentions to resign. The interviewer avoided asking direct questions relating to meaningfulness and 'calling,' instead opting for open, non-leading questions allowing the participants to share from their own perspective (Hennink et al., 2020). Additional probes were given where the interviewer sought further detail or clarification on specific topics which arose in the participant's response (ibid), such as evangelism or the influence of God in their decisions, eliciting an enhanced rationale for their choices to enable the researcher to assess the tradition-dependence of the participant's account. This paper was produced using research from the dissertations of two of the authors. The data was analyzed using King's (2012) template analysis, which provides a flexible approach to hierarchical coding, allowing templates to be adapted to the needs of a particular study (Brooks et al., 2015). Template analysis is popular in various disciplines including organizational studies and has also been used in recent articles on meaningful work (e.g. Vu & Burton, 2021). Two rounds of analysis were undertaken on each interview transcript, allowing themes to emerge. This analysis revealed that the workers found meaning in their work from several sources and for various reasons (such as the workers' previous experience and the opportunity to lead other team members); these formed the initial themes, all of which were connected to the volunteers' relationship with Jesus Christ and their experience of 'calling'. This paper largely makes use of power quotes (Pratt, 2009) for reasons of space. In addition to our research subjects' experience of receiving a divine call, we should also note one other crucial way that our research differs from the existent literature on 'work as a calling.' While the interdisciplinary literature tends to examine the experience of 'calling' amongst paid employees, there is much less literature focusing on volunteers. We find this surprising, especially considering that unpaid work clearly fell within Bellah et al.'s (1996, p. 88 and elsewhere) notion of 'calling.' Subsequently, Tipton (2018), who originally drafted the section on 'calling' within Habits of the Heart, has continued to write about the applicability of this orientation to volunteer contexts, particularly in his discussion of those who extend, or "re-create," their true 'callings' in retirement. He argues that, whether for pay or not, there is "no release from such a call" as that which the civic and biblical traditions outlined to continually "learn to serve the good…through deliberation and discipline, and to come to rule oneself in order to rule and be ruled justly" (Tipton, 2018, p. 182). Before or during retirement, Tipton claims that volunteer work construed as a 'calling' provides a morally meaningful way for individuals to have a hand in "enlivening the common good" (Tipton, 2018, p. 65). As we intend to show in what follows, doing work "that matters" in this way, so Tipton also states, "abounds in religious and civic groups" (Tipton, 2018, pp. 64-65).
Our analysis of voluntary staff members, all of whom align with some denomination of Christianity, addresses this gap. 6 In previous centuries, theorists largely distinguished between paid employment and domestic work, believing that paid employment exists in the public sphere and domestic work in the private (Taylor, 2004). This is thought to be because this dichotomy aligns with the perceived gender stereotypes of that time (ibid), yet the distinction fails to address volunteering and other new forms of work such as zero-hours contracts and unpaid internships (Kelemen et al., 2017). Other scholars have perceived volunteering to be a leisure activity rather than work because it is something individuals choose to do rather than must do, though Overgaard (2019) highlights that this view has lost ground. Implicit in arguments against conceptualizing volunteering as work is the view that work is reducible to employment, which is certainly not the case (Taylor, 2004). More recently, Overgaard (2019, p. 129) and others have argued that volunteering should come to be understood as unpaid work, for it is "in fact and before all else, unpaid labour." By this, Overgaard is not saying that all volunteering should be considered work, for we need to refrain from thinking of volunteering as one form of activity; instead, we should focus on its content. She claims, "when the same tasks take place under highly structured terms, in the same physical settings and alongside paid staff, under similar managements, and with economic and service-level gains for the organizations, we must recognize it as work" (ibid, p. 133). The charity featured in this study does not employ any of its frontline staff members. It operates its services in partnership with local churches, who pay a fee to the organization in exchange for the resources provided by the head office team. The services are largely led by members of the church funding the service. Frontline staff members are approved and managed by head office employees, and whilst no frontline workers are paid by the organization, many, though not all, are paid by the church. Both paid and unpaid individuals conduct the same work; furthermore, the organization provides the same support and has the same expectations of all frontline staff members. It cannot, then, be the case that the work of paid frontline staff members counts as work while that of unpaid members does not, when their roles are identical. Why then focus on volunteers rather than paid employees? Whilst paid workers can and do indeed view their work as a 'calling', volunteer work has been labelled "exceptionally meaningful" (Florian et al., 2019, p. 595), and so such an extreme case makes a novel contribution to this literature. Extreme cases have been argued to provide greater insights than a typical case, as in extreme cases the phenomenon in question is more intensely visible (Buchanan, 2012; Flyvbjerg, 2006). It is in populations of working volunteers that we have theoretical reasons to anticipate the presence of those who, unmotivated by money, might have experienced a 'calling.' Our participants turned out to be particularly articulate about understanding their work as a 'calling' and experiencing a call from a divine caller. Each of them attested to the experience of a divine 'calling' and attributed their decisions to join, remain and continue working for the charity to this call.
Neither the benefits that their work generated for clients who routinely suffered from poverty nor the autonomy and challenge that their work presented daily provided sufficient reason for them to continue to volunteer. Only their relationship with Jesus Christ provided the rationale for their continued commitment. We shall present responses from each of these four study participants, but first, it is necessary to describe the organizational setting, its clients, and the typical work that volunteers undertake. The organization is a multinational charity that tackles debt, poverty, and their causes. It also works with clients on emotional and mental health issues, features that are not often found in debt-counselling charities and agencies more broadly. Established in the 1990s, the organization spans several countries; our research was undertaken in the United Kingdom. The organization offers four services: debt counselling, employment support, addiction relief, and life skills. 7 As previously mentioned, the organization works in partnership with local churches, running hundreds of centers across the country. Client-facing work is undertaken by highly trained frontline staff members, and the role of such workers is to aid clients in debt relief, lead them in finding freedom from addiction, assist them in finding work, and help them to develop important life skills. With neither pay nor the possibility of career progression, it is clear this falls outside Bellah et al.'s (1996) understanding of 'work as a job' and 'work as a career.' For each participant, the experience of a divine 'calling' did not involve hearing a message from God, but rather a belief that only God could have brought about the opportunity to volunteer with the charity, and that God had been spiritually and practically preparing them for the role. This resulted in a strong conviction to undertake the work. Volunteer 1 describes this first-hand experience of receiving a call from God in the following way. She recounts, with a sense of God's providence, that the new church she was visiting close to her home was hosting an event to introduce the work of the charity. She believed her experience as a teacher and a career guidance counselor prepared her for the frontline volunteer role, specifically that of Employment Support Manager. "It just sort of seemed as if everything from my past and my experience came together," she said. Volunteer 1 went on to discuss how "it really did feel that God was using opportunities or experiences that I had had and all the training I'd had to open it up so that we could still be doing something useful in our old age." Hence when the question was posed as to whether she felt that God created the opportunity for her, she responded with a resounding, "Oh yes." Similarly, Volunteer 2 recounted how the experiences she had earlier in her career prepared her for what God needed her to do as a Debt Help Manager for the charity. She explains how her past prepared her in the following way:

I used to be an occupational therapist, so I'm used to dealing with visiting people in their own homes, often in absolutely dire circumstances, and I'm pretty good at any sort of people, I'm good at working with disadvantaged people and I can relate to sort of service providers and I worked for a charity that supports people who care for someone at home for ten years, and so that was meeting people in crisis, dealing with service providers, and doing that sort of thing.
Volunteer 2 went on to discuss that, while reading through the book of Philippians, she began to sense a divine call to give back in a way that her career had been preparing her for, and so when the opportunity opened up, she said it seemed as if "it kind of fell into place and I kind of feel now, and this just came out the other day, that it's almost like what I do with [name of organization] is like a combination of everything else I've ever done, as if everything has been leading me to that." So, when asked whether she believed that God played a part in her initial engagement with the charity, she responded with assurance, "Absolutely." Volunteer 3 also recounted a divine call and the belief that God had been preparing her for this work. Serving in the capacity of an Addiction Relief Manager, she recalled how, through her experience with addicts as a health professional, God had prepared her for this role. Believing that the important healing component of spirituality was largely missing from her work in the British National Health Service, she wanted to serve an organization with a similar set of objectives, but one that took "God's healing touch" seriously. Volunteer 3 recalled that her initial interest in the charity "was something God initiated!" "It was something He very much put on my heart and I took to the leaders to tell them about it, and we went forward together," she said. Volunteer 3 spoke with enthusiasm about the sense of emotional 'calling' from God that drew her "heart" to the work of the charity. Similar to the emotional experience of a call from God recounted by Volunteer 3, Volunteer 4 also spoke of a call to serve in the capacity of a Debt Help Coach. She described this as both an "emotional" and "faith-based decision," implying that the emotions that came up for her were connected to the faith-based nature of the decision she realized she needed to make. Much like the other volunteers, Volunteer 4 also strongly emphasized her belief that, while persons without faith in God might very well be able to "do the financial bit" of her role and also "emotionally connect" with individuals served by the charity, it was the relationship that she and others had with God that allowed them to "bring an extra dimension" to the lives of those that they felt strongly called to serve. She stated that this vertical dimension of volunteers' work enables them to "fill a space," or a spiritual void, for people, a void that only God was capable of filling in the lives of those she felt called to help, too. Volunteer 4 explains this in the following way:

I think as people of faith we can give them hope for the future. It [faith in God] gives [those being served] a wider sense of living, that their lives can change, not just on the outside but on the inside, and I think, yeah, people [without faith in God] could do it, but I don't think they could do it where they could see that [similar] sort of change [as people with faith in God have seen].

Providing individuals with a wider sense of living and connecting on matters of spiritual importance enables Volunteer 4 to foster deep connections with those whom she serves. These deeper connections that she attributes to her faith give her relationships a kind of long-lasting "momentum." "I've got clients who I still connect with from eight years ago who I have coffee with," she said.
It's this belief in the spiritual transformation of a life through an encounter with God that inspires her and others to say, when they come alongside those they serve, "We are a Christian charity. We believe that God can change things for you." Clearly, Volunteer 4, much like the other three volunteers, attributes the personal transformation stories that she partakes in to the same God that called her to serve. Each of these four volunteers described a divine call to their work, believing that it is God who drew them to the work, prepared them for the work, and continues to sustain them in the work that they do, allowing them to provide the fullest possible support to those whom they serve. It is clear that their accounts explicitly entail a preservation of the important connection between the divine caller and the called. Each of the volunteers attested to the belief that they would not be able to deliver the same quality of service apart from a strength they exhibit as a result of their divine call to serve. What we shall go on to see is that these volunteers derive a great amount of both stamina and meaning from their enduring call to serve in the unique ways for which they believe God has, by His grace, both practically prepared and naturally gifted them. Even when the work gets difficult, these volunteers are reminded of and highly motivated by the higher spiritual purpose of their work, believing that God graces them with an endurance they would not otherwise naturally possess. This provision stems from their divine call to serve and gives them the strength to carry out their responsibilities, even when they are tempted to give up. 8 In this sense, the motivation appears to derive not so much from the 'calling' as from the caller. The prominence of God is a consistent theme within theistic traditions such as Christianity, Judaism and Islam, even to the extent of martyrdom. Johnson and Zurlo (2014, p. 683) claim "the motivations of the killed" are a defining factor of martyrdom; it is a martyr's refusal to deny their beliefs that often leads to their death. Volunteer 1 described her enduring commitment to the charity by telling a story intended to illustrate the ways in which God has worked through her prayers to both enrich others' lives and keep her motivated when the going gets tough:

We've just had a really good encouraging twelve months with what we're doing. The [employment support service], part of it has been a bit up and down for various reasons which we might come onto later, but we've seen people coming into work, finding work, we've seen God answering prayers, I just remember one lady who was a bit iffy about the whole thing and when I was doing a session, a one-to-one session talking about her particular situation and I said well you know, would you like me to pray for you for anything specific and she said 'oh yeah … I want a job and it's got to have this, this and this.' Very, very specific. There were about four or five different things. So, we prayed, and she sort of waltzed up the next week and said, 'I've got a job, and it's got this. Do you remember we prayed it had to have this, this, and this?' And I couldn't even remember all the criteria that she put there, but it was just phenomenal the way God had answered that prayer… It's made a big difference in her life and so, you know, little incidents like that are what keep you going and you think yeah, this is it.
Herein, Volunteer 1 explains that the circumstances over the last 12 months of service had "been a bit up and down," but that God gave her the determination to stick with the work during moments of despondency. She later went on to describe a primary source of her despondency, saying "because of me and the way I operate, I do find some of the contacts with external agencies quite challenging because it can be difficult … getting the message [of the charity] across." At times, Volunteer 1 admits, the boredom expressed by members of these external agencies wears on her, leaving her feeling downtrodden. This is especially the case when she all-too-often encounters external agency workers who don't exhibit a sense of 'calling' to their work, but rather seem to have an "I've just got to get through this" look on their face when it comes to the services that they are instructed to provide by their organization. Nonetheless, seeing the ways that God has answered Volunteer 1's detailed prayers for those whom she serves gives her the stamina to "keep going." Despite the fact that not everyone with whom the charity partners always believes in its mission and values, Volunteer 1 exhibits a belief that God regularly shows her the way her 'calling' for her life as an Employment Support Manager is making a difference. It is the revealed "aspects of actually seeing people's lives change and seeing people blossom", particularly when she sees God answering prayer and moving in the client's life through her work, that cause Volunteer 1 to remain motivated in her role and derive a great amount of meaning from it. For Volunteer 1, it is "the opportunity to spend a lot more time with people, it's not just ticking boxes and getting through [the journey out of debt], it's spending time with them, sharing your story and being able to share the gospel if it's appropriate and when it's appropriate" that makes the greatest difference for the client. For this reason, she holds that the role could not be occupied by someone outside the Christian faith, or indeed, it could be said, by someone who will not respond to Christ's call to "Go into all the world and proclaim the gospel to the whole creation" (Mark 16 v 15, English Standard Version). Volunteer 2 believes that it is her clear and personal experience of a 'calling' to this work that marks the key difference in her commitment during seemingly impossible situations. Volunteer 2 spoke with some astonishment about the fact that she has even remained committed to the charity for this long. For example, recounting "such a huge uphill struggle" when she almost resigned for good "twice over Christmas, because of a most horrendous first client," Volunteer 2 remembers thinking to herself, "I'll never be able to do this, I can't do it, I just cannot do it." In past workplace environments, Volunteer 2 remembers feeling similarly and actually leaving these other roles to find new work. "I've never really enjoyed my jobs, you know. When I was working as an occupational therapist, I never really felt that it was a 'calling' in that sense." But this work has been different for Volunteer 2 because she derives a great amount of meaning from seeing people's lives change, mostly by "seeing people's lives completely transformed by giving their lives to Jesus" and being able to equip other volunteers to help bring about this life change.
In particular, she highlights the enjoyment of "empowering other people and putting them on a pedestal" by providing fellow volunteers with the opportunity to evangelize, allowing them to disciple a client new to faith or be "the eyes and the ears…to pick up whether God wants to say something to this client" during a visit. Both a recognition of her 'calling' to the work and her belief "in the ethos of what [the organization] does" give Volunteer 2 a kind of stamina that she finds unusual based on her recent difficulties settling into previous job roles. In her current role with the organization, she remembers thinking, "I didn't want to let this girl and these clients down, and I suppose as well I didn't want to let it beat me." Having left several "bit jobs" and other volunteer posts in the past towards which she did not sense any real 'calling,' she now finds within herself a desire "to work really hard at the budget visits" and other aspects of her role that challenge her. Because of these challenges, Volunteer 2 says that "every six months or so I just think I can't do this anymore, really can't do it anymore," but then six more months of volunteering with the charity seem to pass right by, surprisingly. Whilst she appears to still be conscious of the fact that this 'calling' may not last forever, she exhibits a great degree of stamina and derives meaning from "seeing people's lives completely transformed by giving their lives to Jesus…and seeing people grow in confidence and also go debt free." Much like Volunteer 1, the workings of what Volunteer 2 attributes to God's grace in her life and the lives of those she both serves and volunteers with keep her motivated each time she feels she is at her wits' end. This theme of struggle with clients who exhibit a certain degree of dysfunction appears to be common to all of the volunteers who were interviewed, but answering the caller and seeing "God working," seeing people "become Christians," seeing "people walking free from gambling and alcohol and becoming part of the community by starting to serve within the church" are all recurring and deeply meaningful experiences that keep Volunteer 3 motivated, too. Due to these signs of transformation, Volunteer 3 endures the difficulties because, "when it's tough, I know God's called me to do it," she says. Volunteer 3 also derives a great amount of meaning from her encouraging community of fellow volunteers and Christian believers. She recounted that "there's been some quite clear words spoken over me over the years about that ['calling' to serve with the charity], so, yeah, that's always something to fall back on when it's tough." Because of the difficulties that she believes she couldn't face within her role apart from God's grace, she believes that although the course itself is open for "anybody of any faith and no faith" to attend, "to really make it effective you need that extra spiritual dynamic." To be clear, Volunteer 3 does not embrace just any general view of reliance upon some abstract idea of higher power, which might merely be thought to create some sort of positive placebo effect in her life and the lives of others.
Rather, she says that "Jesus is that higher power and he's the one that keeps us going and can do that inner healing we can't actually do, or we could maybe trick ourselves into thinking we can do, but I don't think that lasts really." Her meaning and motivation clearly stem from the ways that she sees God, rather than something like serendipity, speaking into her life through other believers and using her as a vessel for positive change. Like the other volunteers, Volunteer 4 also said that aspects of her work pushed her "completely out of [her] comfort zone." However, seeing her work as a 'calling' and remaining conscious of the ways that God is using her, too, Volunteer 4 is willing to stretch herself. While this personal and spiritual growth can be challenging at times, she derives a great degree of meaning from her work as a Debt Help Coach. As she said, "I just like being with other people and I like seeing them change, you know. I like seeing people who once would only speak to me through a letterbox now out and about in the community on their bikes." That being said, Volunteer 4 was very transparent about the despondency that still sneaks up on her, and she particularly dislikes "having to go backwards and forwards" with clients on paperwork, which makes work/life balance difficult. "If you don't switch your [organization name] phone off, people do start speaking to you all day every day," she says. "There's always the challenge of margins in your life, you know: how much margin have you got for your family, how much margin have you got for yourself?" Still, her divine 'calling' to the work has kept her motivated for eight years, and as a result, she believes that she may be able to go until "70, but who knows!" Like the other volunteers, Volunteer 4 observes the ways that the same faith which gives her an uncanny degree of stamina also transforms the lives of those whom she serves. It is the experience of sticking beside the people God has called her to serve through the sometimes-slow process of change, and of being a witness to it, that Volunteer 4 finds to be the most meaningful aspect of her work. "You see some really interesting people, there's real, interesting, and difficult circumstances that change," she says. Findings from the semi-structured interviews furnish us with a deeper understanding of how the experience of a divine 'calling' can motivate volunteers working for this charity to persevere through the challenges such work brings. While they could partly derive self-realization and provide a service to others without this call, participants' perception of the role's difficulty suggests that personal fulfilment and a desire to help others are insufficient sources of motivation by themselves. Despite short or sometimes long periods of despondency, each volunteer seems conscious of an endurance that they are graced with by God, and carrying out this work that He would have them do for the common good is the most meaningful aspect of each volunteer's 'calling' and their lives more broadly.

Discussion

Volunteers for the charity present clear examples of responding to a divine 'calling.' Each of them interpreted circumstances in terms of divine planning, and once it became clear to them that God had prepared them for the task, they (albeit sometimes reluctantly) obliged.
It is evident that the workers' conception of 'calling' does not align with the expressivist view, including Dik and Duffy's 'transcendent summons' and Bunderson and Thompson's 'classical view'; rather, it draws on the resources, the language and beliefs, of the (broadly understood) Christian tradition. Note immediately the contrast between the volunteers' accounts and discussions of serendipity in the extant literature. The characterization of circumstances as serendipitous provides the secular agent with an account of the confluence of circumstances that the believer not only can but must interpret as part of a divine plan. Even at the level of the characterization of circumstances, the tradition-dependence of interpretation is clear, as Bellah et al. (1996) argued. It should be remembered that Bellah et al. (1996) were strongly influenced by and regularly met with the moral philosopher Alasdair MacIntyre during the writing of Habits of the Heart. MacIntyre's influence is referenced within the preface and multiple footnotes of Habits of the Heart. It remains most evident in Bellah et al.'s employment of terminology from his corpus, such as 'practices,' 'narratives,' 'traditions,' 'moral communities,' and so on (ibid). Furthermore, in a public address promoting the release of Habits of the Heart, Bellah (1986) made it clear that this connection was not intended to be a secret when he reflected on "one of the principal arguments of Habits as a whole, namely, some of the thinking of Alasdair MacIntyre, particularly as it's expressed in his book, After Virtue" (p. 5). MacIntyre was present for many "research meetings" where he provided "suggestions" that went on to become the normative framework for Bellah et al.'s definition of 'calling' (Bellah et al., 1996, p. xlvi). It is within their account of 'work as a calling', in Chapter 3 of Habits of the Heart, that all the aforementioned terminology from MacIntyre's corpus appears. This happens alongside the tenth footnote of the chapter, which points readers to Chapter 10 of After Virtue, wherein MacIntyre states that morality "is always to some degree tied to the socially local" (MacIntyre, 1981, p. 130). Bellah et al. (1996) presented the traditions of utilitarian and expressivist individualism, of civic republicanism, and the biblical tradition in just this sense within the history of the United States. During the period when Bellah et al. were drafting the first edition of Habits of the Heart, MacIntyre was working on the sequel to After Virtue (1981), the text that was to become Whose Justice? Which Rationality? (1988). With this text, a concept of tradition that was predominantly deployed as a sociological category in After Virtue (1981) became epistemological, but it was no less consistent with Bellah et al.'s (1996) Habits of the Heart for that. The central argument of Whose Justice? Which Rationality? (1988) was that there was no independent rationality that could enable reasoners to make definitive judgments between the claims of rival traditions. What appears rational in one tradition may and often does conflict with the rationality of another; for example, attributing a patterned confluence of circumstances to pure chance appears to adherents of another tradition, such as our participants, to involve a denial of God's active participation in their lives.
The resources of traditions provide us not only with a stock of characterizations (for action, time, circumstances), as we have seen, but also with our evaluative procedures, whether those that require us to respond if we believe God has called us to do so or those that require us to undertake an investment appraisal or a risk analysis. On MacIntyre's (1988) account, such procedures depend for their coherence on wider sets of connected beliefs that justify the decision procedures and thereby the decisions that result. Although MacIntyre has not engaged explicitly with the traditions that were highlighted in Habits of the Heart (Bellah et al., 1996), he has illustrated debates and personal conversions between traditions in a variety of contexts, including the Catholic tradition (MacIntyre, 2009), the Scottish Enlightenment (MacIntyre, 1988), phenomenology (MacIntyre, 2006) and Modern Morality (MacIntyre, 2016). In the latter text he engaged with the concept of 'vocation', which parallels that of 'calling' (McPherson, 2012), in narratives about Vasily Grossman's and Denis Faul's quests to flourish as independent practical reasoners (MacIntyre, 2016, pp. 264 and 298). Much like Bellah et al., MacIntyre describes how tradition-based "practical reasoning" is learned via one's 'vocation' and how it sustains human lives, such as Grossman's "ruthlessly truthful self-questioning" (ibid, p. 264). His "vocation as a writer…directed his actions toward the ends mandated by this task" (ibid). The pursuit of those ends associated with his 'vocation,' so MacIntyre says, "gave finality to his life" in ways that the "modern sense" of happiness fails to recognize but that the term "eudaimon" captures quite well (ibid). Despite Grossman dying an unhappy man by all modern standards, "what was crucial was his now unwavering commitment to a task that gave point and purpose to everything in his life" (ibid). Like Grossman, MacIntyre also describes Faul as someone who exhibited a clear sense of calling. "Faul was only nine years old when he decided that he had a vocation for the priesthood, and in this intention, he never wavered" (ibid, p. 298). He submitted himself fully to this practice, and "the reasoning that found expression in his everyday life took its beginning from premises about what both natural and divine law prescribed and permitted for someone such as himself" (ibid). MacIntyre earlier featured the relationship between divine calling and particular types of work in noting Edith Stein's discovery that God can be worshipped through scholarship (MacIntyre, 2006, p. 177). In introducing a chapter on conversions, he argues further that it is a misconception that "to be converted to some particular form of belief in the God of the great theistic religions is necessarily to move beyond reason and perhaps against reason" (MacIntyre, 2006, p. 143). Rather, on MacIntyre's account, to deploy any language of decision is to draw on the resources of traditions, more or less coherently. Those who have attempted to develop a tradition-independent definition of 'calling' (e.g. Wrzesniewski et al., 1997), one that could accommodate both God and serendipity (Dik & Duffy, 2009), or both outer and inner requiredness (Bunderson & Thompson, 2019), are at a greater distance from Bellah et al.'s (1996) conceptualization of 'calling' than they may perceive.
On MacIntyre's account:

There is no standing ground, no place for enquiry, no way to engage in the practices of advancing, evaluating, accepting and rejecting reasoned argument apart from that which is provided by some tradition or other. (MacIntyre, 1988, p. 350)

In light of the notion of tradition-constituted rationality, it is no surprise then that both Dik (2020) and Bunderson and Thompson (2018) write in such different terms about 'calling' for Christian and secular readers. Our participants demonstrate adherence to Biblical rationality by characterizing their decision to work for the charity as the result of divine 'calling,' by their belief in the power of prayer to explain client outcomes, and by the practical verifications of this call that both keep them motivated to endure the difficult aspects of their work and allow them to find deep meaning in work that brings themselves and others closer to Jesus Christ. How do the argument and evidence of this paper, in respect of a divine call and its connection to the experience of workers who find their roles deeply meaningful, differ from the self-understanding exhibited in widely cited secular accounts of 'calling'? There are three main implications. The first is that the radical tension between secular and religious accounts of 'calling' is ineliminable. Whilst we have overwhelming evidence that the work choices of both secular and religious agents are profoundly influenced by the experience of something they both refer to as being 'called,' the meaning of this term and its implications differ widely. In the former case, were I to fail to respond positively to such a call, I might experience psychological distress (Dempsey & Sanders, 2010), but in the latter I would be placing my relation to the divine, the ultimate source of meaning in my life, in jeopardy. Second, scholars who seek to grasp the tradition-based notion of 'work as a calling' must not bend the concept to the point where the integral relationship between a caller and the called is broken. Only when this relationship remains is the tradition-based concept of 'calling' coherent and distinguishable from the expressivist notion. These volunteers' experience of a divine call endures beyond an initial draw to their work as a result of some abstract reference to serendipity, which Dik and Duffy (2009, p. 427) include in their analysis of 'transcendent summons.' It also differs from the notions of 'moral obligation' to others in Dik and Duffy (ibid), Wrzesniewski et al. (2009, p. 3), and Bunderson and Thompson (2019, p. 421) that frequently receive attention in the literature, as indicated by Potts (2019, pp. 28-54). Instead, all four of the volunteers exhibit the perception of a lasting call from God, unless or until God calls them elsewhere. Third, when describing the 'calling' of believers, researchers' terminology needs to be true to the relevant faith tradition if it is to capture its distinctiveness. Terms such as "traditional" or "neoclassical," as opposed to or in addition to "modern" accounts, still mostly speak of the phenomenon of 'transcendent summons' in vague or impersonal terms, such as "destiny or prosocial duty" (Bunderson & Thompson, 2019, p. 429), which is of very little, if any, relevance to the experience of a divine 'calling' from God. None of this is to deny that there are rival traditions within Christianity, whose interpretations of the faith have differed, sometimes to the point of organizational schism.
The Preface to the 1996 edition of Habits of the Heart highlights that "ascetic Protestantism" (Bellah et al., 1996, p. x), the Biblical tradition central to the early history of the United States, is in many ways at odds with the Catholic tradition that subsequently grew in importance. These traditions have their own debates around the concepts of work and 'calling.' For example, whilst the notion that work can be the vehicle through which a Christian 'calling' is lived out is relatively uncontroversial amongst Protestants, including the participants in our research, no such agreement is to be found in the Catholic intellectual tradition. Perhaps the most extreme example of its denial 9 is the twentieth-century Catholic philosopher Josef Pieper, who distinguished the world of wonder from the world of work in which moderns were trapped:

the inhumanity of the total world of work: the final binding of man to the process of production, which is itself understood and proclaimed to be the intrinsically meaningful realization of human existence. (Pieper, 1998 [1948])

This view is as far as can be imagined from the notion of achieving a distinctly Christian 'calling' through work, for which both Dik (2020) and Bunderson and Thompson (2018) have argued when writing for Christian readers. Living traditions comprise precisely these kinds of ongoing internal debates in addition to debates with other traditions (MacIntyre, 1988), in which concepts are both subjects of dispute and resources to be deployed. This paper has argued that 'calling' should be understood as a concept whose definition differs profoundly between adherents of different traditions and that, therefore, the search for a tradition-independent definition, one that has characterized recent debate, is in vain.

Conclusions

This paper has presented arguments and evidence for three theses. First, despite researchers' claims of adherence to the seminal definition of 'calling' by Bellah et al. (1996), their subsequent conceptualizations have differed markedly from the original precisive account. This is especially ironic because these developments towards an expressivist account of 'calling' were precisely those that Bellah et al. (1996) warned against. Using Bellah et al.'s (1996) framework as it was intended can provide conceptual clarity in a way that Wrzesniewski et al.'s (1997) interpretation cannot. Consequently, the second thesis our study proposes is that 'calling' should be considered a relationship, a call and response between the caller and the called, as suggested in Bellah et al.'s (1996) analysis of the Biblical tradition and civic republicanism. Yet this understanding is absent from Wrzesniewski et al.'s (1997) reading and from many who follow it. Even Dik and Duffy's (2012) 'transcendent summons' and Bunderson and Thompson's (2009) 'neoclassical calling', both attempts to integrate modern and divine accounts, neglect that to be 'summoned' requires a summoner, and to have 'gifts' requires a giver. For the volunteers we interviewed, this relationship appeared to be the primary source of their 'calling'; other bases, including the pro-social impact of their work on beneficiaries and the challenge of the work and its autonomy, played a secondary role in their work-based deliberations. Thirdly, Bellah et al.'s (1996) account is precisive because it deployed a notion of tradition that bears striking resemblance to MacIntyre's (1988) tradition-constituted rationality.
The argument that rationalities are themselves constituted by traditions places firm limits on the kinds of ideas and resources that form coherent concepts. A combination of divine and expressivist accounts of 'calling,' such as those that have been developed in the literature, does not meet the standards of coherence internal to distinctly tradition-based accounts of rationality. They must be treated and defined separately. We recognize that these findings have only been supported by our small-scale study and should be supplemented by further empirical research before they can be regarded as anything but suggestive. Studies of the experience of divine 'calling' would remedy the neglect of "spiritual motivations" within this discourse and the literature on business ethics more broadly (Guillen et al., 2015, p. 803), as noted in the Introduction. Regardless of audience, and whether researchers share the faith commitments of research participants, further studies will illuminate the distinctive implications that the experience of divine 'calling' has for the workplace. Informed by MacIntyre's notion of tradition-dependence, we would argue that this work should ensure that participants' accounts are rendered as they are articulated, without diluting their spiritual components. Only when interdisciplinary research stands ready to engage with these spiritual experiences more explicitly, without translating them into secular terms, will a more accurate picture emerge regarding the implications of workers' experience of divine 'calling.' Such persons exhibit a belief in a higher power that they personally relate to in ways that the language of serendipity does not capture. For the volunteers that we studied, just as Dorothy Sayers (1949, p. 54) states in her famous essay, their occupation marks "the full expression of the worker's faculties, the thing in which he [or she] finds spiritual, mental, and bodily satisfaction, and the medium in which he [or she] offers himself [or herself] to God." Without pay or promise of promotion, it is safe to say that, like Sayers, such individuals possess the view that "work is not, primarily, a thing one does to live, but the thing one lives to do" (ibid). In fact, one could go even further, arguing that, for the volunteers we interviewed, the practices they participate in as a part of their 'calling' function not simply as a service to others, but also to God who called them to this work. Our participants' understanding of their work as akin to worship reflects a Calvinist Christian tradition and is evidently deeply meaningful and motivating to these frontline volunteers, in addition to the ways that their work betters them as persons and makes a tangible contribution to the good of others and their communities. We suspect that studies of other individuals who similarly regard their spiritual life as highly important would reflect these results, too. Perhaps, then, in addition to studying similar experiences of a divine call, future studies could benefit from more narrowly examining the connection that may exist between 'work as a calling' and work, as the Calvinist tradition understands it, as a form of worship.

Funding No funding was received for conducting this study.

Conflict of interest The authors have no conflicts of interest to declare that are relevant to the content of this article.

Ethical approval The data featured in this article was collected in keeping with Northumbria University ethical standards, following approval from the institution.
Informed consent was gained from both the organization and the participants.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Temperature Dependence of Structural and Optical Properties of ZnO Nanoparticles Formed by Simple Precipitation Method

ZnO nanoparticles were successfully synthesized by a simple precipitation method. The effect of growth temperature on the structural and optical properties of the resulting nanoparticles was investigated by transmission electron microscopy (TEM), ultraviolet-visible spectroscopy (UV-VIS) and photoluminescence (PL) spectroscopy. TEM images and selected area electron diffraction (SAED) patterns showed that the nanoparticles were polycrystalline ZnO having a hexagonal wurtzite structure. The average size of the nanoparticles increased from 4.72 nm to 7.61 nm as the synthesis temperature was raised from room temperature to 80°C. This result is also consistent with the sizes calculated using the effective-mass approximation model. From the optical absorption data, the band gap energy of the nanoparticles blue-shifted due to quantum confinement.

Introduction

Particles of semiconductor materials at the nanoscale have recently gained much interest due to their unique physical and chemical properties, which differ from those of the bulk materials. Among these materials, ZnO nanoparticles have been consistently studied due to their several possible applications in sensors [1], catalysis [2], photovoltaics [3][4] and piezoelectric transducers [5], among others. ZnO is an n-type semiconductor with a direct wide band gap of 3.37 eV and a large exciton binding energy of about 60 meV at room temperature [6]. It also has a high absorption coefficient, good carrier mobility and flexible synthesis techniques. The properties of nanostructured ZnO depend strongly on its structural morphology and crystal size [7]. Thus, a deep understanding of these properties is necessary for fundamental study and utilization in commercial applications. Various synthesis methods have already been developed for ZnO nanoparticles, such as metal organic chemical vapor deposition (MOCVD) [8], sol-gel [9], flame spray pyrolysis [10], thermal decomposition [11] and precipitation [12]. However, these methods require controlled reaction systems such as high temperature, high-cost materials and complicated equipment [13], except for the precipitation route, which offers a simple and economical method for large-scale production. In this study, a simple precipitation method was utilized to synthesize ZnO nanoparticles. The effect of growth temperature on the structural and optical properties of the resulting nanoparticles was also examined using transmission electron microscopy (TEM), ultraviolet-visible (UV-VIS) spectroscopy and photoluminescence (PL) spectroscopy.

Experimental

Dimethyl sulfoxide (DMSO, (CH3)2SO) and ethanol (CH3CH2OH, 99.5%) were used as solvents; zinc acetate dihydrate (Zn(CH3CO2)2·2H2O) as precursor; and tetramethylammonium hydroxide (N(Me)4OH·5H2O) for hydrolysis. In a typical preparation, 6 mL of 0.55 M N(Me)4OH in ethanol was added dropwise at approximately 0.6 mL/min to 0.1 M Zn(CH3CO2)2·2H2O dissolved in DMSO under constant stirring. The nanoparticles were precipitated and washed twice by centrifugation for 30 min in ethyl acetate to eliminate excess reactants, after which the nanoparticles were redissolved in 3 mL ethanol for storage. The morphology and structure of the ZnO nanoparticles were examined by transmission electron microscopy (TEM, JEM-3100FEF) and selected area electron diffraction (SAED). For optical characterization, the UV-VIS transmittance and photoluminescence (PL) spectra were measured with a JASCO V-530 spectrophotometer and an FP-750 spectrometer, respectively.
Results and discussion Figure 1 shows the TEM images of ZnO nanocrystals formed at different temperatures. At room temperature, monodispersed ZnO nanocrystals with a mean diameter of 4.72 nm were formed, as seen in Figure 1(a). On increasing the temperature to 40°C, the mean diameter of the nanocrystals increased to about 5.24 nm. As can be observed, the particles have started to agglomerate due to the Ostwald ripening process, which could be induced by the increasing supply of thermal energy. At higher temperatures of 60°C and 80°C, larger particles with mean diameters of 6.70 and 7.61 nm developed, respectively, as seen in Figures 1(c) and 1(d). When the growth temperature is increased, the critical particle radius increases as well, resulting in the disappearance of the smaller particles to supplement the growth of the larger ones [14]. Here, the following chemical reactions govern the growth stages of the ZnO nanoparticles: Zn2+ + 2OH− → Zn(OH)2 and Zn(OH)2 → ZnO + H2O, where the intermediate zinc hydroxide is dehydrated into ZnO following crystal growth. Figure 2 shows the UV-VIS absorption spectra of the synthesized ZnO nanocrystals for different temperatures. As the reaction temperature is decreased, the characteristic absorption peaks of the synthesized ZnO nanocrystals blue-shifted from 3.54 to 3.71 eV due to the quantum confinement effect. On the other hand, these energy band gap values can be used to estimate the size variation from the absorption spectra through the effective mass approximation model [15]: E_g* = E_g^Bulk + 2ħ²π²/(M D²) − 1.786 e²/(2π ε ε₀ D) − 0.248 R_y*, where E_g^Bulk = 3.37 eV is the band gap energy of bulk ZnO, M is the effective (reduced) mass of the exciton, ħ is the reduced Planck constant, ε is the relative dielectric constant, ε₀ is the vacuum permittivity, R_y* = 0.04 eV is the effective Rydberg constant, D is the diameter of the nanoparticles, and E_g* is the band gap energy in eV that can be calculated from the excitonic absorption peak using Equation 6 (the photon-energy relation E = hc/λ). The size values obtained using the effective mass model are given in Table 1. The results are in agreement with those measured using TEM, showing the same trend of increasing size as the growth temperature is increased. Figure 3 shows the PL spectra of the ZnO nanoparticles at different temperatures. It has been well established that nanostructured ZnO with small size and many defects demonstrates strong visible emission [17], attributed to defects such as oxygen vacancies (V_O••) and zinc interstitials (Zn_i). Here, only a strong visible emission can be observed for the room-temperature synthesis. In contrast, nanoparticles of large size and crystalline nature exhibit stronger UV emission [16], which is related to the near-band-edge transition of ZnO, specifically the recombination of free excitons. When the temperature is increased to 40°C and 60°C, both UV and visible emissions can be seen due to the development of larger particles in this temperature regime. The defects present decreased as the nanoparticles grew through Ostwald ripening. At a reaction temperature of 80°C, only UV emission was observed, which could indicate increased crystallinity of the nanoparticles. It should be noted that the emission peaks also red-shifted to longer wavelengths, supporting the earlier results of an increase in the diameter of the nanoparticles at higher temperatures.
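To make the size estimation above concrete, here is a minimal Python sketch of a Brus-type effective-mass calculation that inverts the measured band gap to a diameter by bisection. The reduced exciton mass and relative permittivity are commonly quoted ZnO values assumed purely for illustration; they are not taken from this paper, so the numbers below only reproduce the qualitative blue-shift trend.

```python
import numpy as np

# Physical constants (SI)
HBAR = 1.054571817e-34   # J s
ME   = 9.1093837015e-31  # electron mass, kg
E    = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Assumed ZnO parameters (illustrative, not from the paper)
EG_BULK = 3.37           # eV
RY_STAR = 0.04           # eV, effective Rydberg
MU      = 0.17 * ME      # reduced exciton mass (assumed)
EPS_R   = 8.66           # relative permittivity (assumed)

def brus_gap(d_nm):
    """Band gap (eV) of a nanoparticle of diameter d_nm via the Brus model."""
    r = 0.5 * d_nm * 1e-9                                      # radius in m
    confinement = (HBAR**2 * np.pi**2) / (2 * MU * r**2) / E   # J -> eV
    # 1.786 e^2/(4*pi*eps0*eps_r*r), already divided by e to give eV
    coulomb = 1.786 * E / (4 * np.pi * EPS0 * EPS_R * r)
    return EG_BULK + confinement - coulomb - 0.248 * RY_STAR

def diameter_from_gap(eg_star, lo=1.0, hi=20.0):
    """Invert brus_gap by bisection: diameter (nm) giving the observed gap."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        # the gap decreases monotonically with diameter over this bracket
        if brus_gap(mid) > eg_star:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for eg in (3.71, 3.54):   # absorption-peak energies reported in the text
    print(f"E_g* = {eg} eV  ->  D ≈ {diameter_from_gap(eg):.2f} nm")
```

Running it for the two absorption-peak energies reported above returns diameters of a few nanometers that shrink as the gap widens, which is the trend seen in the TEM data.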
Summary We demonstrated a simple precipitation method for the preparation of ZnO nanoparticles using zinc acetate and tetramethylammonium hydroxide as precursors. The particles were hexagonal ZnO with average sizes of 4.72-7.61 nm for increasing reaction temperatures. This trend is consistent with calculations using the effective mass approximation model. From the absorption data, the band gap values blue-shifted at lower temperatures, revealing quantum confinement at smaller diameters. These size-controlled properties make ZnO nanoparticles well suited to various commercial applications. Figure 1: TEM images of ZnO nanocrystals formed at different temperatures. Table 1: Measured and calculated nanoparticle size (a: by UV-VIS; b: by TEM).
1,528.8
2016-01-01T00:00:00.000
[ "Materials Science" ]
Tribological Performance of Silahydrocarbons Used as Steel-Steel Lubricants under Vacuum and Atmospheric Pressure Silahydrocarbons of tetraalkylsilanes with different substituted alkyl groups (named SiCH) were synthesized and evaluated as lubricants for steel-steel contacts on a home-made vacuum four-ball tribometer (VFBT-4000) under atmospheric pressure and under vacuum (5 × 10−4 Pa). The SiCH oils possess better thermal stability, better low-temperature fluidity, and lower saturated vapor pressure than multialkylated cyclopentanes (MACs). The tribological performances of the SiCH oils are also superior to those of MACs and PFPE-Z25 in terms of friction-reduction ability and antiwear capacity under sliding friction in vacuum. The SEM/EDS and XPS results reveal that a boundary lubricating film consisting of (-O-Si-R-) and SiO2 is formed under atmospheric pressure, whereas a (-Si-R-Si) film is formed under vacuum. Introduction The use of space satellites for communication, navigation, and defense is becoming more and more important. The high costs of both constructing and launching satellites are driving the need to extend their service life from the current 5 to 8 years to 10 years and even longer. According to NASA, many mechanical failures in spacecraft were caused by tribological problems [1-6]. Thus, improving the lubrication and antiwear performance of the mechanical systems is key to extending the service life of satellites and spacecraft. Synthetic liquid lubricants have been very commonly used in aerospace equipment for many years, and low vapor pressure, low pour point, good thermal stability, and good lubrication properties (particularly good boundary lubrication performance and low wear rates) are crucial physicochemical and tribological properties that must be ensured for aerospace devices [7,8]. The majority of current aerospace applications use mineral oils, perfluoropolyalkylethers (PFPEs), or synthetic hydrocarbons such as multialkylated cyclopentanes (MACs) and polyalphaolefins (PAO) [9,10]. PFPEs are the most widely used liquid lubricants for aerospace mechanisms but suffer from poor boundary lubrication capability, incompatibility with conventional additives, catalytic degradation, and so forth [11,12]. MACs and PAO are of limited use at low temperature and in vacuum. Recently, a relatively new type of space lubricant (i.e., silahydrocarbons) has been developed. These lubricants contain only silicon, carbon, and hydrogen [8]. Early studies [13-16] have shown that silahydrocarbons have significantly lower melting points than the corresponding hydrocarbons. Silahydrocarbons are superior to mineral oils and synthetic hydrocarbons in both thermal and oxidative stability as well as in viscosity-temperature behaviour. They possess unique friction-reducing and antiwear properties while retaining the ability to solubilize conventional additives, which makes them attractive for the future lubrication requirements of aircraft and aerospace mechanisms, particularly for low-temperature and high-vacuum applications in space.
The aim of this work was to study the physical properties of silahydrocarbons of tetraalkylsilanes with different substituted alkyl groups (named SiCH) and to evaluate the tribological performance of SiCH as liquid lubricants for steel-steel contacts on a vacuum four-ball tribometer under atmospheric pressure and under vacuum. For comparison, the tribological properties of two aerospace lubricants, namely PFPE-Z25 (trade name "Z25") and MACs, were also investigated under the same conditions. Experimental The SiCH oils and MACs were synthesized in the laboratory according to the references [17-19], and the molecular structures of the SiCH oils were characterized by an IFS 66 v/S Fourier transform infrared (FTIR) spectrometer and a 400 MHz Bruker-400 FT nuclear magnetic resonance (NMR) spectrometer. PFPE-Z25 is commercially provided by Fomblin Inc. in the US and is used in the aviation and military industries. The molecular structures of the SiCH oils and MACs are shown in Figure 1; several typical physical properties of the SiCH oils and MACs are shown in Table 1. The kinematic viscosities of the lubricants were measured using an SVM3000 Stabinger viscometer at different temperatures according to the ASTM D445 designation. The saturated vapor pressure of the lubricants was evaluated on a home-made saturated vapor pressure apparatus by the evaporation method. Thermogravimetric analysis (TGA) was performed on a Perkin-Elmer TGA-7 in a nitrogen atmosphere from 20°C to 600°C at a heating rate of 10°C/min. The evaporation weight loss of the lubricants was evaluated in a vacuum oven at approximately 5 × 10−3 Pa and 125°C (the results are shown in Table 2). In order to test the anticorrosion properties of the SiCH oils toward metal substrates in the presence of water, corrosion tests were carried out in an environmental chamber under hot and humid conditions according to the standard method ASTM D130-94. A polished GCr15 bearing steel (SAE 52100) block was used as the substrate, and the testing conditions were as follows: test temperature of 100°C, 80% relative humidity, and duration of 24 h.
The tribological performances of the SiCH oils, MACs, and PFPE-Z25 as lubricants for steel-steel contacts were evaluated on a home-made vacuum four-ball tribometer (VFBT-4000) under atmospheric pressure and under vacuum pressure (5 × 10 −4 Pa).The vacuum four-ball tribometer which was designed and manufactured by the State Key Laboratory of Solid Lubrication, Lanzhou Institute of Chemical Physics, Chinese Academy of Sciences, based on the configuration of a traditional four-ball tribometer, was employed.As shown schematically in Figure 2, the vacuum chamber was evacuated by using a series of a turbomolecular pump and a mechanical pump.The tribological characteristics of liquid lubricants for aerospace applications were evaluated by this tribometer under the pressures of 5 × 10 −4 Pa.It can also be run at lower vacuum (about 10 Pa) and at atmospheric pressure with air or nitrogen.All tribological performance tests were performed under the load of 392 N with a rotating speed of 1450 rpm at 25 ∘ C for 30 min.The steel balls (diameter 12.7 mm, hardness HRC 59 to 61) were made of GCr15 bearing steel (SAE 52100).Before and after each test, test specimens were ultrasonically cleaned in petroleum ether (normal alkane with a boiling point of 60∼90 ∘ C).For each sample, three tests were conducted to minimize data scattering.At the end of each test, the wear scar diameters of the three lower balls were measured with an optical microscope with an accuracy of 0.01 mm, and then the average wear scar diameter of the three identical tests was calculated as the wear scar diameter (WSD) in this paper.The friction coefficients were recorded automatically with a computer equipped with a four-ball tribometer. Experiments using a scanning electron microscope with a Kevex energy dispersive X-ray analyzer attachment (SEM/EDS) and an X-ray photoelectron spectrometer (XPS) were conducted to examine the morphology and chemical composition of the wear scars and the possible tribochemical changes involved in the sliding process.The SEM/EDS analysis was performed on a JSM-5600LV SEM.The XPS analysis was carried out on a PHI-5702 multifunctional Xray photoelectron spectroscope, with Al-K radiation as the exciting source.The binding energies of the target elements were determined at a pass energy of 29.35 eV with resolution of approximately ±0.3 eV, with the binding energy of carbon (C1s: 284.8 eV) as the reference. Physical Properties of Silahydrocarbon Lubricants. The molecular structures of SiCH are shown in Figure 1 and are analyzed by infrared spectroscopy (IR) and proton nuclear magnetic resonance ( 1 H-NMR).It can be seen that the molecular structure of SiCH resembles branched macromolecules (tree-like structure); the substituted alkyl (-R) in the molecular structures was selected as hexyl(-C 6 H 13 ), octyl(-C 8 H 17 ), and decyl(-C 10 H 21 ), named as SiCH-1, SiCH-2, and SiCH-3, respectively. 
As an example, the findings for SiCH-3 are shown as follows [IR (KBr film), ν_max/cm−1, Figure 3(a)]: C-H stretching vibration bands appear at 2956 cm−1, 2921 cm−1, and 2853 cm−1; C-H bending modes appear at 1466 cm−1 and 1378 cm−1; the band at 720 cm−1 is the -(CH2)7- rocking vibration; and the Si-C stretching bands are also observed. Table 1 shows several physical properties of the SiCH, MACs, and PFPE-Z25 oils. It can be seen that the three kinds of SiCH oils have lower saturated vapor pressure than MACs, but higher than that of the PFPE-Z25 oil. The viscosity indices (VI) of the three SiCH oils are similar. The kinematic viscosity increases with the carbon number of the substituted alkyl groups within the SiCH molecular structures; the viscosity of SiCH-1 is the smallest among the three SiCH oils at −20°C, and the SiCH oils are also superior to MACs in this respect. Table 1 reveals that the three SiCH oils also exhibit better low-temperature fluidity, with pour points lower than −55°C. By contrast, MACs could not flow below −55°C. These results indicate that the SiCH synthetic oils can be used over a wide temperature range and exhibit especially good fluidity under low-temperature conditions. Table 1 also presents the thermal stability results for these oils. SiCH-3 exhibits the highest thermal decomposition temperature among all the liquid lubricants in this study. Moreover, the thermal stability of the SiCH oils increases with an increase in the chain length of the substituted alkyl within the molecular structures. Meanwhile, the evaporation weight losses of these oils at 125°C over 24 hours under vacuum (nearly 5 × 10−3 Pa) are listed in Table 2. The vacuum evaporation weight loss of SiCH-3 is the smallest among the oils in this study, close to that of the PFPE-Z25 oil; this also indicates that a lubricant with a longer chain-length structure shows lower volatility. The results show that all of the SiCH oils are superior to the synthetic MACs oil in evaporation weight loss, which is attributed to the tree-like molecular structure of SiCH. The corrosion results of SiCH-3 with steel were selected to describe the anticorrosion property in Figure 4 because the SiCH oils possess similar anticorrosion properties. It can be seen that there is no corrosion on the bearing steel with these oils under the hot and humid conditions, and no corrosion could be produced during the friction process. Tribological Performance. Figures 5 and 6 present the curves of friction coefficient (Figures 5(a) and 5(b)) and the average friction coefficient (Figure 6) lubricated with the SiCH oils, MACs, and PFPE-Z25 on the vacuum four-ball tribometer under atmospheric pressure and under vacuum, respectively. The wear scar diameters for these lubricants are shown in Figure 7.
It can be seen from Figures 5(a) and 6 that MACs show the steadiest friction curves and the lowest average friction coefficient (approximately 0.08) compared with the SiCH oils and PFPE-Z25 under atmospheric pressure. It is also seen that SiCH oils with different substituted alkyl chains exhibit different tribological behavior. The friction coefficient decreases with increasing carbon number in the alkyl chain under this test condition. The friction coefficient curves of the SiCH oils are all unstable and show transient seizure-like high friction [20-22], and transient dry friction or abrasive wear occurred on the worn surfaces. This result may be explained by the formation of some compounds, for example SiOx, on the steel surfaces lubricated by the SiCH oils during the high-friction periods, which facilitates the return to a low-friction period. Meanwhile, comparing the antiwear performances of these lubricants under atmospheric pressure, as seen in Figure 7, MACs exhibit the smallest WSD. In addition, the antiwear property of SiCH-3 is superior to that of PFPE-Z25, and the WSD obtained with SiCH-3 is the smallest among the three SiCH oils. This finding indicates that the tribological properties of the SiCH oils would improve further with an increase in the chain length of the substituted alkyl, should this tendency continue. The friction coefficient curves of these lubricants under vacuum are shown in Figure 5(b). SiCH-3 exhibits the steadiest friction coefficient curve and did not show transient seizure-like high friction during the entire sliding process under vacuum. The MACs show higher average friction under vacuum than under atmospheric pressure, which is contrary to the results for the other lubricants. It can be seen from Figure 7 that SiCH-3 also shows the smallest wear scar diameter as a lubricant for steel-steel contacts among the three SiCH oils, benefiting from the longer chain in its molecular structure, and indicating that the antiwear property of SiCH-3 is superior to those of PFPE-Z25 and MACs under vacuum. It is noticeable that the average friction coefficient and WSD of MACs are the largest under vacuum, which is contrary to the results obtained under atmospheric pressure. The tribochemical reactions of the SiCH oils with the steel surface differ between atmospheric pressure and vacuum; thus, different boundary films might be formed during sliding friction under the different conditions. Surface Analysis.
Since the SEM/EDS and XPS analyses of the worn scars lubricated by SiCH oils with different substituted alkyl groups show similar results, we focus here on the results for SiCH-3. Figure 8 shows typical SEM images and the elemental distributions of silicon on the wear scar surfaces lubricated by SiCH-3 under atmospheric pressure and under vacuum, respectively. The wear scar lubricated by SiCH-3 under vacuum (Figure 8(d)) is smaller and the worn surface is smooth, with only mild scuffing. However, a wider wear scar is shown in Figure 8(a) after lubrication by SiCH-3 under atmospheric pressure, with serious adhesive wear (Figure 8(b)). The wear scar morphologies on the worn surfaces (Figures 8(b) and 8(e)) are consistent with the elemental distributions of silicon (Figures 8(c) and 8(f)), implying that silicon is enriched on the wear track and that the concentration of silicon is clearly higher on the worn surface lubricated under the vacuum friction condition than under the atmospheric friction condition. The SEM/EDS analysis thus shows that the elemental distribution of silicon on the worn surface lubricated under vacuum differs from that under atmospheric conditions, which explains the different tribological performances of the oils under atmospheric pressure and under vacuum. XPS analysis was used to further clarify the chemical states of the typical elements on the wear scar surfaces lubricated by the SiCH oils. The XPS results indicate that complicated tribochemical reactions occurred during friction. Figure 9 shows the XPS spectra of the typical elements O and Si on the worn surfaces lubricated by SiCH-3 under atmospheric pressure and under vacuum. Under atmospheric pressure, the binding energy of Si2p appears at 102.3 eV which, combined with the binding energy of O1s at 531.3 eV, corresponds to the compound (-O-Si-R-) [23]. Moreover, the Si2p peak at 103.1 eV is ascribed to SiO2. When lubricating with SiCH oil under atmospheric conditions, much of the oxygen dissolved in the oil participated in the tribochemical reaction to produce oxidative compounds. However, the main binding energy of Si2p under vacuum (101.7 eV) differs from that under atmospheric pressure (102.3 eV) and is characteristic of the compound (-Si-R-Si), formed in the absence of oxygen. Abrasive wear is observed on the worn surface under vacuum during the friction process, as can be seen from the SEM images (Figures 8(d) and 8(e)). There is little flaking on the worn track, showing that SiCH provides excellent lubricating properties under vacuum. On the basis of the above results, different tribochemical reactions occurred on the worn surfaces lubricated by the SiCH oils when the sliding friction experiments were carried out under different conditions. A boundary film consisting of the compound (-O-Si-R-) and SiO2 is formed by tribochemical reaction and serious adhesive wear under atmospheric pressure, whereas the compound (-Si-R-Si) is formed on the worn surface by abrasive wear under vacuum during sliding friction with the SiCH oils. Conclusions Sliding friction experiments with the SiCH synthetic oils as lubricants for steel-steel contacts were carried out on a vacuum four-ball tribometer under atmospheric and vacuum pressures. Based on the above experimental results, the following conclusions are drawn.
(i) The SiCH synthetic oils were prepared and possess good thermal stability, good low-temperature fluidity, and low saturated vapor pressure, indicating that they are superior to MACs as aerospace lubricants. (ii) The decyl-substituted SiCH (SiCH-3) shows the best tribological behaviour among the three SiCH synthetic oils for steel-steel contacts under vacuum and is superior to MACs and PFPE-Z25 in terms of friction-reduction ability and antiwear capacity. (iii) The SEM/EDS and XPS results reveal that the tribochemical reactions of the SiCH oils with the steel surface differ between atmospheric pressure and vacuum during sliding friction. The boundary lubricating film consisting of the compound (-O-Si-R-) and SiO2 is formed by tribochemical reaction and serious adhesive wear under atmospheric pressure, whereas the compound (-Si-R-Si) is formed on the worn surface by abrasive wear under vacuum. Figure 1: Molecular structures of the prepared SiCH oils. Figure 7: Wear scar diameters of SiCH oils for steel-steel contacts under atmospheric pressure and under vacuum (392 N, 30 min, 1450 rpm, four-ball tribometer). Table 1: Physical properties of these liquid lubricants.
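As a small illustration of the XPS bookkeeping in the surface analysis above, the following toy Python sketch matches a measured Si2p peak to the nearest reference species. The reference binding energies are the ones quoted in the text, while the nearest-neighbor matching scheme and its tolerance are illustrative assumptions, not the authors' procedure.

```python
# Si2p reference binding energies (eV) quoted in the text
REFERENCES = {102.3: "(-O-Si-R-)", 103.1: "SiO2", 101.7: "(-Si-R-Si)"}

def assign_si2p(peak_ev, tol=0.3):
    """Match a measured Si2p peak to the nearest reference within tol (eV).

    tol=0.3 eV mirrors the stated instrumental resolution; the matching
    scheme itself is an illustrative assumption.
    """
    best = min(REFERENCES, key=lambda ref: abs(ref - peak_ev))
    return REFERENCES[best] if abs(best - peak_ev) <= tol else "unassigned"

for peak in (102.3, 101.7, 99.0):   # the last value is deliberately off-list
    print(peak, "->", assign_si2p(peak))
```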
3,915.8
2014-01-01T00:00:00.000
[ "Materials Science" ]
Exploring finite-size effects in strongly correlated systems Complexities greatly limit any study of strongly correlated systems to a small number of particles. Thus, any attempt at understanding infinite systems such as those arising from neutron matter (NM) must consider finite-size (FS) effects at play when below the thermodynamic limit (TL). In these conference proceedings we provide some examples of FS effects at work and discuss our prescription for extrapolating the physics of extended systems. We present our methodology and calculations performed for an assortment of strongly correlated (SC) systems. Ab initio, non-perturbative Quantum Monte Carlo (QMC) methods can be employed to accurately compute ground-state energies and finite-temperature properties. We apply these to periodically modulated NM and use our results to constrain phenomenological theories of nuclei and study the static response of NM. Introduction Strongly interacting systems in quantum mechanics are those in which the interaction energy among constituents is significant, i.e. comparable to the kinetic energy. These are much harder to understand than systems with weak interactions. The latter can be well approximated by free particles in an external potential and thoroughly analyzed using well-established methods such as perturbation theory. Unfortunately, correlations in strongly interacting systems are complicated and cannot be tackled using traditional techniques that work for weakly interacting systems. Investigations are generally limited to few particles because of the complexities introduced by the interactions. Thus it is impossible to directly compute macroscopic properties of such systems, even with modern computational technology. Nevertheless, understanding the relationship between finite and thermodynamic limit (TL) physics may allow an extrapolation to such scales. The exploration of finite-size (FS) effects in strongly correlated (SC) systems is therefore highly important and very rewarding. There is no shortage of SC systems in low-temperature physics. They appear in nuclear, condensed matter, solid-state, and cold-atom physics [1]. Many superconductors are SC [2]. Neutron matter (NM) is a SC system as well [3,4]. The nuclear force is strong and especially complicated. There are central, spin, and tensor terms, to name a few. Neutrons do not form bound systems by themselves, so they are easier to study than combinations of protons and neutrons. They are also relevant to observed physics: they appear in large quantities in neutron stars and are the main constituents of neutron-rich nuclei [3]. The physics of NM is consequential to neutron-star structure. Its importance is recognized and much research continues to investigate and improve our understanding of its equation of state (EOS) [5-10]. We present our contribution here on the response of NM to an external sinusoidal potential [11,12]. Not only does this provide feedback on the nature of pure NM
through the static response functions, it is also a better comparison to naturally occurring physics than homogeneous NM. The density of neutrons is not constant in most neutron-rich systems. This inhomogeneity arises in neutron-star crusts because the matter contains a lattice of nuclei that perturbs the unbound neutrons. Nuclei are inhomogeneous because the density of nucleons decreases towards the edge of the nucleus. The connection to nuclei from NM is made through various energy density functionals (EDFs) that are fit to masses and radii of nuclei [13-16]. The NM EOS can be used as a constraint as well [14,17-22]. We use our NM results as an input constraint for EDFs of the Skyrme type [11,12]. Calculations for SC systems have greatly benefited from modern advancements in computation. Today's supercomputers allow for simulations of sizable numbers of particles using methods that are derived from first principles. The non-perturbative family of Quantum Monte Carlo (QMC) methods is excellent for tackling a wide variety of complicated problems. We apply Auxiliary Field Diffusion Monte Carlo (AFDMC) to calculate ground-state energies for our neutron system. This is a projector method that extracts the ground state from a trial wave function and is especially suited for handling complicated spin correlations arising from the nuclear interaction. Of course these simulations are largely limited by particle number. In what follows we discuss the consequences of FS on quantum systems and apply our observations to NM. Finite-size effects for non-interacting fermions 2.1 The free-Fermi gas The natural starting point for understanding FS effects is to consider non-interacting particles and explore FS in a free gas of fermions. We confine N particles to a cubic box of side length L and impose periodic boundary conditions on the wave function. This respects translational invariance, as one can imagine identically occupied cubes filling up all of space for an extended system in the TL. The energy eigenstates of a free-Fermi gas are plane waves with wave vectors k = (2π/L)(n_x, n_y, n_z), where n_x, n_y, n_z are integers. At T = 0 the lowest available energy levels are occupied. The energy is given by E = ħ²k²/2m. Given spin, there is an additional degeneracy for each spin-projection state. The macroscopic system/TL can be envisioned by iteratively adding particles at a fixed density, increasing the box length L accordingly. In this limit intensive properties like the energy per particle converge to a macroscopic value. It can be shown analytically that the energy per particle converges to E_FG = (3/5)E_F, where E_F is the Fermi energy. This is shown in Fig. 1, where the ratio of the energy per particle to E_FG converges to 1 as N goes to infinity. Any difference between a finite system and the TL is called a FS effect. The FS effects in Fig. 1 are quantified as |E_FG(N)/E_FG − 1| and are larger at smaller particle numbers. The cusps in the inset occur near closed shells, where all occupied energy levels are filled to capacity. The wave function is ambiguous whenever an occupied energy level is not completely filled, because only a subset of the degenerate states is occupied and this subset is not unique. For this reason it is usually preferred to study closed shells. Closed-shell configurations for the free-Fermi gas occur at N = {2, 14, 38, 54, 66, 114, ...}. The FS effects in Fig. 1 appear small near the N = 66 closed shell. This motivated us to choose 66 particles in our NM study.
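A minimal Python sketch of the shell-filling construction just described (not code from the proceedings), in units with ħ²/2m = 1: plane-wave levels k = (2π/L)n are filled from the bottom with twofold spin degeneracy, and E/N is compared with E_FG at fixed density. The closed-shell numbers {2, 14, 38, 54, 66, 114} emerge from the degeneracy counting, and the same routine supplies the non-interacting energies E_NI(N) and E_NI(∞) that enter the extrapolation prescription discussed next.

```python
import numpy as np

def free_gas_energy_ratio(n_particles, density=0.05, nmax=8):
    """E/N over E_FG for N spin-1/2 free fermions in a periodic box (hbar^2/2m = 1)."""
    L = (n_particles / density) ** (1.0 / 3.0)
    # all integer wave-vector triples up to a cutoff, sorted by |n|^2
    grid = np.arange(-nmax, nmax + 1)
    n2 = sorted(nx**2 + ny**2 + nz**2 for nx in grid for ny in grid for nz in grid)
    # occupy the lowest levels, two spin projections per k-state
    occ = np.repeat(n2, 2)[:n_particles]
    e_per_particle = (2 * np.pi / L) ** 2 * occ.sum() / n_particles
    e_fg = 0.6 * (3 * np.pi**2 * density) ** (2.0 / 3.0)  # TL value, (3/5)E_F
    return e_per_particle / e_fg

for N in (2, 14, 38, 54, 66, 114):   # closed shells of the free-Fermi gas
    print(N, round(free_gas_energy_ratio(N), 4))
```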
It is also interesting to explore the energy cost associated with adding an extra particle. At large N we expect E(N+1) − E(N) ≈ E_FG. Fig. 2 displays this convergence at a density of 0.05 fm−3. Note that increasing N implies increasing the box length L. Wave-vector magnitudes decrease with increasing L because k-space is reciprocal to physical space. This implies that the energy of the original N particles decreases as extra particles are added at fixed density. Since the energy of an additional particle is ≈ E_F when N is large, the implied change in the N-particle energy is negative, ≈ E_FG − E_F = −(2/5)E_F. Calculations for a non-interacting gas are trivial compared to a strongly interacting gas. Energies can be computed for thousands of particles. This allows us to determine TL properties of non-interacting particles in a one-body potential. Our prescription for extrapolating an interacting system to the TL uses the TL energy of a non-interacting gas: E_TL = E_I(N) + [E_NI(∞) − E_NI(N)], (1) where E_I and E_NI are the energies per particle for the interacting and non-interacting systems, respectively. Eq. 1 extrapolates to the TL by adding a non-interacting correction to the energy of the interacting particles. We applied Eq. 1 to local density approximation (LDA) calculations of homogeneous neutron matter. These are calculations that use EDFs to compute the ground-state energy. The SLy4 parametrization of the Skyrme EDF was used [14]. The energies per particle of 66 neutrons at various densities are listed in Table 1 (Table 1: SLy4 energy per particle for 66 neutrons in a box at various densities; the second column contains the extrapolated results computed using Eq. 1; the final column has the known SLy4 energies in the TL [12]). They agree to within 0.5% with the known SLy4 energies (final column). In related recent work, we have also extended the study of FS effects to the case of finite temperature [23]. There, assuming classical statistics and quantum mechanics, the energy is computed by averaging the energies over a Maxwell-Boltzmann distribution. This was carried out at a constant density of nσ³ = 0.2063 for several particle numbers (σ is here just a length scale, which would correspond to the hard-sphere diameter if interactions were turned on). By taking the derivative of E/N with respect to temperature, we have calculated the specific heat for this system, Fig. 3. This shows a convergence to the equipartition-theorem value (i.e., 3/2) at increasingly lower T as N gets larger. For small particle numbers N, there is a "spike", followed by a rapid decrease; this goes away as N gets closer to the thermodynamic limit [23]. Periodic modulation and static response Turning back to the zero-temperature problem, Eq. 1 applies to the general case of a non-interacting gas in an external potential, where v_ext = Σ_i v(r_i). Although analytical solutions do not necessarily exist, numerical calculations of the energy are easy to implement. We consider the periodic one-body potential v(r_i) = 2v_q cos(q · r_i). This potential is also used when we study NM. For finite N we enforce that a whole number of periods of v spans the box length L, in order to respect translational invariance. Consequently, calculations of the energy at constant density are limited to values of N that respect this constraint. This can be seen in Fig. 4, where the energy per particle at a density of 0.1 fm−3 is plotted only at discrete values of N.
The one-body potential is set to 2v_q = 0.5E_F with q = 1.4433 fm−1, and E_NI(∞) is taken to be approximately E_NI(66000) for our calculations. The FS effects are once again seen to die down to 0 as N approaches infinity. What is nice about the cosine potential is that one can easily extract the linear static response function from the energy versus the potential strength v_q. Note that the response function is a property of the unmodulated gas. It quantifies how the system responds to small perturbations. An analytical solution in the TL exists for the non-interacting Fermi gas and is given by the Lindhard function [24] (solid line in Fig. 5). We calculated the response function for 66 and 66000 particles at a density of 0.1 fm−3. FS is evident for 66 particles, where there is a suppression (squares) compared to the Lindhard function. The 66000-particle response (circles) shows convergence to the TL. Quantum Monte Carlo QMC methods are a stochastic approach to solving the many-body Schrödinger equation. Most of the algorithms produce exact results for many-body properties. The major exception is for fermions, where the fermion-sign problem complicates things significantly. There exist continuous and lattice formulations of the method, as well as both zero- and finite-temperature applications. QMC is very good for studying SC systems up to a decent number of particles. It has been very successful in nuclear physics [25-27]. The number of particles is limited by the computational complexity associated with scaling, which is the reason why FS effects are important. Auxiliary Field Diffusion Monte Carlo AFDMC is an extension of DMC specialized for handling complicated spinor interactions [28]. The central idea behind the method is to stochastically evaluate the evolution of a trial wave function in imaginary time τ: |Ψ_0⟩ = lim_{τ→∞} e^{−Hτ} |Ψ_T⟩. The excited states decay in this evolution, leaving only the ground-state wave function at infinite imaginary time. The ground-state energy is extracted from the statistical data produced in the evolution. Whereas DMC only handles the coordinate degrees of freedom in this fashion, AFDMC treats the spinors stochastically as well. AFDMC is suitable for the nuclear interaction, which contains complicated spin dependence. We use AFDMC to calculate ground-state energies of 66 neutrons. The Hamiltonian contains both two-nucleon (NN) and three-nucleon (NNN) interactions, H = −(ħ²/2m) Σ_i ∇_i² + Σ_{i<j} v_ij + Σ_{i<j<k} V_ijk + v_ext, where v_ext is a sum of one-body potentials: v(r_i) = 2v_q cos(q · r_i). The nuclear interaction is strong but also very short ranged (∼1 fm). This lessens the FS effects in comparison to a longer-ranged interaction. The particles are placed in a box with periodic boundary conditions (just like the non-interacting problem). Furthermore, whenever the evolution produces a coordinate lying outside of the box, the particle reappears at the opposite side of the box. This respects translational invariance and the idea that there are identical boxes filling up all of space. Similarly, when evaluating the interaction part of the Hamiltonian one can consider the interaction between particles in the box and the nearby boxes surrounding it. This helps to mitigate FS effects. Next we look at some results from this method.
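Before turning to those results, for reference, the static Lindhard function mentioned above has a standard closed form; the sketch below uses units with ħ = m = 1 and one common density-of-states prefactor convention, which should be treated as an assumption since normalizations differ between references.

```python
import numpy as np

def lindhard_static(q, kf, m=1.0, hbar=1.0):
    """Static Lindhard response of the free Fermi gas (per unit volume).

    Prefactor conventions differ between references; this uses
    N(0) = m*kf/(hbar^2 * pi^2) as the Fermi-surface density of states.
    """
    x = q / (2.0 * kf)
    n0 = m * kf / (hbar**2 * np.pi**2)
    f = 0.5 + (1.0 - x**2) / (4.0 * x) * np.log(np.abs((1.0 + x) / (1.0 - x)))
    return -n0 * f

rho = 0.1                                   # fm^-3, density used in the text
kf = (3.0 * np.pi**2 * rho) ** (1.0 / 3.0)  # Fermi momentum
for q in (0.5 * kf, kf, 1.5 * kf):
    print(f"q/kF = {q/kf:.1f}  chi0 = {lindhard_static(q, kf):.4f}")
```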
Equation of State We ran simulations for a range of densities and external potential strengths 2v_q. The Argonne v8' (AV8') and Urbana IX (UIX) potentials were used for the NN and NNN interactions, respectively [27,29]. This was done for various periodicities of the external potential as well. The energy per particle always falls with increasing potential strength. This makes sense because the particles collect at the troughs of the cosine, where the potential is negative. This is visible in Fig. 6 (circles), which depicts AFDMC energies and potential strengths for 66 particles at a density of 0.1 fm−3 with q = 4π/L (i.e. two periods inside the box). The change in energy with respect to increasing v_q is what we refer to as the static response. Applying Eq. 1 to these results gives the response for neutron matter. The solid line shows energies calculated using SLy4 in the LDA. These results are more closely related to nucleic systems since EDFs are fit to them. The dashed line gives a modified SLy4 where we tuned the isovector term in the EDF to match the energy changes from AFDMC. Conclusion We have taken a look at the impact that FS has on quantum mechanical systems. The need to understand these effects was stressed for SC systems, where N is limited by the complexity of the interaction. FS effects in the non-interacting free and periodically modulated gas were investigated, motivating the study of 66 neutrons. It was shown that the much more trivial non-interacting energies provide a FS fix for interacting systems. We then provided an overview of QMC methods, which are ideal for studying SC physics. We applied AFDMC to calculations of periodically modulated neutron matter. The results are related to the physics of neutron-star crusts as well as neutron-rich nuclei through the EDF approach. Figure 1. Convergence in energy to the TL for a non-interacting free-Fermi gas. The energy is plotted in units of E_FG, which is the energy per particle at the TL. Re-plotted and inset using a linear scale for N [12]. Figure 2. Energy difference from adding a single particle to N particles at a fixed density of 0.05 fm−3. Figure 4. Convergence to the TL for a non-interacting gas in a one-body periodic potential v(r_i) = 2v_q cos(q · r_i). The amplitude is 2v_q = 0.5E_F and the periodicity is set so that two periods span the box at the smallest N point. The density is constant at 0.1 fm−3 [12]. Figure 5. Response functions for the case of the non-interacting Fermi gas (the density is 0.1 fm−3). Shown are the cases of 66 and 66000 particles, as well as the response in the TL [12]. Figure 6. 66-neutron energy per particle versus one-body potential strength with two periods of the cosine potential in the box. Circles are AFDMC results with NN+NNN interactions. The solid line depicts SLy4 results from the LDA. The dashed line is SLy4 constrained by AFDMC results. The density is 0.10 fm−3 [12].
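Finally, extracting the static response from energy-versus-amplitude runs of the kind shown in Fig. 6 reduces to a small quadratic fit, since second-order perturbation theory gives δ(E/N) ∝ v_q² for weak modulation. The data points and the prefactor convention in this sketch are assumptions for illustration, not AFDMC output.

```python
import numpy as np

# Synthetic (v_q, E/N) points standing in for energy runs at fixed q
vq = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # MeV
e  = np.array([10.9, 10.7, 10.1, 9.2, 7.9])  # MeV per particle (made up)

# Weak-modulation expansion: E(v_q)/N ≈ E(0)/N + chi(q) * v_q**2,
# so the static response is read off the quadratic coefficient
# (sign/prefactor conventions vary between references).
coeffs = np.polyfit(vq, e, deg=2)
chi = coeffs[0]
print(f"chi(q) ≈ {chi:.3f} MeV^-1")
```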
3,747.8
2018-08-01T00:00:00.000
[ "Physics" ]
Beyond Digital Twins: Phygital Twins for Neuroergonomics in Human-Robot Interaction Among the most recent enabling technologies, Digital Twins (DTs) emerge as data-intensive network-based computing solutions in multiple domains—from Industry 4.0 to Connected Health (Pires et al., 2019; Bagaria et al., 2020; Juarez et al., 2021; Phanden et al., 2021). A DT works as a virtual system for replicating, monitoring, predicting, and improving the processes and the features of a physical system—the Physical Twin (PT), connected in real-time with its DT (Grieves and Vickers, 2017; Kaur et al., 2020; Mourtzis et al., 2021; Volkov et al., 2021). Such a technology, based on advances in fields like the Internet of Things (IoT) and machine learning (Kaur et al., 2020), proposes novel ways to face the issues of complex systems as in Human-Robot Interaction (HRI) (Pairet et al., 2019) domains. This position paper aims at proposing a physical-digital twinning approach to improve the understanding and the management of the PT in contexts of HRI according to the interdisciplinary perspective of neuroergonomics (Parasuraman, 2003; Frederic et al., 2020). INTRODUCTION Among the most recent enabling technologies, Digital Twins (DTs) emerge as data-intensive network-based computing solutions in multiple domains-from Industry 4.0 to Connected Health (Pires et al., 2019;Bagaria et al., 2020;Juarez et al., 2021;Phanden et al., 2021). A DT works as a virtual system for replicating, monitoring, predicting, and improving the processes and the features of a physical system-the Physical Twin (PT), connected in real-time with its DT (Grieves and Vickers, 2017;Kaur et al., 2020;Mourtzis et al., 2021;Volkov et al., 2021). Such a technology, based on advances in fields like the Internet of Things (IoT) and machine learning (Kaur et al., 2020), proposes novel ways to face the issues of complex systems as in Human-Robot Interaction (HRI) (Pairet et al., 2019) domains. This position paper aims at proposing a physical-digital twinning approach to improve the understanding and the management of the PT in contexts of HRI according to the interdisciplinary perspective of neuroergonomics (Parasuraman, 2003;Frederic et al., 2020). APPROACHING AND ADOPTING DIGITAL TWINS The DT definition is still an object of debate, and reaching one could be a necessary step for efficiently managing its technical requirements in terms of computing and connectivity (Shafto et al., 2012;Haag and Anderl, 2018;Jones et al., 2020;Kuehner et al., 2021;Singh et al., 2021;Botín-Sanabria et al., 2022;Wang D. et al., 2022). However, we can ignite our discussion by considering how Fuller et al. (2020) highlighted that a DT is not just a digital model or an offline simulation of a physical object. Nor does a DT correspond to a digital shadow, depicting the real-time states and changes of a PT that can just be manually modified. The changes in a DT automatically mirror and affect the status of its PT: the data flows bi-directionally ( Van der Valk et al., 2020) and in real time between twins in digital and physical worlds, possibly without any human intervention through the DT-driven control of an actuated PT. However, a DT is typically "played" by experts like managers, engineers, and designers as a complex interactive simulation to predict future issues in the PT according to its past and current behavior (Semeraro et al., 2021). 
This leads to new policies as feedback to the real system, even with the assistance of artificial intelligence layers (Umeda et al., 2019;Gichane et al., 2020). Considering their functions (Khan et al., 2022) each DT can focus on (i) monitoring a PT, (ii) simulating the future states of a PT, (iii) directly interacting-as an "operational DT"-with a cyber-physical system as PT. In particular, literature in robotics offers interesting solutions of intuitive extended reality interfaces (Alfrink and Rossmann, 2019;Burghardt et al., 2020) to ease the interaction of an expert with a DT. In the next section, we propose that such an approach can be further enhanced by emulating certain PT components through a DT and others through a physical replica of the robotic system. PHYGITAL TWINS IN HUMAN-ROBOT INTERACTION Performing holistic, physical, and reality-based interaction with a robotic system is more intuitive for the user than contactless gestures to program or command the device and change its state to accomplish a task (Jacob et al., 2008;Heun et al., 2013;Blackler et al., 2019;Ravichandar et al., 2020). Following this reasoning, we decided to highlight the opportunity of emulating a PT through what we labeled as a "Phygital Twin." This term has already been used by Sarangi et al. (2018) to describe an IoT setup designed to collect data and represent an environment (even through portable devices) to assist a farmer in precision agriculture paradigms. However, we envisioned the usage of this label for a wider class of solutions by pondering the meaning of the "phygital" attribute outside the domain of twinning processes. As a neologism (merging two words: physical and digital), this attribute has been typically adopted across various domains like design and marketing, blending real and virtual dimensions as in its etymology (Gaggioli, 2017;Mikheev et al., 2021). This term was used, for instance, to define Tactile User Interfaces (TUIs) like the "phygital map" in Nakazawa and Tokuda (2007), the paradigms of "phygital play" (Lupetti et al., 2015) in mixed reality-based robotic games (MRRGs) (Prattico and Lamberti, 2020), and interactive solutions for work and education proposed during the COVID-19 pandemic (Chaturvedi et al., 2021;Burova et al., 2022). Overall, these are just examples in a general virtual-real convergence trend (Tao and Zhang, 2017), like cyber-physical twins (Czwick and Anderl, 2020). This trend occurs in healthcare too (Gregory, 2022) about managing chronic conditions and predicting their progress or the therapeutic outcome (Voigt et al., 2021;Barresi et al., 2022). Furthermore, we must highlight how intrinsically phygital are the recent definitions of the metaverse, a digital world embracing cyber-physical systems and also DTs in its connection with the real world (Yoon et al., 2021). Exploiting the phygital approach we foresee a Phygital Twin (PDT, highlighting both its physical and digital elements) as in the example in Figure 1. Within a PDT, certain components of the PT are replicated by digital objects and others by physical objects within an integrated extended reality model. These physical objects would be secondary instances of the same products (not necessarily a robot) in the PT. In Figure 1, an example of the human-exoskeleton system in a real context is the PT emulated by a DT (in green, on the left), based on a fully virtual model of the HRI system. 
On the other hand, the same PT can be represented (on the right) by a PDT, based on a virtual human "wearing" a real exoskeleton (identical to the one in the real-world context and, possibly, sustained by a mannequin) in a laboratory. Both settings, visualized by an expert through a mixed reality headset, enable the live visualization of anomalies in the right shoulder of the worker in this example. Different from the case of the fully virtual model on the left, the expert on the right can decide to alter the phygital model through intuitive physical interactions with the lab exoskeleton (working as a TUI), performing tests according to past and current data from the PT. Indeed, the expert receives visual feedback from the DT and more intuitive visuotactile feedback from the PDT. After obtaining the informed consent of the worker in the PT system, the experts can also update the remote wearable robot software according to their predictions. Thus, PDTs enable intuitive phygital interactions with experts to assess and improve the PT. Furthermore, the physical components of a PDT can emulate those of the PT more reliably than a virtual simulacrum because they are based on the same products. The PDT computer-generated elements may also be visualized through a virtual reality headset instead of a mixed reality one, depending on the need to depict the PT context as a whole. However, focusing further on the virtual human component can also be greatly advantageous to deepen our knowledge of the user's conditions, especially in terms of neuromotor and neurocognitive processes, as the next section will propose. NEUROERGONOMIC TWINNING OF HRI SYSTEMS Through digital human modeling (Paul et al., 2021), DTs can contribute to monitoring, assessing, and designing different human-system interactions (Caputo et al., 2019; Greco et al., 2020; Sharotry et al., 2022; Wang B. et al., 2022) according to the perspective of human factors. In particular, neuroergonomics (Mehta and Parasuraman, 2013), especially computational neuroergonomics (Farahani et al., 2019), can advantageously exploit twinning for understanding how the human nervous system works in real contexts (Cheng et al., 2022) and for improving the design of any item interacting with it. This is certainly true for neuroergonomics in HRI contexts (Cassioli et al., 2021), for applications like monitoring motor control difficulties (Memar and Esfahani, 2018), providing robots with adaptive features (Lim et al., 2021), and improving brain-robot interfaces (Mao et al., 2019). Overall, the exploitation of DTs in this field can inherit the corpus of knowledge in neuroscience, especially when human-machine interactions are investigated (Gaggioli, 2018; Ramos et al., 2021). Interestingly, the literature in this area already shows several approaches presenting analogies with PDTs, which can contribute to neuroergonomics in HRI by offering intuitive interactions with a phygital emulation of the human-robot system. For instance, the field of bionic prosthetics (Frossard and Lloyd, 2021) offers this kind of solution, with emphasis on twinning the residual limb more than the device. Interestingly, a framework labeled as a "mechatronics-twin" has been proposed that integrates a 6-DoF manipulator with biomechanical models to explore, through simulations, the operational behaviors of prosthetic sockets with amputees.
FIGURE 1 | A Physical Twin (PT), based on a human-exoskeleton system in a context of usage, connected (on the left) to its Digital Twin (DT) in green, based on a virtual model of the human-robot interaction (HRI) system, or (on the right) to its Phygital Twin (PDT), based on a holographic human "wearing" a real exoskeleton in a laboratory. Such an example sounds quite close to the concept of PDT, which can have the additional features of real-time bidirectionality, intuitive physical interaction, and ecological validity (resemblance to real contexts). Furthermore, Pizzolato et al. (2019) proposed human neuromusculoskeletal (NMS) system models for DTs to improve the outcome of the interactions between users and assistive or rehabilitative machines. NMS models implemented in robot control solutions can offer phygital features. For instance, the output of the interaction between a user and a mechatronic device (possibly enriched by extended reality solutions) can become a quantifiable index of healthy and pathological conditions and of responses to treatments. This would make such an output a peculiar type of digital biomarker (Wright et al., 2017): a "phygital biomarker" or possibly a "neurophygital biomarker" (a promising step in this direction is based on neuromechanical biomarkers for rehabilomics) (Garro et al., 2021). In line with this reasoning, we could think about "neurophygital twins" to extract biomarkers from the activity of their PTs: mechatronic devices like rehabilitative exoskeletons (Buccelli et al., 2022) or, possibly, any other robot (including humanoids) designed to interact with humans wearing sensors. Through intuitive phygital interactions between the researcher or the clinician and the lab replica of the same machine in the real world, neuroergonomic hypotheses on the psychophysiological and motor processes underlying HRIs can be tested in simulated experiments based on a PDT. We could also envision the development of neurorobotic systems mimicking neurocognitive and neuromotor processes to physically replace a virtual human model in a PDT: in this case, the neurorobotic model would be validated through its interaction with another machine within the same PDT. However, before addressing such challenges, the current constraints on our knowledge and know-how must be pondered. Besides the technical limitations of twinning (first of all, the computational burden of emulating neural processes in ecologically valid settings, to say nothing of the connectivity issues in approaching real-time standards), we must also highlight how both DTs and PDTs raise ethical issues concerning privacy and consent in data representation and storage, and concerning concepts like "normality" and enhancement (Bruynseels et al., 2018; Braun, 2021; Nyholm, 2021). These issues should be discussed within the frame of the enablers of and barriers to twinning adoption (Perno et al., 2020), even pondering the opportunities offered by novel technological frameworks (Yi et al., 2022). CONCLUSION This position paper presented a novel "twinning design" concept: the PDT, based on physical replicas of PT components enriched with virtual models and computational features to establish intuitive and reliable phygital interactions with experts. Thus, a PDT would facilitate the experts' task of assessing and improving the PT conditions. Furthermore, PDTs provide neuroergonomics with tools for the iterative human-centered design and evaluation of robotic systems in a "metalaboratory" before and after their deployment.
AUTHOR CONTRIBUTIONS GB devised the conceptual contents and structure of the paper and wrote the initial draft. CP, ML, and LDM improved the document and considered further potential applications of the proposed approach. All authors revised and approved the manuscript.
2,975
2022-06-28T00:00:00.000
[ "Computer Science" ]
UVR2 ensures transgenerational genome stability under simulated natural UV-B in Arabidopsis thaliana Ground levels of solar UV-B radiation induce DNA damage. Sessile phototrophic organisms such as vascular plants are recurrently exposed to sunlight and require UV-B photoreception, flavonol shielding, direct reversal of pyrimidine dimers and nucleotide excision repair for resistance against UV-B radiation. However, the frequency of UV-B-induced mutations is unknown in plants. Here we quantify the amount and types of mutations in the offspring of Arabidopsis thaliana wild-type and UV-B-hypersensitive mutants exposed to simulated natural UV-B over their entire life cycle. We show that reversal of pyrimidine dimers by UVR2 photolyase is the major mechanism required for sustaining plant genome stability across generations under UV-B. In addition to widespread somatic expression, germline-specific UVR2 activity occurs during late flower development and is important for ensuring low mutation rates in male and female cell lineages. This allows plants to maintain genome integrity in the germline despite exposure to UV-B. Due to the filtering conditions used, this UV-B treatment did lead to more UV-A than the control treatment. However, the amount of UV-A radiation in the control treatment reached up to 80% and more for wavelengths greater than 360 nm compared to the UV-B treatments. Below 360 nm the transmission decreased due to the transmission characteristics of the filter glass; therefore, the UV-A radiation was reduced to about 10% at 330 nm compared to the UV-B treatments. The UV-B treatments resembled natural conditions during the main A. thaliana growing season (April/May) along the European north-south UV-B cline at 60°N, 52°N and 40°N, which can be approximated to Helsinki, Berlin, and Madrid, respectively. Wild-type and all mutant genotypes showed comparable growth at rosette stages under control conditions (Fig. 1b). Under the highest simulated natural UV-B, wild-type and uvr8 plants did not show significantly reduced rosette diameters, while tt4, uvr2, uvr3, uvr2 uvr3 and uvh1 mutant plants did (t-test P values: 5.390E−01, 9.113E−01, 4.3E−06, 1.6E−16, 4.4E−02, 2.6E−16 and 8.2E−03, respectively; Fig. 1b). This suggested that not all A. thaliana mutants found to be UV-B- and/or UV-C-hypersensitive in the laboratory would show similar phenotypes under natural UV-B conditions. Mutations in the progeny were identified by whole-genome sequencing (Supplementary Fig. 2 and Supplementary Data 1). This revealed a total of 2,497 novel single-base substitutions and 22 one-to-four base pair deletions. Using di-deoxy sequencing, we confirmed 58 out of 59 randomly selected mutations, suggesting a 1.7% false-positive discovery rate in our analysis (Supplementary Data 2 and Methods). The false-negative mutation discovery rate was estimated to be 0.15% by simulations (see Methods). Wild-type plants without UV-B treatment accumulated on average 2.6, 2.0 and 2.4 spontaneous mutations per haploid genome and generation (hereafter 'mutations') in the first (Fig. 1c), second and third generations (generation average 2.3), corresponding to 2.2, 1.7 and 2.0 × 10−8 mutations per site, respectively (Supplementary Data 1). Similar numbers of novel mutations (2.0-5.7) were observed in the progenies of control uvr8, tt4, uvr2, uvr3 and uvr2 uvr3 plants (Fig. 1c and Supplementary Data 1). In contrast, compromised NER in uvh1 plants resulted in 20.3 mutations.
Whole-genome sequencing of the progeny plants was then used to identify new mutations (Supplementary Fig. 2 and Supplementary Data 1). This revealed a total of 2,497 novel single-base substitutions and 22 one-to-four base pair deletions. Using di-deoxy sequencing, we confirmed 58 out of 59 randomly selected mutations, suggesting a 1.7% false-positive discovery rate in our analysis (Supplementary Data 2 and Methods). The false-negative mutation discovery rate was estimated at 0.15% by simulations (see Methods). Wild-type plants without UV-B treatment accumulated on average 2.6, 2.0 and 2.4 spontaneous mutations per haploid genome and generation (hereafter 'mutations') in the first (Fig. 1c), the second and the third generations (generation average 2.3), corresponding to 2.2, 1.7 and 2.0 × 10⁻⁸ mutations per site, respectively (Supplementary Data 1). Similar numbers of novel mutations (2.0-5.7) were observed in the progenies of control uvr8, tt4, uvr2, uvr3 and uvr2 uvr3 plants (Fig. 1c and Supplementary Data 1). In contrast, compromised NER in uvh1 plants resulted in 20.3 mutations. This represented a 7.8-fold increase (Fisher's exact test, P = 4.9E-12) compared with wild-type and illustrated the importance of NER for general genome stability in A. thaliana. Treatment with 100, 150 and 300 mW m⁻² induced 3.3, 5.0 and 2.8 mutations, respectively, per haploid genome and generation in wild-type plants (Supplementary Fig. 3a). Subsequently, the UV-B_BE dose of 300 mW m⁻² was used as the standard UV-B treatment. Loss of UVR8 and TT4 functions did not significantly change the mutation rates (5.6 versus 7.8 and 5.7 versus 6.7 mutations under control and UV-B; Fisher's exact test P = 0.2203 and 0.6455, respectively; Fig. 1c). In UV-B-treated uvh1 plants, we found 27.4 new mutations, which represented a significant 1.3-fold increase compared with 20.3 new mutations under control conditions (Fisher's exact test, P = 0.03772). The only drastic increase in mutation rate in a single mutant was observed in the progeny of UV-B-irradiated uvr2 plants, containing on average 64.3 new mutations (Fig. 1c). This corresponded to a high 14.7-fold increase over the control uvr2 plants with 4.4 mutations per genome and generation (Fisher's exact test, P < 2.2E-16). The 7.3 new mutations in UV-B-treated uvr3 plants represented a lower, but still significant, 2.1-fold increase over the control treatment (Fisher's exact test, P = 0.01965). UV-B-exposed uvr2 uvr3 double-mutant plants had 66.0 new mutations (Fisher's exact test, P ≤ 2.2E-16; Fig. 1c). The progeny of uvr2 uvr3 plants exposed to 0, 100, 150 and 300 mW m⁻² UV-B_BE revealed on average 2.0, 39.1, 65.3 and 66.0 mutations per haploid genome and generation, respectively (Supplementary Fig. 3b). This corresponded to 19.5-, 32.6- and 33-fold increases and indicated a UV-B dose-dependent accumulation of mutations at the lower doses and saturation at the higher UV-B doses (Fisher's exact test; all P < 2.2E-16 for UV-B versus control; UV-B_BE of 100 versus 150 and 300 mW m⁻²: P = 2.0E-08 and 1.2E-08; UV-B_BE of 150 versus 300 mW m⁻²: P = 0.8978). The UV-B treatment also affected the frequency of non-synonymous amino-acid mutations. They were approximately threefold more frequent in UV-B-treated (300 mW m⁻² UV-B_BE) uvr2 plants versus control wild-type plants (14.7% versus 5.9% of all mutations, respectively; Fisher's exact test P = 0.0254; Fig. 1d). In absolute terms, this corresponded to 10.2 new non-synonymous amino-acid mutations per uvr2 plant, compared with an average of 0.2, 0.4 and 0.5 such mutations in control wild-type, control uvr2 and UV-B-treated wild-type plants, respectively (Supplementary Table 1). We also found phenotypically distinct plants in the third UV-B-irradiated generation of the double mutant (see the example of a semidominant mutant in Supplementary Fig. 3c), suggesting an increased functional impact of the mutations induced by the UV-B treatment on gene integrity in UVR2-defective plants.
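The fold-change comparisons above rest on Fisher's exact tests of mutation counts. The sketch below shows one plausible way such a 2×2 test could be set up, contrasting mutated and non-mutated accessible sites in two groups; the accessible-site total (taken from the Methods) and the scaling of the per-genome averages to summed counts are illustrative assumptions, not the study's data.

```python
# Hedged sketch of a 2x2 Fisher's exact test for a mutation-rate fold change.
from scipy.stats import fisher_exact

accessible = 91_500_000            # ~75% of the ~120 Mb genome (from the Methods)
uvb_muts, ctrl_muts = 643, 44      # e.g. 64.3 vs 4.4 mutations x 10 genomes (assumed)

table = [[uvb_muts, accessible - uvb_muts],
         [ctrl_muts, accessible - ctrl_muts]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"fold change ~ {odds_ratio:.1f}, P = {p_value:.2e}")  # ~14.6-fold here
```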
Spontaneous and induced mutation spectra in A. thaliana
To characterize the treatment-specific mutation spectra, we compared mutations from all control plants with those of all UV-B-treated plants, with the exception of uvh1 samples, which were excluded owing to a 35% rate of A:T→T:A transversions, compared with <10% in the other genotypes (Supplementary Fig. 4a). Consistent with the previous observation of Ossowski et al. (ref. 22), about half (52%) of all substitutions under UV-B-free conditions were G:C→A:T nucleotide transitions (Fig. 2a). The G:C→A:T frequency increased to 88% after UV-B treatment (Fisher's exact test P < 2.2E-16), which led to a significantly reduced proportion of all other substitution types (Fig. 2a; Fisher's exact test P values for control versus UV-B: A:T→G:C, 2.0E-02; A:T→T:A, 9.6E-05; G:C→T:A, 2.1E-05; A:T→C:G, 3.9E-12; G:C→C:G, 1.3E-03). Therefore, simulated natural UV-B caused almost exclusively G:C→A:T nucleotide transitions. To test whether this holds true in the major genome fractions, we quantified mutation spectra in genes and transposons separately (Supplementary Fig. 4b). Under control conditions, G:C→A:T nucleotide transitions remained the major type of change in transposons (66%); however, this trend was absent in genes (23%), where all six possible substitution types showed relatively similar frequencies (10-23%). We also observed more G:C→A:T nucleotide transitions in transposons (65%) than in genes (42%) within the data of Ossowski et al. (ref. 22) (Supplementary Fig. 4c). Surprisingly, after UV-B treatment, the G:C→A:T transition rate changed and was even larger in genes than in transposons (93% versus 87%; Fisher's exact test, P = 0.0038; Supplementary Fig. 4b). Hence, transposons were prone to G:C→A:T transitions under both control and UV-B conditions, while genes were so only during UV-B treatment. To find out whether spontaneous mutations and those induced by UV-B treatment occurred in a particular sequence context, we performed a motif analysis around the mutated sites. This revealed an absence of any specific mutation-prone context in the vicinity of spontaneously mutated G:C→A:T sites in control samples (Fig. 2b). However, within UV-B-treated plants, C→T and G→A mutations occurred preferentially within the TC(C/T) and (G/A)GA contexts, respectively. Such an asymmetric, reverse-complementing pattern strongly suggests that: (i) G→A mutations are C→T mutations on the reverse strand; (ii) mutations induced by the UV-B treatment occur predominantly at the 3′ base of the pyrimidine dimer; and (iii) TC(C/T) represents the UV-B-mutation-prone sequence in A. thaliana.
DNA methylation overlaps with the mutated sites
On the basis of the preferential UV-B mutagenesis of DNA-methylated cytosines in the CpG context in mammals (refs 23,24), we tested for a correlation between DNA methylation patterns and mutations induced by the UV-B treatment in A. thaliana. Because DNA methylation is a very stable epigenetic modification, we used existing genome-wide DNA methylation data sets (refs 25,26). According to the functional types of DNA methylation in plants (ref. 25), we classified cytosines in the CG, CHG and CHH sequence contexts (where H is A, T or C) as being either methylated or non-methylated and scored the methylation status at the mutated positions. This revealed that both spontaneous and induced mutations overlapped with methyl-cytosines (with the exception of the CHH control group, which contained only 15 testable positions) significantly more often than expected at random based on the genome-wide DNA methylation frequencies (Chi-square test with Yates correction, P values for control versus genome and UV-B versus genome: CNN: 1.12E-04 and <2.2E-16; CG: 1.38E-02 and <2.2E-16; CHG: 6.59E-03 and <2.2E-16; CHH: 6.83E-01 and 3.10E-07; Fig. 2c). Hence, this suggests that methyl-cytosine is prone to mutate under UV-B conditions compared with non-methylated cytosine. Because DNA methylation is concentrated in transposon-rich chromosomal regions in A.
thaliana (refs 25,26), we tested whether the mutations show a particular genomic distribution. Both control and UV-B treatments led to hypo-accumulation of mutations in genes, relatively random accumulation in intergenic regions and hyper-accumulation in transposons (Fig. 2d). We confirmed this trend using the independent data set of Ossowski et al. (ref. 22). However, the UV-B treatment induced ~10% more mutations in genic regions compared with control plants. Therefore, the UV-B treatment adds to the mutagenic effect of DNA methylation, but also affects non-methylated cytosines in genic regions.
Accumulation of induced mutations during development
Early embryonic separation of the gametic and somatic cell lineages largely prevents the transgenerational inheritance of somatic mutations in mammals (ref. 27). In contrast, the late separation of germline cells in plants (ref. 28) allows mutations induced during vegetative growth in cells of the apical meristem to be inherited by the progeny. Alternatively, mutations can occur later, after the separation of the male and female cell lineages and/or gamete formation. To determine whether mutations induced by UV-B treatment accumulated during particular developmental stages, we analysed the ratio of heterozygous and homozygous mutations in the progeny of the first generation of plants in the control and UV-B treatments. If all mutations occurred before the differentiation of the male and female organs, we expected a 2:1 ratio of heterozygous versus homozygous mutations in an inbreeding, constitutively monoecious species such as A. thaliana. We found ratios of 1.4:1 (wild-type control), 2.5:1 (wild-type UV-B-treated) and 1:1 (uvr2 control), but there were significantly (8.1-fold) more heterozygous than homozygous mutations (44.22 versus 5.44 per haploid genome, respectively) in the progeny of UV-B-treated uvr2 plants (Fisher's exact test P values when compared with the other groups: 2.95E-08, 5.83E-05 and 7.97E-05, respectively; Fig. 3a). This strongly suggested that the combination of UV-B treatment with the uvr2 genotype leads to mutations mostly after the split of the female and male cell lineages. To validate this, we expressed luciferase-tagged UVR2 under the control of its native promoter (UVR2promoter::UVR2:LUCIFERASE). The reporter line showed strong UV-B-independent, developmentally controlled UVR2 accumulation in meristems (root apical meristem, young leaves, flowers, flower buds, axillary buds, closed anthers and young pistils) and in scars after petals and sepals, and weaker expression in expanded leaves (Fig. 3b-e; control non-transgenic plants are shown in Supplementary Fig. 5). No expression was observed in green or dry seeds (Fig. 3e). The strong UVR2 expression in floral tissues supported the results of our genetic analysis. The occurrence of a high number of mutations in the male and female cell lineages allowed us to test whether there are sex-specific preferences in mutation accumulation in A. thaliana. We grew uvr2 uvr3 plants under control UV-B-free conditions until bolting, then exposed half of the plants to UV-B until flowering and subsequently reciprocally crossed UV-B-irradiated and control plants (Fig. 3f). The resulting F1 hybrids were grown under non-UV-B conditions, and the genomes of eight plants per crossing direction were sequenced; similar numbers of new mutations were found irrespective of whether the male or the female parent had been irradiated (Supplementary Table 2). This suggests that UVR2 is required for the protection of both female and male genome stability, and that UV-B treatment induces a similar number of mutations in both sexual lineages.
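A minimal sketch of the ratio argument used above: if all mutations arose before the separation of the male and female lineages, heterozygous calls should make up two-thirds of the total. The integer counts approximate the per-genome averages quoted in the text, and the use of a one-sided binomial test is an assumption of this sketch; the paper itself reports Fisher's exact tests between groups.

```python
# Hedged sketch of testing the 2:1 heterozygous:homozygous expectation.
from scipy.stats import binomtest

het, hom = 44, 5  # approximating 44.22 vs 5.44 per haploid genome (from the text)
result = binomtest(het, n=het + hom, p=2 / 3, alternative="greater")
print(f"het fraction = {het / (het + hom):.2f}, P = {result.pvalue:.3e}")
```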
Discussion
Land plants are exposed to solar UV-B during their entire life (ref. 3). In order to minimize UV-B-induced damage, plants use multiple protection and repair pathways, including flavonoid sunscreens, direct reversal of pyrimidine dimers and NER (refs 6,8,15,29,30). We determined the frequency of transgenerationally inherited mutations induced by UV-B treatment in A. thaliana wild-type and mutant plants treated with simulated solar UV-B, resembling natural conditions from Helsinki (southern Scandinavia) to Madrid (central Spain). The simulated natural UV-B conditions had only a minimal effect on the rosette growth of wild-type Col-0, indicating that they were well within the photomorphogenic range. A wild-type-like phenotype of the UV-B photoreceptor mutant was unexpected, as uvr8 was found to be UV-B-hypersensitive in previous studies (refs 19,31,32). The most likely reasons were the acute UV-B stress doses applied to non-acclimated plants and/or the use of mutants in a more sensitive genetic background in the other studies. In contrast, tt4 and uvr2 plants were highly sensitive to the simulated natural UV-B, suggesting that flavonoid production and CPD repair, respectively (refs 6,13), are the most important mechanisms sustaining plant growth under simulated natural UV-B. Under control conditions, we observed on average 2.3 × 10⁻⁸ mutations per site, which is approximately threefold more than the previously estimated mutation rates of 7.1-7.4 × 10⁻⁹ for A. thaliana (refs 22,33). This could be because of the presence of UV-A and/or the higher photosynthetically active radiation (PAR; 400-700 nm; 340 µmol m⁻² s⁻¹) fluence rate applied in our control treatment compared with a typical A. thaliana growth chamber (100-150 µmol m⁻² s⁻¹). However, the PAR applied in this study corresponds to a partially shaded natural site, while full exposure to the sun is simulated using much higher PAR fluence rates (800 µmol m⁻² s⁻¹; refs 11,19). The simulated natural UV-B conditions caused only a small (1.2-2.2-fold) increase in the mutation rates of Col-0 wild-type plants. This is in agreement with a previous study, where simulated solar UV-B regimes provoked only one to four germinal somatic homologous recombination events per 250,000 seedlings (ref. 11). The robust protection of A. thaliana transgenerational genome stability against UV-B strongly depends on direct reversal by the UVR2 CPD photolyase (summarized as a schematic model in Fig. 4). The uvr2 plants accumulated, on average, 64.3 new mutations per haploid genome and generation under the simulated central Spain UV-B regime. Some of these mutations apparently led to a loss of function of housekeeping genes within just three generations. In contrast, loss of UVR3 and UVH1 resulted in significant, but much lower, numbers of mutations. This may reflect the low abundance of UV-B-induced (6-4)PPs (10-25%) relative to CPDs (75-90%) and the partial redundancy of NER and UVR3 in the repair of (6-4)PPs, but not of CPDs, in A. thaliana (refs 13,29). DNA sequences prone to accumulate UV-B-induced mutations have been unknown in plants. We showed here that sensitivity to our UV-B treatment is determined by both genetic and epigenetic means. Mutations occurred preferentially in the TC dipyrimidine sequence context, and were enriched at methylated cytosines. This differed from spontaneous mutations, which were determined mainly epigenetically by DNA-methylated sites in transposons, but showed no association with particular short sequence motifs. The typical A. thaliana-hypermutable sequence TC(C/T) identified here differed from those in humans in at least two aspects.
First, we did not observe any CC to TT dinucleotide mutations, which were found frequently in human eyelid cells (ref. 34). Second, in human skin cells the mutated cytosine was frequently followed by a guanine ((T/C)CG) (ref. 23). The high proportion of (T/C)CG mutations in humans is most likely caused by the enhanced formation of pyrimidine dimers at methylated cytosines (refs 23,24,35,36), which are found exclusively in the CG context in mammalian somatic cells (ref. 37). The absence of such a pattern in A. thaliana can be explained by the presence of DNA methylation in any cytosine context in plants and by the low number of methylated cytosines in the A. thaliana genome (refs 25,26). Although mutations induced by our UV-B treatment were enriched in A. thaliana at the positions of methyl-cytosines (27%) relative to the genome background (15%), they were not limited to them, and the majority of the mutations (73%) appeared at non-methylated positions. This trend was weaker for spontaneous mutations (60% at non-methylated sites) and suggested that UV-B-induced and spontaneous mutations may quantitatively differ in generating C→T transitions via indirect (involving a uracil intermediate) or direct conversion, respectively (ref. 38). Animal male and female germline cells separate from somatic cell lineages early during embryo development, and the latter do not divide any more during the post-embryonic phase (ref. 39). In contrast, plant germline cells of undifferentiated sex divide several times during vegetative growth and separate into male- and female-specific cell lineages only during late flower development (ref. 40). This potentially increases the risk of inheriting mutations via somaclonal sectors. In the first post-irradiated generation of control and UV-B-irradiated plants, we found ~1:2 ratios of homozygous to heterozygous mutations. This showed that the spontaneous mutations occurred before the split of the male and female cell lineages, and the same was also true for mutations induced by the UV-B treatment in UVR2 plants. However, there were fourfold more heterozygous mutations than expected in the progenies of UV-B-irradiated uvr2 plants. This provided strong genetic evidence that UVR2 prevents UV-B-induced mutations in germline cells mainly after the separation of the male and female cell lineages, and this UVR2 function seems complementary to its role in resolving CPDs in somatic cells (ref. 13). In mammals, mutation rates can be much higher in male than in female gametes (ref. 39). Here we showed that uvr2 plants derived from UV-B-irradiated male and female reproductive tissues carry almost identical numbers of mutations, suggesting that male and female mutation rates may be more equal in plants. The mammalian mutation bias is caused by the accumulation of mutations from DNA replication errors in sperm, which are the products of many more cell generations than eggs (ref. 39). It is unknown how many cell divisions (and DNA replications) are required for the development of A. thaliana anthers and carpels; however, the information is available from meiosis onwards. At the onset of meiosis there is a single round of DNA replication followed by two rounds of cell division. Subsequently, the released microspore undergoes two rounds of DNA replication and cell division, resulting in one vegetative and two sperm cells. The megaspore replicates and divides three times and produces an embryo sac with eight nuclei in seven cells, including the haploid egg cell (ref. 41). Hence, there is a comparable number of DNA replications in plant mega- versus microgametogenesis.
This may explain the similar numbers of mutations observed in our experiments; on the other hand, it also shows that CPD direct reversal is important in both A. thaliana sexual lineages. This is unexpected, because eggs are embedded much more deeply in plant tissues than pollen and should therefore receive less UV-B damage. We speculate that this may be due to the greatly reduced (haploid and unreplicated) genome constitution during gametogenesis, which may limit the availability of homologous chromosomes and sister chromatids for homology-based DNA damage repair. In addition to its activity in somatic cells, direct reversal of CPDs by UVR2 is the key mechanism protecting the integrity of DNA from UV-B-induced mutations in A. thaliana male and female germline tissues. Direct reversal activity may be particularly important during the plant haploid stage, when homology-based repair pathways may not be fully effective because of limited template availability. Therefore, UVR2 is necessary to avoid solar UV-B-induced genetic defects that could be transmitted to future generations.
Methods
Simulation of solar radiation
Simulation of solar radiation was performed in the sun simulators of the Research Unit Environmental Simulation at the Helmholtz Zentrum München, Neuherberg, Germany. Simulated spectra (280-850 nm; Fig. 1a and Supplementary Fig. 1a,b) were obtained by a combination of metal halide lamps (HQI/D, 400 W; Osram, München, Germany), quartz halogen lamps (Halostar, 300 and 500 W; Osram), blue fluorescent tubes (TLD 18, 36 W; Philips, Amsterdam, the Netherlands) and UV-B fluorescent tubes (TL12, 40 W; Philips). The natural balance from ultraviolet to infrared radiation was achieved by filtering through borosilicate, lime and acrylic glass filters and a water layer, and was measured using a double monochromator system (Bentham, UK). The filtering in the control condition excluded the entire UV-B present in the UV-B treatments. Owing to the filter characteristics, ~80% and more of the UV-A was transmitted under control conditions for wavelengths >360 nm compared with the UV-B treatments, whereas at the shorter wavelength of 330 nm only 10% was transmitted (Supplementary Fig. 1a,b). The standard growth conditions were set to resemble the main A. thaliana growing season: day = 14 h, 21 °C, relative humidity 60%, PAR = 340 µmol m⁻² s⁻¹, which resembles natural PAR at shady sites; night = 10 h, 16 °C, relative humidity 80%, no PAR; UV-B radiation started 1 h after the onset of PAR and lasted 10 h. Dusk and dawn were simulated by switching different groups of lamps on/off. Four irradiation conditions were applied, corresponding to 0 (control), 100, 150 and 300 mW m⁻² UV-B_BE normalized at 300 nm according to the generalized plant action spectrum (ref. 21) (Fig. 1a and Supplementary Fig. 1b). This realistically mimics UV-B_BE doses during spring at northern mid-latitudes (40°N, 50°N, 60°N), for example at Madrid, Berlin and Helsinki, respectively. The simulated UV-B_BE (ref. 21) dose of 300 mW m⁻² (ultraviolet index = 6; UV-B = 1.2 W m⁻²), applied widely in this study, matched well the integrated values of the spectral irradiance in Madrid (UV-B_BE (ref. 21) = 265 mW m⁻²; ultraviolet index = 7; UV-B = 1.3 W m⁻²; modelled for 30 March 2015, 12:00 GMT (total ozone column of 300 DU, surface albedo of 0.1), using the Tropospheric Ultraviolet and Visible model; http://cprm.acom.ucar.edu/Models/TUV/Interactive_TUV/; Fig. 1a).
Plant material
The following A.
thaliana homozygous genotypes in the Col-0 background were used: wild-type; uvr8-6 null (ref. 19) (SALK_033468), tt4 (SALK_020583C), uvh1 (SALK_096156C), uvr2 (WiscDsLox466C12), uvr3 (WiscDsLox334H05) and uvr2 uvr3. Each genotype was amplified twice by single seed descent to reduce any potential heterozygosity, and the resulting seed population was bulk-genotyped before the mutation accumulation experiments (Supplementary Fig. 1c). Seeds were sown on a standard soil, and 15 plants per genotype were kept under the described UV-B conditions until seed harvest. Using a single seed descent amplification strategy, we produced three UV-B-irradiated generations (Supplementary Fig. 1c). Note that the sequenced and the irradiated plants were not identical, but siblings (that is, seeds from a G1 UV-B-irradiated parent were split into several parts; one part was grown in the sun simulator as the UV-B-irradiated G2, and the second part was grown in a non-UV-B chamber to obtain material for sequencing). This was done in order to avoid stressing the UV-B-irradiated plants with additional wounding damage that could potentially influence mutation frequencies. The UVR2promoter::UVR2:LUCIFERASE reporter line was constructed using the Gateway System (Invitrogen); the Gateway binary vector pGWB435 was used to fuse the firefly LUCIFERASE gene to the C terminus of UVR2. The line stably expressed the construct over multiple generations, and mapping of the T-DNA position by TAIL-PCR excluded disruption of a gene open reading frame by the T-DNA.
Nucleic acid isolation and whole-genome sequencing
From the 15 irradiated plants per generation, genotype and treatment, we randomly selected five individuals and grew one progeny plant per individual in a chamber without UV-B radiation for 3 weeks. Subsequently, vegetative rosettes were harvested and DNA was extracted with a Nucleon Phytopure Kit (GE Healthcare). Sequencing libraries were prepared using a TruSeq DNA Kit (Illumina). Fragment sizes and library concentrations were assessed on a Bioanalyzer (Agilent), and high-quality libraries were 100-bp paired-end-sequenced on a HiSeq2500 (Illumina) instrument to an average 35× genome coverage (Supplementary Fig. 2 and Supplementary Data 1).
Mutation detection and validation
Reads were adaptor- and quality-trimmed using SHORE (v8; ref. 42). Filtered and trimmed reads were aligned to the Col-0 reference sequence (TAIR10, 119 Mbp) using GenomeMapper (ref. 43) integrated in SHORE (v8), allowing a maximum of 5% of the read length as mismatches, including a maximum of 5% gaps. Read pair information was used to help remove redundant alignments. Only uniquely mapped reads (after read pair correction) were considered. In order to remove reads originating from the same molecule (because of PCR amplification), we also removed reads with identical 5′ alignments using SHORE. Next, we generated a genome matrix containing information on the total coverage and the single-base counts for A, C, G, T, - and N for each re-sequenced genome at each reference sequence position. Positions covered by <20 reads were marked as low coverage. All other positions were classified as follows: (i) homozygous wild-type, (ii) homozygous mutant, (iii) heterozygous or (iv) undefined, based on the allele frequency of the non-reference alleles. Frequency thresholds were determined empirically (Supplementary Data 1 and Supplementary Figs 6 and 7).
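As a concrete sketch of this classification step, the function below maps a position's non-reference allele frequency and coverage onto the call categories. The thresholds follow Table 1 at the end of these Methods; the >0.9 bound for accepted homozygous calls is inferred as the complement of the 0.8-0.9 "undefined" band, and the inclusive/exclusive handling of the interval boundaries is an assumption of this sketch.

```python
# Hedged sketch of the per-position call classification (thresholds per Table 1).
def classify_position(nonref_freq: float, coverage: int) -> str:
    if coverage < 20:
        return "low coverage"
    if nonref_freq > 0.9:
        return "homozygous mutant (accepted)"
    if nonref_freq > 0.8:
        return "undefined (not accepted)"
    if nonref_freq > 0.3:
        return "heterozygous (accepted)"
    if nonref_freq > 0.1:
        return "putative sequencing error (not accepted)"
    return "homozygous wild-type (accepted)"

for freq, cov in ((0.95, 40), (0.85, 40), (0.55, 40), (0.15, 40), (0.02, 40), (0.5, 12)):
    print(f"freq={freq:.2f}, cov={cov}: {classify_position(freq, cov)}")
```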
Low-complexity and tandem repetitive genome regions (comprising 2.95 Mb of the reference sequence), identified by RepeatMasker and TandemRepeatFinder, were excluded during this step to avoid false-positive mutation calls. Novel mutations should be specific to the genome under consideration (the focal genome). Therefore, we compared the variant/allele call in the focal genome with the alleles in nine other genomes of the same genotype (using only the first generation). For focal genomes in generations two and three, we excluded the respective parental genome from this filtering step. A variant call was considered a novel mutation if none of the other nine genomes showed the same variant and at least six of them showed evidence for a homozygous wild-type allele at this position (Table 1). In addition, we used the following criteria for background filtering: (i) more than one of the background genomes is labelled 'undefined'; (ii) one of the background genomes shows a different homozygous or heterozygous mutation at the same base position; (iii) more than three of the background genomes are insufficiently (<20×) covered; or (iv) fewer than six background genomes have homozygous wild-type allele calls at the respective position. We kept track of each position that could be analysed in the focal sample even if the position was called homozygous wild-type (accessible sites), in order to assess the frequency of mutated versus non-mutated accessible sites. The accessible sites included ~75% of the ~120 million sites of the nuclear genome. The normalized number of mutations per genome was calculated as n = ((total genome / accessible genome) × number of accepted mutations) / number of treated generations. Assignment of mutations to different genome regions (genes, TEs and intergenic regions) was carried out using the current A. thaliana genome annotations (TAIR10) for genes and TEs. If a TE overlapped with a gene model, we considered the overlapping part as TE, based on the notion that this is frequently DNA-methylated in all cytosine contexts. TE genes were also treated as TEs in our analysis.
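The normalization formula above translates directly into code. The sketch below uses the ~120-million-site genome and ~75% accessibility figures from the text; the example count of accepted mutations is hypothetical and chosen only to illustrate the order of magnitude.

```python
# Direct transcription of the normalization formula:
# n = ((total genome / accessible genome) * accepted mutations) / generations.
def normalized_mutations(accepted: int, generations: int,
                         total_sites: float = 120e6,
                         accessible_sites: float = 91.5e6) -> float:
    return (total_sites / accessible_sites) * accepted / generations

# Hypothetical example: 49 accepted mutations after one treated generation.
print(normalized_mutations(accepted=49, generations=1))  # ~64.3 per haploid genome
```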
Estimation of false mutation rates with simulated data
We introduced 900 in silico mutations into the Col-0 reference sequence (TAIR10); 308 were homozygous and 592 were heterozygous, reflecting the spectrum of mutations reported in this study. We simulated 25 million 100-bp Illumina read pairs with an insert size of 370 bp and a sequencing error rate of 2% using wgsim (https://github.com/lh3/wgsim). The sequencing depth for the simulated genome was 41×, which is even slightly lower than the average coverage obtained for the real data (60×). The analysis was performed as described before, and nine of the sequenced G1 Col-0 genomes (five control and four Madrid-like UV-B) were used as background genomes for filtering. The allele frequency distribution for variable sites in the simulated genomes was similar to the distributions observed in the real data (Supplementary Figs 6 and 7). However, as the simulated data showed many more variable sites, the simulated sequencing error rate (2%) appeared to be higher than that in the real data. We found a clear separation in the allele frequencies of homozygous and heterozygous variants (Supplementary Figs 6 and 7b). However, the distributions also revealed that many of the putative heterozygous variants with an allele frequency between 0.1 and 0.2 are masked by a huge number of putatively erroneous sites with low mutant allele frequencies. In contrast, only a much smaller number of putative heterozygous sites was observed with an allele frequency between 0.2 and 0.8 in both data sets (Supplementary Figs 6 and 7a). Assuming that the frequencies of real heterozygous sites should be normally distributed with a mean of 0.5 implies that variants with a frequency <0.3 seemingly include many false-positives. The minimal turning point at 0.3 in the histogram indicates that using this value as a cutoff ensures that the majority of false-positives are excluded while only a very small number of true-positives is sacrificed. We found in total 91,500,586 accessible sites (75% of the genome) in the simulated data, which is similar to the real data. In all, 24% of the simulated mutations were in regions that were not accessible according to our definitions. Note that this does not affect the mutation rate estimations, as the mutation frequency is estimated across the number of accessible sites. Of the remaining 685 in silico mutations located at accessible sites, 684 were identified by our approach (Supplementary Fig. 7b). Only one heterozygous mutation could not be reported, as it had an allele frequency below 0.3. Together, this simulation suggests a false-negative rate of 0.15%. We did not encounter any false-positives in this simulation, suggesting that our strict cutoffs are very robust against false-positives even at high sequencing error rates. In order to support this finding, we tested by Sanger sequencing a random set of 59 candidates from the total of 2,497 mutations identified in the real sequencing. We were able to confirm 58 of them (Supplementary Data 2).
DNA sequence motif analysis
For each accepted mutation, we extracted the positions three bases up- and downstream of the respective position. Mutations were grouped by the type of base change (for example, C→T), and the extracted sequences were used as input for the software weblogo v3.4 (ref. 44), which generates bit scores for each base (A, C, G or T) at a specific position. If a base is found more often than expected according to the background probability of each base (here C = G = 0.2, A = T = 0.3), it gets a higher bit score.
DNA methylation analysis
DNA methylation data were retrieved from the publicly available wild-type A. thaliana data sets GSM980986, GSM980987 and GSM938370 (ref. 26). Only nucleotide positions with ≥10 sequencing reads were considered for the analysis. A cytosine was considered methylated if its methylation frequency reached ≥10% in at least two biological replicates. Because these criteria are partially different from those applied in other studies (refs 25,26), we obtained generally higher DNA methylation frequencies. The statistical significance of the results was tested, as the number of methylated and unmethylated cytosines in sample A versus sample B, using the Chi-square test with Yates correction.
Data availability
Illumina reads generated in this study have been deposited in the European Bioinformatics Institute (EBI) database under accession number PRJEB13889 (http://www.ebi.ac.uk/ena/data/view/PRJEB13889). All other data supporting the findings of this study are included in the manuscript and its supplementary files or are available from the corresponding authors upon request.
Table 1: Classification of variant calls by non-reference allele frequency.
>0.9 - Homozygous mutation, accepted
0.8-0.9 - Undefined, mutation not accepted
0.3-0.8 - Heterozygous mutation, accepted
0.1-0.3 - Putative sequencing error, not accepted
<0.1 - Reference allele, accepted
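To make the motif-analysis step described in the Methods above concrete, here is a minimal sketch of collecting the ±3-base contexts around C→T mutated sites before handing the sequences to a logo tool such as weblogo. The toy genome, positions and helper names are hypothetical, not the study's pipeline or data.

```python
# Hedged sketch: extract +/-3 bp windows around mutated sites, grouped by change.
from collections import Counter

def contexts(genome: str, mutations, change=("C", "T"), flank=3):
    """Return flank..flank windows centred on positions mutated ref->alt."""
    out = []
    for pos, ref, alt in mutations:          # 0-based position, (ref, alt) bases
        if (ref, alt) == change and flank <= pos < len(genome) - flank:
            out.append(genome[pos - flank: pos + flank + 1])
    return out

genome = "TTACGTTCCTAGGTCTAAGGAATCCT"          # toy sequence, not A. thaliana data
muts = [(7, "C", "T"), (8, "C", "T")]          # hypothetical C->T sites
windows = contexts(genome, muts)
print(windows)                                 # 7-mer windows around each C->T site
print(Counter(w[2:5] for w in windows))        # central trinucleotides, e.g. TCC/CCT
```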
The Ecological and Social Costs of Economic Development and Their Influence on Management in Europe
This article reviews the appearance of ecological and social costs of economic development within the globalization process in order to illustrate their influence on European management, with a particular focus on ecology. The article concludes that the process of European unification is successful in many areas but also faces many problems and challenges. Among the most important are social inequalities, unemployment and the destruction of nature. Thanks to the development of science and technology, the creation of new economic activities in Europe is possible. These can help both to solve ecological problems and to create jobs in many sectors of the economy. Contemporary management should play a very important role in this process. Europe, as one of the first continents to start the dynamic process of industrialization and the destruction of nature, can also be the first to unite the contemporary economy with pro-ecological activities. This would make it possible to connect the solving of social and ecological problems.
Introduction
The article starts with an introduction to the process of globalization and illustrates the challenges resulting from economic development. It continues with the formulation of the research problems: Do possibilities exist to reduce the ecological and social costs of economic development? If the answer is "yes" - in which way can this be achieved? Can environmental management create an alternative? Hence the hypothesis was formulated: it is possible to reduce the ecological and social costs of economic development by combining economic activities with pro-ecological ones within environmental management. In the process of gathering data, the documentary research method was used.
The Globalization Process
Since the end of the Second World War the globe seems to have been shrinking rapidly, with the mutual relationships between its particular regions increasing surprisingly quickly. As a result, over the past decades, the term "globalization" has emerged, becoming a synonym for progress and prosperity for its proponents, while for its opponents it became a slogan for a reactive system of capitalist exploitation. Despite the continual controversy concerning the term's origin and conditions of emergence, its procedural character and global influence on economics, as well as on the exchange of goods and information, is undoubted. [2 p. 547] The 21st century appears to be a particular time in the history of mankind. The processes commenced in the previous centuries are leading to a transformation of lifestyle on an unprecedented scale. [3 pp. 104-108] There are also positive achievements of civilization, as evidenced by the elimination of certain illnesses, the limitation of famine and the general, if unbalanced, technical development. [4 p. 2124]
Negative Effects of Globalization
However, there is also a multitude of negative effects of running such policies as exploiting the natural resources of the Earth in a manner that is not thought out but rather resembles plundering. "Large areas of the Earth's surface, especially in arid and semi-arid regions, have been used for agricultural production for millennia, yielding crops for an ever-increasing number of people. Concerns about the relationship between population growth and environmental degradation are frequently focused rather narrowly on aggregate population levels.
Yet, the global impact of humans on the environment is as much a function of per capita consumption as it is of overall population size. For example, the United States comprises only 6 per cent of the world's population, but it consumes 30-40 per cent of our planet's natural resources. Global overconsumption and uncontrolled population growth present a serious problem for the environment. Unless we are willing to change the underlying cultural and religious value structure that has combined with the social and economic dynamics of unrestrained capitalist accumulation, the health of Mother Earth is likely to deteriorate even further." [5 pp. 85-86] Modern times are also characterized by a large dissonance between commonly declared slogans such as "human dignity" and "human rights" and the practices in force in many parts of the world, which differ greatly from these declarations. [6 p.185] This refers to a large extent to the issue of military conflicts as well as to social and ecological imbalance. [7 p.175]
Political Integration in Europe
Another symptom of the changes reflecting the spread of globalization is the occurrence of "political integration", a symptomatic symbol of which is the process of unification in Europe. Likewise, in other parts of the world processes of "integration" are emerging - as exemplified by the American continent. The emergence of large transnational structures in the shape of the European Union admittedly does not signify the immediate disappearance of nations, but it does constitute a major step towards transferring elements of previously held power from the national level to a supra-national level. This is accompanied by an awareness among citizens of belonging to a greater organizational structure that transgresses the previous national boundaries, as well as by the creation of a European cultural community within the continent. It is necessary to mention the practical side of this phenomenon: migration flows, the spreading of familiarity with foreign languages, mixed marriages, the use of different systems of education, as well as the impact of mass culture on the inhabitants of various countries. This leads to the erosion of the existing concepts of national states and the birth of new forms of ethnic and cultural identities. In the logic of the integration processes, the shift of the burden from the centre to the lower organizational levels is a natural sequence of events. This particularly refers to those countries which consist of culturally "independent" provinces or possess significant ethnic minorities; for example, in the EU there is a clear striving towards passing on some of the responsibility for decision-making and its realization to the regions in question. This helps, to a certain extent, to release ethnic tensions where separatist trends occur in some countries.
Problems and Challenges
The process of European unification is successful in many areas but also faces many problems and challenges. Among the most important are social inequalities, unemployment and the destruction of nature. "Transboundary pollution, global warming, climate change, and species extinction are challenges that cannot be contained within national or even regional borders. They do not have isolated causes and effects. They are global problems, caused by aggregate collective human actions, and thus require a coordinated global response.
To be sure, ecological problems aggravated by globalization also have significant economic ramifications. Although these effects will be more significant for less developed countries than for rich countries, they will nonetheless affect all people and all nations. [...] They are economic, political, cultural but above all ethical issues that have been expanded and intensified by globalization processes." [5 p.90] In contemporary times, we are also faced with a rich variety of production techniques - many of which are deemed "damaging for the natural environment" and yet are tolerated by the very politicians who refer to the need to protect the "natural resources" of our planet. Much points to the fact that in the period of the industrial revolution an unwritten rule was created giving priority to economic interests over the protection of the natural environment, thus facilitating the realization of policies of economic growth while seriously ignoring their impact on nature. While it is true that the dramatic effects of the degradation of the Earth's resources finally forced the inclusion of ecological issues in the economic sphere, the practice of "choosing the lesser evil" is still very much in evidence today - the so-called conflict between economic and ecological interests is usually resolved at the expense of the latter. "What is currently at stake is the conservation of the Earth and the biosphere, the thin layer that is the scene of all of life. Desertification, deforestation, erosion, ozone depletion, acid rain and the greenhouse effect are just a few of the threats facing us. Another spectre is that of nuclear war, which would certainly be the end of everything. Human suffering and the destruction of Nature are everyday realities. At the same time, the call for action is becoming louder. Something must be done before Man effectively destroys the world; this awareness is gaining ground at all levels [...] The question is: how can we turn the tide, how can we create and preserve a future for coming generations, with adequate scope for nature and a healthy environment?" [8 p.9] Significant changes have occurred over the last few years in the sphere of agriculture. The so-called "mad cow disease" and foot-and-mouth disease showed the limits of the industrial fattening of animals - the spongiform degeneration of the brain is, however, only one of many possible dangers associated with the policy of maximizing profits in agriculture at all costs. The mass production of cheap food entails negative consequences for health that are difficult to foresee, mainly due to the use of chemical substances in agriculture. The acceptance on the market of genetically manipulated food which has not been sufficiently examined in long-term tests appears particularly controversial. It is also important not to forget the ecological costs of the mass fattening of animals. It is necessary to consider the sense of changing the form of production - perhaps working out a pro-ecological form of agricultural production which would produce healthy food, not destroy the natural environment and provide increased possibilities of employment. We should move away from large breeding farms which produce unhealthy food in a manner that is harmful to the environment. "Another significant ecological problem associated with population increases and the globalization of environmental degradation is the worldwide reduction of biodiversity.
Seven out of ten biologists today believe that the world is now in the midst of the fastest mass extinction of living species in the 4.5-billion-year history of the planet. According to recent OECD reports, two-thirds of the world's farmlands have been rated as "somewhat degraded". Half the world's wetlands have already been destroyed, and the biodiversity of freshwater ecosystems is under serious threat. Three-quarters of the worldwide genetic diversity in agricultural crop and animal breeds has been lost since 1900. Some experts fear that up to 50 per cent of all plant and animal species - most of them in the global South - will disappear by the end of this century." [5 p.87] Constructive steps on the road to improving the situation in agriculture are hindered by the "agrarlobby", which benefits from the current situation; its strength is seen not so much in the number of citizens employed in agriculture as in the economic and political potential at its disposal. [9]
Pro-Ecological Economy
Laws to prevent environmental destruction should be connected with pro-ecological economic activities. This is a very important source of the creation of new jobs. [10] A particular area of activity in welfare states should be efforts aimed at preventing the negative effects of the process of globalization. [11 p. 513] Many proposals have been repeatedly offered by the governments of particular countries aimed at eliminating the possibilities of various abuses - both on the part of particular producers and of states themselves. Recently, there has also been an increase in the significance of eco-tourism, whose services are used by an increasing number of health-conscious people. The unhealthy living conditions that exist in large city agglomerations encourage people to search for alternatives, both in the form of "healthy" holidays or a few days of rest, e.g. on Saturdays and Sundays, and in the form of changed lifestyles and ways of living - evidence of this is the increase of "green belts" in city areas. The aforementioned changes create new jobs and lead to the formation of healthier habits that are safer for the natural environment. [13 p.165] Established social standards should be maintained through international agreements forcing capitalists to take responsibility for the businesses they run in the countries of their choice - mainly by adhering to legal and tax systems. Without questioning the sense of such a step, it is important to note that at present the failure to take any measures is at odds with the fact that western states have the means at their disposal to force economic magnates to run a more pro-social form of economic activity. Companies focused on profit apply the concept of "moving capital", which means moving production to countries where they have access to a cheaper workforce and greater tax concessions. More detailed analysis indicates that this is by no means the end of the movement of capital - very frequently, products made this way are returned to the company's mother country with the aim of selling the goods there, as the place where the goods originated does not usually have the appropriate dynamics of purchasing power, due to the poverty of those societies. This is connected with the fact that western countries are still the most powerful market, which makes them attractive to various producers.
If such a procedure were not possible, the western countries, instead of receiving such products with open arms, would block their access to the market by pointing out the dishonesty of such practices, and firms moving capital would, for fear of losing profits, be considerably more careful about making decisions to move production facilities abroad. Regardless of the use of these possibilities of action, the developed countries should strive to reduce the costs of production, as these amounts are often associated with the need to finance various undertakings in the form of "additional costs" - e.g. social care. Aside from this, it is important to add that, relatively speaking, the level of earnings of employees in western Europe is not purely economically motivated - it is influenced by other factors, e.g. tariff conditions. [14 pp. 109-123]
Conclusions
In the course of the research, the hypothesis "it is possible to reduce the ecological and social costs of economic development by combining economic activities with pro-ecological ones within environmental management" was verified. Thanks to the development of science and technology, the creation of new economic activities in Europe is possible. [15 pp. 38-48] These can help both to solve ecological problems and to create jobs in many sectors of the economy. Contemporary management should play a very important role in this process. Europe, as one of the first continents to start the dynamic process of industrialization and the destruction of nature, can also be the first to unite the contemporary economy with pro-ecological activities. This would make it possible to connect the solving of social and ecological problems.
Risk analysis and reliability of the GERDA experiment extraction and ventilation plant at the Gran Sasso mountain underground laboratory of the Italian National Institute for Nuclear Physics
The aim of this study is the evaluation of the risk of an argon release from the GERDA experiment in the Gran Sasso underground National Laboratories (LNGS) of the Italian National Institute for Nuclear Physics (INFN). The GERDA apparatus, located in Hall A of the LNGS, is a facility with germanium detectors located in a wide tank filled with about 70 m³ of cold liquefied argon. This cryo-tank sits in another, water-filled tank (700 m³) at atmospheric pressure. In such cryogenic processes, the main cause of an accidental scenario is a loss of insulation of the cryo-tank. A preliminary HazOp analysis has been carried out on the whole system. The risk assessment identified two possible top events: an explosion due to a Rapid Phase Transition (RPT) and argon runaway evaporation. The risk analysis highlighted a higher probability of occurrence of the latter top event. To avoid emission into Hall A, HazOp, Fault Tree and Event Tree analyses of the cryogenic gas extraction and ventilation plant have been made. Failures related to the ventilation system are the main causes responsible for its occurrence. To improve the system reliability, some corrective actions were proposed: the use of a UPS and the upgrade of the damper opening devices. Furthermore, the Human Reliability Analysis identified some operating and management improvements: action procedure optimization, alert warnings and staff training. The proposed model integrates existing analysis techniques by applying them to an atypical work environment, and offers useful suggestions for improving system reliability.
Introduction
According to many authors, to improve safety one has to know where the risks are (Pasman et al., 2009). This is certainly true when it is necessary to design the safety of complex systems, where the predictive analysis of failure modes requires identifying the hazardous conditions, quantifying their probability of occurrence and defining representative accident scenarios. The representativeness of these scenarios depends on the knowledge of the production processes and system parts, and quantitative risk analysis requires that all failure events be considered (Zhao et al., 2016). The proposed methodologies have been separated into three different phases: identification, evaluation and hierarchization (Tixier et al., 2002). From the accidental risk analysis, during the identification phase, (a) a pressure rise in the cryostat over the design value and (b) the exceeding of the containment and insulation conditions of the cryogenic liquid were identified, and it was possible to identify a dominating critical scenario due to the mixing of the shielding water of the water tank with the cryogenic liquid (LAr), which leads to an explosive effect due to a Rapid Phase Transition (RPT). This mode can be considered a critical sub-system of containment loss (mixture of LAr and water). A second scenario is a significant rise of cryogenic liquid evaporation above the functional values. There is a risk of hypo-oxygenation and hypothermia, depending on the way in which the release itself is managed. This scenario has the draining flow limit as its critical sub-system.
Based on the Safety Management System procedures currently in use at the Laboratories, the risk analysis for each apparatus is a complex, step-by-step procedure that must be completed and agreed upon before the installation of the experiment. Based on the whole available documentation, the proposed analysis has been focused on the critical factors (evaluation phase) related both to the cryogenic liquid evaporation and to the functionality of the extraction/ventilation system, in order to identify, as the hierarchization phase, possible failure causes of the system and to evaluate the device reliability of the system. The "core" of the research consists of the application of industrial risk assessment techniques and methodology in a high-technology context and at a "prototype scale", as each experiment is truly unique in the world. Moreover, attention to safety issues has to take into account the boundary conditions of the underground labs: the confined area and the proximity to a public motorway tunnel.
Gran Sasso National Laboratory (INFN)
The INFN Gran Sasso National Laboratory (LNGS) is the largest underground laboratory in the world devoted to neutrino and astro-particle physics, and it offers the most advanced underground infrastructures in terms of dimensions, complexity and completeness. LNGS is funded by the National Institute for Nuclear Physics (INFN), the Italian institution which coordinates and supports research in the field of elementary particle, nuclear and subnuclear physics. The laboratory is on one side of the 10-km long highway tunnel which crosses the Gran Sasso massif. It consists of three huge experimental halls (Hall A, Hall B and Hall C, each 100 m long, 20 m wide and 18 m high); the connection among the halls is provided by other smaller galleries: the car tunnel, the truck tunnel and connecting tunnels. The halls are equipped with all the technical and safety equipment and plants necessary for the experimental activities and to ensure proper working conditions for underground users. The 1,400 m rock layer above the Laboratory represents a natural coverage that reduces the cosmic ray flux by a factor of one million; moreover, the flux of neutrons in the underground halls is about a thousand times less than on the surface, due to the very small amount of uranium and thorium in the calcareous rock of the mountain. The shielding from cosmic radiation provided by the rock coverage, together with the huge dimensions and the impressive basic infrastructure, makes the Laboratory unmatched in the detection of weak or rare signals, which are relevant for astro-particle, subnuclear and nuclear physics. The research areas are:
- the study of rare nuclear phenomena;
- the study of the most penetrating components of cosmic rays;
- neutrino physics;
- dark matter.
LNGS is subject to the European Directive Seveso III (2012/18/UE): the underground labs are classified as major-accident-hazard plants due to the presence of experiments using and storing remarkable amounts of substances classified as dangerous for the environment. According to Seveso III, LNGS has adopted a Safety Management System (SGS), and before starting any activity or new project/experiment, LNGS and the Experimental Collaborations must carry out a Safety Risk Analysis in order to evaluate the probability of occurrence of possible events and to guarantee the highest safety standards in a complex system such as the one in which LNGS is involved.
The GERDA experiment cryogenic gas extraction
The GERDA experiment was proposed in 2004 as a new ⁷⁶Ge double-beta decay experiment at LNGS. The GERDA installation is a facility with germanium detectors made out of isotopically enriched material. The detectors are operated inside a liquid argon shield: the GERDA experiment has been designed for the clean handling and stable long-term operation of the Germanium Detector Array in a shield of liquefied gas, copper and water that suppresses the environmental radioactivity by a factor of ~1/10⁸. The Ge detectors are lowered from the lock in the clean room into the centre of a double-walled, vacuum-insulated cryostat (Ø 4.2 m, H = 9 m), which is filled with 6.5 × 10⁴ litres of liquid argon (T = -175 °C). The cryostat (see Figure 1) is manufactured from 30 tons of selected stainless steel of low radioactivity; its vertical walls are covered with 16 tons of ultrapure copper. The cryostat adheres to the principle of "leak before break", has no penetration below the fill level and is certified for 1.5 bar overpressure, being actually operated at 1.2 bar. The shield is completed by a tank (Ø = 10 m, H = 9 m) filled with ultrapure water; the water level rests at 8.4 m and the tank contains 580 m³ of purified water. It not only suppresses the external gamma radiation but also moderates and absorbs neutrons very efficiently. The water also serves as a radiator for a Cherenkov detector which allows the identification and veto of the few muons, ~60 per hour, which penetrate through the Gran Sasso massif into the GERDA setup. The water tank can be completely drained in less than 2 hours. The water tank has been built around the cryostat from top to bottom: the roof and topmost cylindrical ring were built first and were then lifted by the hall crane for the assembly of the next ring underneath. All vertical surfaces within the water tank, including those of the cryostat, have been covered with a reflective and wavelength-shifting foil for improved detection of the Cherenkov light. The purple layer on the cryostat's wall consists of 6-mm thick extruded polystyrene foam serving as a thermal impedance which limits the evaporation in case of a leak in the inner container. A similar barrier is mounted on the inner wall. The final section of GERDA's own extraction and ventilation system is connected to the main general exhaust of the Underground Laboratories, which relies on two air pumping stations: the Assergi (AQ) station and the Casale San Nicola (TE) station. The Underground Laboratories ventilation system ensures the ejection of smoke and gases out of the laboratories up to a flow rate of ≈6 × 10⁴ m³/h. The LNGS ejection system for GERDA cryogenic gases is designed to ensure a maximum flow rate of 10⁴ m³/h: the ejection point, to which all the cryogenic gases are directed and where the heat exchanger is installed, is close to the clean room, at a height of 7.30 m. The underground laboratories ventilation system is managed by "slow-control" software. Two motorized dampers are installed close to the connection between the ejection system and the underground laboratories ventilation system; furthermore, another damper is installed on the new ejection system to provide air ejection from the heat exchanger release point. The system is equipped with AISI 304 tubes, with thickness in compliance with regulations, and is structured with a support and anchoring system, manual air dampers and extraction grills, motorized air dampers, and control and detection devices.
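As a crude, clearly non-authoritative screening sketch of why a runaway argon release matters for the hall atmosphere, the snippet below combines the hall dimensions and flow rates quoted in this section in a well-mixed argon balance. Perfect mixing, constant rates and the neglect of the general fresh-air supply are simplifying assumptions; this is not part of the study's analysis.

```python
# Crude well-mixed balance of gaseous argon in Hall A during a sustained
# release, with extraction at the design flow rate. All modelling choices
# here are simplifying assumptions, not the study's assessment.
import math

V = 100 * 20 * 18   # Hall A volume, m^3 (100 m x 20 m x 18 m, from the text)
q_in = 1e4          # argon release rate, m^3/h (order of the initial evaporation rate)
q_out = 1e4         # extraction flow rate, m^3/h (design maximum)

def argon_fraction(t_hours: float) -> float:
    """Argon volume fraction from dC/dt = q_in/V - (q_out/V) * C, C(0) = 0."""
    c_inf = q_in / q_out
    return c_inf * (1.0 - math.exp(-q_out * t_hours / V))

for t in (0.5, 1.0, 2.0):
    print(f"t = {t:>3} h: argon fraction ~ {argon_fraction(t):.1%}")
```

Even this rough balance suggests oxygen displacement on the scale of hours, which is consistent with the hypo-oxygenation concern raised in the hazard identification below.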
The GERDA extraction system has to guarantee a constant ejection at a flow rate of 2.5×10³ m³/h, rising up to 10⁴ m³/h in case of a "sudden" cryogenic gas release.

Hazard identification

The main cause of an accidental scenario is a loss of insulation of the cryo-tank. A preliminary HazOp analysis was carried out on the whole system; in particular, the risk assessment identified the two most critical top events: -TOP EVENT 1: explosion due to a Rapid Phase Transition (RPT); -TOP EVENT 2: argon runaway evaporation.

An RPT explosion results from contact between the liquid argon and water, producing a large amount of gaseous argon and a shock wave. In this case, the loss of insulation of the cryostat could be due to an overpressure or to a crack in the vessel (e.g., human error in the welding phase, wet corrosion, etc.). The use of an "intrinsically safe" cryostat made it possible to reduce the estimated probability of occurrence of an RPT to 10⁻⁸ events/year (Guarascio et al., 2013). The GERDA cryostat, in fact, is built from suitable materials (leak before break), with a double wall and double containment.

Argon runaway evaporation beyond functional values leads to asphyxiation and hypothermia risks. The estimated thermal power on the argon is 5 kW/m², with an initial evaporation rate of 10⁴ m³/h. A double polycarbonate (Lexan) layer is inserted on the inner and outer walls of the cryostat in order to reduce the thermal transmission coefficient and achieve greater insulation of the cryostat.

The hazard analysis conducted before the approval and installation of the GERDA experiment gives the results summarized in Table 1.

Table 1 Hazard analysis results.

According to Italian Legislative Decree 334/99, all events with a probability of occurrence greater than 10⁻⁶ events/year have been considered (Guarascio et al., 2013). Among the different events identified in the Risk Assessment, Top Event 2 certainly has the highest probability of occurrence. Indeed, over the whole design process and analysis, several structural measures have been put into practice: -the original single copper wall cryostat has been replaced by a double-wall stainless steel one; -the two stainless steel walls are mutually independent, guaranteeing a "double containment wall all over"; -the vacuum gap between the walls is monitored; -the cryostat has been "coated" with a Lexan layer on both the inner and the outer side, and with a Mylar layer on the outer side, completely "wrapping" the whole cryostat; -a thermo-mechanical analysis shows that the leak from one wall as a consequence of a single break in the other is drastically reduced.

For the above reasons, the RPT event turned out to be extremely unlikely (< 10⁻⁸ events/year), and attention has been focused on TOP EVENT 2 (Marcoulaki et al., 2016), deepening the analysis and proposing technical improvements.

HAZOP Analysis of the extraction and ventilation plant

Hazards related to the extraction and ventilation plant have been identified: the plant has been divided into nodes, and for each node process parameters and guide words have been applied (Groth et al., 2012).
The primary objective of the plant is to ensure, in an emergency situation, gaseous argon extraction up to a flow rate of 10⁴ m³/h. Considering the scope of the plant, the HazOp Analysis (Stefana et al., 2015) highlighted that a lack of flow in the piping could be due to the fan seizing or to an incomplete opening of the shut-off dampers, caused by human error or component breakdown. According to the human reliability analysis, the critical events and the corresponding causes leading to unsuccessful operation of the plant are reported in Table 2.

Fault Tree Analysis (FTA)

In this section, a Fault Tree Analysis (Guarascio et al., 2007) is conducted for the top event identified by the previous HazOp: gaseous argon release in Hall A. For this event to occur (Brighton et al., 1994), both the incomplete opening of the air dampers and the failure of the extraction system must occur; for this reason the FTA has two branches connected by the logic gate "AND": failure of the extraction and ventilation plant, and incomplete opening of the dampers.

Figure 2 The GERDA extraction plant layout.

The system is equipped with liquid leak, temperature variation and oxygen deficiency detectors (Crowl and Louvar, 1990), connected to the corresponding optical and/or acoustic alarms (see Figure 2). In case of failure of the mechanized extraction system, the operator activates the P2 command and then the P3 one. The output of the Fault Tree Analysis is summarized below according to Figure 3. The failure modes are used to evaluate the probability of occurrence of the top event (Khan et al., 2001).

First branch: incomplete opening of the dampers

The main causes of the incomplete opening of the dampers, combined by the logic gate "OR", are: -damage to the mechanical device; -failure of the damper activation system.
The damage to the mechanical device is a "basic event", representing a final cause without sub-events. The failure of the damper activation system could be due to two main causes, combined by the logic gate "OR": -facilities failure; -no signal for the damper opening.
The facilities failure is a "basic event" representing a final cause without sub-events. The absence of a signal for the damper opening could be due to two main causes, combined by the logic gate "OR": -sensor warning failure; -failure in the activation of the opening command.
The sensor warning failure could be due to three main causes, combined by the logic gate "OR": -PLC (Programmable Logic Controller) damage; -sensor damage; -wiring damage. These three events are "basic events" representing final causes without sub-events. The failure in the activation of the opening command could be due to two main causes, combined by the logic gate "OR": -alarm system damage; -human error: wrong reaction to the alarm warning.
Both events are "basic events", and the human error has been analyzed by the Human Reliability Analysis (HRA).

Second branch: failure of the extraction system

The main causes of the failure of the extraction system, combined by the logic gate "OR", are: -ventilation system failure; -absence of electric power supply.
The failure of the ventilation system could be due to two main causes, combined by the logic gate "OR": -fan jam; -electric motor failure. Both these events are "basic events" representing final causes without sub-events.
The absence of electric power supply could be due to two main causes, combined by the logic gate "OR": -error in the start command activation; -lack of energy supply. The error in the start command activation could be due to two main causes, combined by the logic gate "OR": -human error; -alarm system damage. Both events are "basic events", and the human error has been analyzed in the Human Reliability Analysis (HRA). The lack of energy supply could be due to two main causes, combined by the logic gate "OR": -wiring damage; -energy shut-down. Both events are "basic events" for the TOP EVENT and complete the structure of the FTA (a minimal numerical sketch of this gate logic is given below).

The basic events fan jam, wiring damage, energy shut-down, electric motor failure, human error and alarm system damage have the greatest influence on the probability of occurrence of the top event; therefore: -a small improvement of the electric line reliability yields a large improvement of the reliability of the entire system; -given that the top event has occurred, the lack of energy supply has a conditional probability of occurrence equal to 1.
The FTA results suggest the use of a UPS, improved maintenance of the damper opening system, and the optimization of intervention and training procedures.

Event Tree Analysis (ETA)

The Event Tree proceeds with two further branch points, in order to verify the effectiveness of the control and regulation valve systems and of the extraction system, activated both automatically and by the operator. Once argon is released from the cryostat, the main cause of argon emission into Hall A is the failure of the extraction system. Safety measures related to the opening of the dampers consist of the automatic activation of the control and regulation valves and of the system for the extraction of cryogenic gases. Furthermore, the operator can apply these safety measures directly, in redundancy with the automatic system. The experiment is equipped with liquid leak detectors and temperature and oxygen sensors.

In the ETA construction (Guarascio et al., 2013), two branch points have been considered in order to define the effectiveness of the first safety measure: the reliability of the safety measure itself is guaranteed by the correct operation of both the detector and the alarm systems. In the branch related to the correct operation of the alarm system, another branch point concerning the correct reaction of the operator is present.

Branch point related to the correct operation of the detector system

The Event Tree proceeds with a branch point in order to verify the success or failure of the system power-up. In this case, the reaction of the operator is not considered. The ETA leads to the implementation of safety measures to prevent the main event (gaseous argon release): -alerting the operator by means of optical and acoustic alarms in the control room, triggered by the sensors; -full opening of the dampers in the main general exhaust duct, performed by the operator; -starting the extraction electric motor at operating speed in order to convey the vaporized argon, performed by the operator.

Branch point related to the correct operation of the alarm system

Human Reliability Analysis (HRA)

HRA (Lombardi et al., 2014) has been carried out in order to deepen the evaluation of the actions performed by the operator in the control room. The task analysis covers the following steps: recognition and identification of the alarm warning, recall of the suitable operative action, and identification of the extraction system activation button.
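Returning to the fault tree assembled above, the following minimal Python sketch shows how the top-event probability can be combined from basic-event probabilities using the AND/OR gate logic described in the text. The tree structure mirrors the two branches, but the numerical values of the basic events are placeholders, not the figures from the original analysis.

```python
# Minimal sketch of the AND/OR gate logic of the fault tree described above.
# The tree structure mirrors the text; the basic-event probabilities are
# illustrative placeholders, NOT the values used in the original analysis.
# Shared events (e.g., human error) are treated as independent for simplicity.

def gate_or(*probs: float) -> float:
    """OR gate: at least one of the independent causes occurs."""
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

def gate_and(*probs: float) -> float:
    """AND gate: all of the independent causes occur."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Placeholder annual probabilities for the basic events (assumptions).
p_mechanical_damage = 1e-3
p_facilities_failure = 1e-3
p_plc_damage = 1e-4
p_sensor_damage = 1e-4
p_wiring_damage = 1e-4
p_alarm_damage = 1e-3
p_human_error = 1e-2
p_fan_jam = 1e-3
p_motor_failure = 1e-3
p_energy_shutdown = 1e-3

# First branch: incomplete opening of the dampers.
p_sensor_warning = gate_or(p_plc_damage, p_sensor_damage, p_wiring_damage)
p_opening_command = gate_or(p_alarm_damage, p_human_error)
p_no_signal = gate_or(p_sensor_warning, p_opening_command)
p_activation_system = gate_or(p_facilities_failure, p_no_signal)
p_dampers = gate_or(p_mechanical_damage, p_activation_system)

# Second branch: failure of the extraction system.
p_ventilation = gate_or(p_fan_jam, p_motor_failure)
p_start_command = gate_or(p_human_error, p_alarm_damage)
p_lack_of_energy = gate_or(p_wiring_damage, p_energy_shutdown)
p_no_power = gate_or(p_start_command, p_lack_of_energy)
p_extraction = gate_or(p_ventilation, p_no_power)

# Top event: both branches must fail simultaneously (AND gate).
p_top_event = gate_and(p_dampers, p_extraction)
print(f"Illustrative top-event probability: {p_top_event:.2e} per year")
```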
The HRA (Sun et al., 2012) identified the main operational conditions influencing the operator's work and predisposing him to mistakes, such as the following: -excess noise outside the operating room could prevent the alarm from being heard; -a poorly placed optical alarm warning could prevent its identification; -absence of a feedback device informing the operator of the activation of the extraction system; -incongruity between procedures and operational activities; -incomplete layout of the instrumentation; -monotony of the surveillance task.

The event tree related to Human Reliability (Ying et al., 2010) for the reaction to an alarm warning is shown in Figure 4: -lower-case letters represent success of the operation; -capital letters represent the human behavior; -Greek letters represent the operation of the protection systems.
Considering the failure probabilities and the operator's error probabilities (Holmes et al., 1998), the overall human failure probability (G_total) is equal to 0.092 (see Table 4).

Conclusion

The risk analysis procedure (Stamatelatos and Dezfuli, 2011) has been applied to the plants of the GERDA experiment, located in Hall A of the underground Gran Sasso National Laboratory. By combining different and complementary methodologies, the Risk Assessment has provided a better analysis of the system and the related hazards, also accounting for the effect of human behavior on possible failure modes. In particular, the event "argon runaway evaporation" has been analyzed: its probability of occurrence is equal to 10⁻⁴ events/year, and the top event "gaseous argon release in Hall A" is the most critical one.

Based on the analysis (Pasman et al., 2009), three main corrective actions have been adopted in order to increase the reliability of the system. The actions comprise both technical and managerial measures, as follows:
1. Structural intervention: addition of a UPS (Uninterruptible Power Supply) in the extractor electricity supply system; this measure guarantees a relevant reduction of the probability of occurrence of the top event [probability values between 6.26×10⁻² and 1.23×10⁻² events/year].
2. Preventive maintenance of the control, regulation and activation device for the opening of the dampers: it is an electromechanical device subject to failure over its lifetime; periodic inspections and maintenance can reduce the top-event probability.
3. Periodic information and training of the operating staff in charge of controls: continuous training is crucial in order to reduce the probability of mistakes and to ensure a higher level of attention in the staff's operations.

According to the evidence of available statistical reports, non-compliant human behavior is the most relevant element in the causal analysis of system failure (Duijm et al., 2006). Based on the results of this back analysis, numerous control procedures are necessary in order to decrease failure occurrences (Sun et al., 2011). The next step of this research will be the evaluation of human error using integrated fault tree analysis and Ishikawa-model techniques, tested on the procedures usually employed.

Figure 1 Labeled view of the GERDA installation.
Figure 3 Results of the Fault Tree Analysis (FTA).
Figure 4 Human Reliability Analysis: reaction in case of alarm warning.
Table 2 Synthesis of the HazOp Analysis.
Table 3 Characteristics of the system and conditions for the FTA development.
A Review of Numerical Models of Radiation Injury and Repair Considering Subcellular Targets and the Extracellular Microenvironment

Astronauts in space are subject to continuous exposure to ionizing radiation. There is concern about the acute and late-occurring adverse health effects that astronauts could incur following a protracted exposure to the space radiation environment. It is therefore vital to consider the current tools and models used to describe and study the organic consequences of ionizing radiation exposure, and to identify where these models could be improved. Historically, radiobiological models focused on how radiation damages nuclear deoxyribonucleic acid (DNA) and on the role DNA repair mechanisms play in the resulting biological effects, building on the hypotheses of Crowther and Lea from the 1920s and 1940s, and they neglected subcellular targets outside of nuclear DNA. The development of these models and the current state of knowledge about radiation effects impacting astronauts in orbit, as well as how the radiation environment and cellular microenvironment are incorporated into these radiobiological models, aid our understanding of the influence space travel may have on astronaut health.

Introduction

Understanding the risk to astronaut health from exposure to the space radiation environment, including that from high-energy and high-charge (HZE) particles, has been a priority since the beginning of the National Aeronautics and Space Administration's (NASA's) human spaceflight endeavors [1]. The magnitude of radiation exposures and the corresponding effects vary enormously, from a negligible increase in cancer risk after the mission to in-flight death from acute radiation syndrome. For many reasons, space-based research has limited capability to ascertain magnitudes, explain mechanisms, or predict the occurrence of adverse health outcomes in humans [1]. Alongside experimental radiobiology, numerical models aimed at describing radiation effects in humans were developed from studies in which simplified cells composed of deoxyribonucleic acid (DNA), cytoplasm, and cellular membrane were irradiated with gamma- or x-rays [2]. These studies broadly concluded that DNA was the more radiosensitive of the two intracellular compartments and the primary target of relevance to radiation-induced biological effects, excepting hereditary effects. More recent evidence suggests there are multiple biological targets within the cell, but this early experimental evidence set the trajectory of radiobiological model development [3,4]. Cellular models based solely on DNA damage and cell death do not entirely explain, or even agree with, some experimental data, particularly for acute and late effects. A likely explanation for these limitations is that non-nuclear subcellular structures are important but have not yet been explicitly considered [5]. Furthermore, there is a need to integrate modeling of key mechanisms of biological action, including those at the subcellular, cellular, tissue, and organism levels [6].
Many acute and long-term effects can arise from exposure to ionizing radiation. In addition to tissue- and organism-level effects, such as cognitive impairment, acute radiation syndrome, and degenerative tissue diseases, there has been extensive research focused on radiation exposure effects at the cellular and subcellular levels. The tools used for risk stratification of these conditions and outcomes are limited and could be improved upon [7,8]. The National Council on Radiation Protection and Measurements (NCRP) identifies three primary health risk concerns for long-term missions outside Earth's magnetic field: cancerous late effects, noncancerous early effects, and possible effects on the central nervous system from HZE particles [7]. The risk is calculated using equivalent dose and a tissue-specific risk coefficient, both of which are estimations [7]. Equivalent dose is obtained from relative biological effectiveness (RBE)-derived radiation weighting factors for latent or stochastic effects, and the tissue risk coefficient is approximated from the shielding distributions at different points within each organ [7]. The radiation-related cancer risk is well studied and can be quantified with some uncertainty. In comparison, the relationship between low dose rates, like those seen beyond low Earth orbit (LEO), and the risk of long-term noncancerous effects is not well defined. The regulatory bodies for space radiation protection and safety acknowledge the limitations of the conclusions drawn and note that the late biological effects of radiation are unknown and need further study [7].

With treatment planning in the clinical setting, extra care is taken to deliver the maximum dose to a tumor while simultaneously minimizing the dose to the surrounding tissue (i.e., critical organs or tissue structures) [6]. Clinical tools that provide risk stratification of radiation effects are not yet applicable to assessing astronaut health risks from the spaceflight environment [1]. Thus, there are limitations to determining the dose and dose-rate thresholds for noncancerous late biological effects.

To date, there is limited evidence of long-term nonmalignant pathologies manifesting in humans who have flown in space that can be directly attributed to exposure to the space radiation environment [9]. Focusing on nuclear DNA damage and repair mechanisms fails to fully characterize the long-term effects of radiation. Here, we will review the historical development of radiobiological models, the factors affecting radiation sensitivity and resistivity within the cell's microenvironment, and several recent advancements in the radiobiology field, including the role of mitochondria and of nuclear DNA damage and repair in the radiation response. We will also briefly expand on the epigenetic elements involved in radiation effects outside of hereditary factors.

Space Radiation Environment

The space radiation environment in LEO consists of four primary sources: solar wind, solar particle events, Galactic Cosmic Rays (GCRs), and trapped particles in the Van Allen belts [9]. Outside of LEO, GCRs are the primary concern since they are part of the normal radiation environment and are difficult to completely mitigate with spacecraft shielding.
The GCR spectrum consists of relativistic, fully ionized heavy charged particles originating outside the Earth's solar system. The GCR spectrum is composed of approximately 87% hydrogen ions, 12% helium ions, 1% electrons, and 1% HZE [4,10]. The HZE contribution to the GCR spectrum ranges from lithium (Z = 3) up to nickel (Z = 28), with a significant contribution to biological damage to living organisms coming from iron (Z = 26) [9,10]. Shielding against these HZE particles can lead to potential nuclear reactions within the spacecraft material, generating cascades of secondary particles [7]. These secondary particles can then increase astronaut exposure and may confer more risk than the primary radiation [7].

Heavy ion exposures play a critical role in astronaut spaceflight risk assessment. They have a finite range within tissues, with minimal dose deposition until the end of the particle track, where nearly all of their energy is delivered [11]. As a heavy ion barrels through a medium, it loses energy continuously with each interaction, which can impart biological damage [9]. Most of its energy is deposited within a short distance at the end of its path. Figure 1 shows what is referred to as the Bragg peak, which is used advantageously in radiotherapy [9,12,13]. In terms of biological effects, heavier ions (Z > 3) like carbon are more effective at creating irreparable DNA damage, and their efficacy is not dependent on the presence of oxygen, which is a significant advantage when treating radioresistant hypoxic tumors [11].
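To illustrate the depth-dose behavior just described, the Python sketch below draws a schematic Bragg curve from the textbook range-energy power law (R ≈ αEᵖ). The parameter values and the simple continuous-slowing-down treatment are illustrative assumptions, not data from this review.

```python
# Schematic Bragg curve from the range-energy power law R = alpha * E**p.
# Parameter values are illustrative (roughly proton-like); this is a sketch of
# the qualitative depth-dose shape, not a validated transport calculation.
import numpy as np
import matplotlib.pyplot as plt

p = 1.77        # exponent of the range-energy relation (assumed)
alpha = 0.0022  # range coefficient in cm/MeV**p (assumed)
E0 = 150.0      # initial kinetic energy in MeV (assumed)

R = alpha * E0**p                      # total range in cm
depth = np.linspace(0.0, R * 0.999, 500)

# Dose is proportional to the stopping power -dE/dx; with R - x = alpha*E**p,
# -dE/dx ~ (R - x)**(1/p - 1), which grows sharply near the end of range and
# produces the Bragg peak.
dose = (R - depth) ** (1.0 / p - 1.0)
dose /= dose[0]  # normalize to the entrance dose

plt.plot(depth, dose)
plt.xlabel("Depth in water (cm)")
plt.ylabel("Relative dose")
plt.title("Schematic Bragg curve (illustrative parameters)")
plt.show()
```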
Figure 1. This maximum dose deposition is referred to as the Bragg peak, which is used advantageously when treating patients with heavy charged particles [13]. Graph adapted from Wilson [14].

Quantifying Radiation Damage

A radiation particle can interact with its environment and lose all or some of its energy to the medium. If the energy is great enough, an electron of one of the medium's atoms can be ejected from its orbital shell. The atom is then ionized and can continue interacting with the medium and cause damage. There are two ways for radiation-induced damage to occur: directly and indirectly. Direct action occurs when the projectile reacts with the target (e.g., DNA). Indirect action refers to a particle that hits near the target in the microenvironment, generating free radicals, or ionized atoms, that chemically react with the target [15]. Free radicals are atoms or molecules with an unpaired orbital electron, making them highly chemically reactive. The spatial energy distributions of x- or gamma-ray and heavy-ion irradiations are very different. Photon interactions are randomly distributed across the cellular volume, and as a result the ionization density is assumed to be homogeneous [16]. In contrast, the spatial distribution of energy from heavy ions is more localized, which results in more damage to the volume and a lower probability of repair (primarily DNA-strand break repair), and thus a larger biological effect [16].

Linear Energy Transfer (LET) describes the energy a particle transfers per unit length of track as it traverses a medium [2]. High-LET particles, such as carbon ions, are more densely ionizing along their paths than low-LET particles, e.g., secondary electrons liberated by x- or gamma-rays [12,17]. This can result in more damage within the medium through high-density clusters of ionization and biological damage. Research suggests that direct action is the dominant process responsible for the more concerning biological effects of space radiation exposure related to DNA damage and the probability of cell death or misrepair of strand breaks.
How the cellular response relates to LET depends on how the average LET is specified. There are two conventions: track-averaged and energy-averaged LET. For track-averaged LET, one divides the particle's path into segments of equal length and reports the average energy transferred per segment. For energy-averaged LET, one partitions the path into increments of equal energy loss and then averages the LET over these iso-energy-loss path lengths [2,18]. The choice of average LET can sometimes make a big difference. While both averages yield similar results for x-rays and monoenergetic charged particles, neutrons are better described by the energy-averaged LET [2]. Furthermore, as LET increases, the variability of an ionizing particle's lethality across the cell cycle decreases, so that radiosensitivity appears independent of the cell cycle at higher LET [19]. In space, astronauts are exposed to continuous high-LET radiation environments at low fluence rates (i.e., low numbers of particles per area of interest) for protracted periods. The radiobiological tools used to describe the long-term effects of protracted low-dose exposure are limited, especially when quantities such as average LET are defined inconsistently across different experimental and research analyses.

Another method used to describe the efficacy of a radiation type is the RBE: the ratio of the absorbed dose of a specified standard x-ray radiation (e.g., 250 kVp x-rays) to the absorbed dose of the radiation under study that produces the same biological effect [20,21]. In the field of radiation protection, the estimation of the biological response uses weighting factors that depend on the particle type and on the radiosensitivity of the organ the particle is traversing [16]. These weighting factors, also referred to as quality factors, are conservative upper limits assigned by the International Commission on Radiological Protection (ICRP) that overestimate radiation effects [16,22]. The trend of RBE against LET for heavy ions, based on the clonogenic death of mammalian cells irradiated in culture, initially shows a positive correlation up to about 100 keV/µm. Past this threshold an inverse relationship is seen, as shown in Figure 2, which has been linked to double-strand DNA breaks [23]. This is referred to as the overkill effect, or overkill phenomenon, in which additional deposited energy is wasted [24]. Figure 2 shows how the RBE maximum varies with particle type, but the overkill effect can be seen to occur at approximately 100 keV/µm [25].
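The distinction between the two LET averages can be made concrete with a short sketch. Assuming a particle track recorded as a list of (segment length, energy deposited) pairs, with hypothetical values, the two averages are computed as follows.

```python
# Track-averaged vs energy-averaged LET from a discretized particle track.
# The track data below are hypothetical; the averaging definitions follow the
# conventions described in the text (length-weighted vs energy-weighted).

segments = [
    # (segment length in um, energy deposited in keV) -- illustrative values
    (1.0, 20.0),
    (1.0, 35.0),
    (1.0, 60.0),
    (1.0, 140.0),  # near end of range: dense energy deposition
]

def track_averaged_let(track):
    """Length-weighted mean LET: average of E/l weighted by segment length."""
    total_length = sum(l for l, _ in track)
    return sum((e / l) * l for l, e in track) / total_length

def energy_averaged_let(track):
    """Energy-weighted mean LET: average of E/l weighted by energy deposited."""
    total_energy = sum(e for _, e in track)
    return sum((e / l) * e for l, e in track) / total_energy

print(f"Track-averaged LET:  {track_averaged_let(segments):.1f} keV/um")
print(f"Energy-averaged LET: {energy_averaged_let(segments):.1f} keV/um")
# The energy average is pulled toward the dense end-of-track segments,
# so it exceeds the track average for this nonuniform deposition.
```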
This simplistic definition of RBE, the "same biological effect", can refer to the likelihood of a particle's lethality and the probability of it producing nuclear DNA double-strand breaks [23]. RBE can also be a measurement used to describe nonlethal radiation effects outside of DNA double-strand breaks within the context of space radiation. Nevertheless, the uncertainties are significant and increase directly with increasing RBE [25-27]. The error within RBE is due to the variability in stochastic processes (i.e., those involved in measuring cellular damage) and radiosensitivity (i.e., affected by cell type, radiation type, and microenvironment). Despite empirical evidence, RBE is roughly estimated because of these factors [27]. An error that is too large in the RBE allocation can result in underdosage or overdosage of the tumorous or normal tissues [28]. The guidelines for RBE are set in part by the International Commission on Radiation Units and Measurements (ICRU) and by a country's legal limitations [28]. ICRU Report 78 requires that the contribution of the error to the prescribed dose by RBE fall within −5% to +7%, and that an RBE of 1.1 be used for proton therapy at all dose levels, regardless of the factors contributing to the variability [28].

Figure 2. Relationship between relative biological effectiveness (RBE), clonogenic cell death, and LET for mammalian cells irradiated with carbon, neon, and iron ions [2,23,25]. Around 100 keV/µm along the LET axis there is a peak, after which the RBE not only fails to increase but declines. Figure adapted from Sørenson [25].

In 2008, Cucinotta et al. conducted a study measuring the organ dose equivalents of astronauts aboard the International Space Station (ISS) [30]. They reported that about 80% of the radiation contribution was from GCRs and that the uncertainties in radiation outcomes are compounded by how broadly radiation quality and dose-rate effects are determined [30]. The average effective doses for astronauts aboard the ISS, where exposures were modified by the shielding of the spacecraft's walls, did not consider extravehicular activities in the data analysis.

RBE is an essential element in heavy-ion radiotherapy treatment planning, which focuses on tumor control and minimizing damage to the surrounding normal tissues. "Clinical RBE" describes the ratio of prescribed absorbed doses between a photon and a high-LET radiation particle that results in clinically equivalent outcomes [31]. Heavy-ion irradiation is empirically found to be more biologically effective than photons at the same absorbed dose. Clinical RBE values are adjusted based on medical expertise and decisions, because the RBE within the tumor volume may vary [16,31]. RBE within an ion beam varies and is compensated for by modulating the absorbed dose according to RBE model predictions. Thus, clinical RBE is implicitly determined by the model used and the input parameters of the radiation environment. Experimentally, RBE can be measured under specific irradiation conditions, but these same conditions do not hold within a patient [31]. Between acute-reaction (i.e., caused by cellular depopulation) and late-reaction (i.e., driven by chronic inflammation) tissues, proton RBE data from in vitro and acute-reaction in vivo experiments are more likely to underestimate RBE in late-reacting tissues [29].

Incorrect dosages given in the clinic can have significant biological consequences, especially on the probability of causing late-developing malignant (i.e., cancerous) or nonmalignant (e.g., progressive fibrosis, vascular insufficiency, etc.)
diseases in treated patients [29]. This is also a concern in the realm of space radiation exposure. The treatment planning technology available at the time the guidelines were developed could not easily adjust for a variable RBE [29]. Rather than as a definitive indication of whole-organ or whole-body nonmalignant pathological outcomes, RBE is better used to describe the frequency and presence of lesions created on DNA strands by ionizing radiation that result in the cell culture's inability to continue proliferating and its eventual death [27].

In the following section, the concept of a "sensitive volume" and the assumptions made regarding radiation effects will be discussed in the context of radiobiological model development. Based on the simplified experiments that identified nuclear DNA as the primary target, or "sensitive volume", of ionizing radiation, there is a variability in experimental results that has yet to be well defined. The error in radiation effects and RBE seen clinically and experimentally could stem from this lack of definitive evidence that there is only one target and that it is nuclear DNA. The possibility that multiple sensitive volumes are responsible needs further investigation.

Radiobiological Numerical Models

The cell is the fundamental building block of human tissues and organs. Development of the first models describing the biological effects seen following ionizing radiation exposure began before the "sensitive volume" within the cell was identified. Soon after the first radiographic image was taken, scientists attempted to model and explain the physiological, biological, and chemical phenomena at the subcellular level following irradiation of an organic medium [6]. The first application of target theory was developed in 1924 by Crowther and improved upon by Lea. Target theory dominated the field of radiobiology until the 1980s and had two subcategories: the Single-Target Single-Hit (STSH) model developed by Crowther and the Multi-Target Single-Hit (MTSH) model from Lea [32]. The term "hit" refers to the ionizing radiation particle interacting with the medium and depositing dose into the sensitive volume within the target cell. Crowther found an exponential loss in biological activity following exposure to ionizing radiation [33,34]. This biological activity, also called cell survival, refers to the cell's ability to continue proliferating after irradiation, and is given by

$S = S_0 e^{-VI}$,   (1)

with $S_0$ the initial percentage of viable cells, $V$ the "sensitive volume", and $I$ the ionization density determining cell survival. An exponential relationship between radiation exposure and cell survival was expected: it was assumed that the administered radiation would enter the sensitive volume V and inactivate the cell. Crowther's method used roentgens, a unit that better described the ionization of air and does not hold for condensed media [33]. Eukaryotic cells are composed of organelles communicating with the cell's internal and external environments. Exposure, or the ionization of air, was a reasonable quantity for the intent and purpose of Crowther's experiments, but quantities defined for effects in air are insufficient to describe complex biological damage. Lea extended the sensitive-volume concept within target theory with the MTSH model. He theorized that the inactivation of the tested organic samples was related to the formation of lethal mutations and that there were multiple targets within V [35,36]. From irradiating bacteria and viruses, he
made assumptions that cell killing was a multi-step process: there needed to be an absorption of energy within a sensitive volume; lesions in the cell were created by the energy deposition; and, in a subsequent step, these lesions resulted in the cell's inability to proliferate [5,35]. Target theory models successfully described radiation effects in some microbiological systems but failed to describe radiation effects seen in higher plant types and mammalian samples [5,27].

Experimentally, it was noted that more complex cells had a higher radiosensitivity than bacteria and viruses, and that an initial dose to a sample does not always result in an exponential relationship with clonogenic cell survival. The dose range over which there is a delay in lethal damage to the cell is known as the quasi-threshold dose (D_q) and describes the "shoulder" of the cell survival curve, which can be seen in Figure 3 [12]. Building from the fundamental assumption that each ionization causes damage to the molecular structure, Lea derived an equation that involved the molecular mass instead of a volume together with D_q, which jointly describe the shoulder portion of the survival curve [33], where D is the dose, M is the molecular mass, and N is the number of hits within the target. This model assumed that no "hits" meant cell survival, that each target had an equal probability of being hit by ionizing radiation, and that a single hit was enough to inactivate the target [32,35]. The MTSH model appropriately follows experimental data in high dose ranges and is described by single- or multi-event killing, shown by the curved and linear portions of the survival curve, respectively [35].

Figure 3. Comparison of the STSH and MTSH models [12]. D_0 describes the slope of the curve's linear portion, and D_q gives the approximate dose range, or width, of the curve's shoulder. A linear slope on these semi-logarithmic graphs describes an exponential relationship. The STSH behavior, shown on the left, was expected, but a shoulder appears in the data instead, as depicted on the right. Graphs adapted from Joiner and Kogel [12].
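A compact way to see the difference between the two target-theory forms is to plot them. The sketch below uses the textbook expressions for single-target single-hit and multi-target single-hit survival; the D0 and N values are illustrative assumptions.

```python
# Textbook STSH and MTSH survival curves (illustrative parameters).
# STSH: S = exp(-D/D0); MTSH: S = 1 - (1 - exp(-D/D0))**N, which reproduces
# the "shoulder" before settling into an exponential tail.
import numpy as np
import matplotlib.pyplot as plt

D = np.linspace(0.0, 10.0, 200)  # dose in Gy (illustrative)
D0 = 1.5                         # dose reducing survival to 37% (assumed)
N = 3                            # effective number of targets (assumed)

stsh = np.exp(-D / D0)
mtsh = 1.0 - (1.0 - np.exp(-D / D0)) ** N

plt.semilogy(D, stsh, label="STSH: exp(-D/D0)")
plt.semilogy(D, mtsh, label=f"MTSH: 1-(1-exp(-D/D0))^{N}")
plt.xlabel("Dose (Gy)")
plt.ylabel("Surviving fraction")
plt.legend()
plt.title("Target-theory survival curves (illustrative)")
plt.show()
```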
Radiosensitivity may vary depending on the phase of the cell cycle in which the radiation is received, the dose rate, and the microenvironment in which the reactive oxygen species (ROS) are created [2,37]. Cell proliferation occurs via a cycle of mitosis, cell division, and DNA synthesis [38]. Since radiosensitivity is independent of the cell cycle at higher LET, the models used to explain cell reactions to ionizing radiation are limited to abnormal tissue behaviors, such as cancerous cells, which have a faster proliferation rate, again cementing these models' application in the clinical setting [23]. The limitation of the single- and multi-target models is that they do not match experimental data in the lower dose range: no shoulder was expected on the curve at low doses, whether for radiotherapy or space radiobiology purposes, but one is observed.

Chadwick et al., 1973, were among the first to incorporate subcellular components into the numerical modeling [19]. This approach was termed the molecular theory and encompassed a broad class of radiobiological models considering subcellular processes in a cell's reaction to irradiation. The model allowed insight into the radiobiological variability seen in irradiation experiments and assumed that the damage to the critical structures within the cell affecting reproduction was to double-strand nuclear DNA [19,39]. It was hypothesized that what was seen experimentally was due to the cell's ability to repair DNA damage from irradiation and so combat the lethality of the administered dose. Single-strand breaks would be appropriately repaired, while double-stranded DNA damage would likely lead to permanent cellular damage [19]. The purpose of the molecular theory model, shown in Equation (2), was to connect the physical and biochemical experimental observations; however, it bypassed several intracellular molecular functions and focused on repair mechanisms specific to nuclear DNA [39]. In Equation (2), p is the proportionality factor connecting DNA double-strand breaks to cell death, f is the proportion of DNA double-strand breaks not repaired, n is the number of sites, k is the dose per site needed to result in double-strand breaks, Δ represents the probability of a single-event double-strand break, ε is the proportion of broken bonds that are DNA double-strand breaks, and D is the dose administered.
The lethal-potentially lethal (LPL) model formulated by Curtis built further upon the foundation that nuclear DNA is the primary target of ionizing radiation. Figure 4 visualizes the LPL model with η as the implicit dose rate and ε as the repair rate, which is assumed constant; the repair mechanisms work at a fixed rate regardless of the concentration of damaged DNA. Potentially lethal (PL) lesions may be repaired, returning the cell to the viable, pre-damaged state, or become lethal (L) lesions that result in cell death. The numerical description of the LPL model is given in Equation (3), in which ε and η are implicitly the repair and dose rates, respectively, and t_r is the total repair time available after exposure to ionizing radiation.

Figure 4. The LPL model was built on the assumption that the cell's repair rate of DNA strand breaks is fixed and that the dose rate can be variable [40]. η represents the implicit dose rate and ε is the repair rate. Dose to viable cells can produce potentially lethal (PL) lesions, which may be repaired; if misrepaired, or if the repair is not fast enough to keep up with the rate of lesion production, the PL lesions can become lethal lesions and result in clonogenic cell death. If the repair rate is greater than the dose rate, and if any misrepairs do not impede the cell's ability to proliferate, then the PL lesions are resolved and the cell returns to its viable state [12]. Figure adapted from Joiner et al. [12].
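The compartmental picture in Figure 4 can be sketched numerically. The following is a minimal toy integration of LPL-style kinetics, assuming production of potentially lethal lesions proportional to the dose rate, first-order repair, and a second-order binary-misrepair term that converts PL lesions to lethal ones; all rate constants are invented for illustration and are not Curtis's fitted parameters.

```python
# Toy integration of LPL-style lesion kinetics (illustrative only).
# Assumptions: potentially lethal (PL) lesions are produced at eta*dose_rate,
# repaired correctly at first-order rate eps, and converted to lethal lesions
# by a second-order (binary misrepair) term kappa*n_pl**2, echoing the
# lesion-interaction idea in the LPL model. All rate constants are invented
# for illustration; they are not Curtis's fitted values.

def simulate_lpl(dose_rate, t_irradiation, t_repair, dt=0.001,
                 eta=1.0, eps=0.5, kappa=0.05):
    n_pl = 0.0  # potentially lethal lesions
    n_l = 0.0   # lethal lesions
    t, t_end = 0.0, t_irradiation + t_repair
    while t < t_end:
        production = eta * dose_rate if t < t_irradiation else 0.0
        binary_misrepair = kappa * n_pl * n_pl
        n_pl += (production - eps * n_pl - 2.0 * binary_misrepair) * dt
        n_l += binary_misrepair * dt
        t += dt
    return n_pl, n_l

if __name__ == "__main__":
    # Same total dose delivered acutely vs protracted: the acute exposure
    # builds a higher PL concentration, so binary misrepair produces more
    # lethal lesions, reproducing the qualitative dose-rate effect.
    for rate, duration in [(10.0, 1.0), (1.0, 10.0)]:
        n_pl, n_l = simulate_lpl(rate, duration, t_repair=8.0)
        print(f"dose rate {rate:4.1f} for {duration:4.1f} h -> "
              f"residual PL {n_pl:.3f}, lethal {n_l:.3f}")
```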
Combining the probabilities and concepts of molecular theory and the LPL model, the linear quadratic (LQ) model was subsequently derived. This model has several limitations that have persisted into the development of modern numerical models. Although these models hold for the irradiation of individual cells in vitro and in vivo, there are limits to their validity at low dose rates [12]. Furthermore, the time dependence is implicit and may only describe the presence or absence of cell inactivation [12,41]. The experimental data described by this model were obtained from yeast, bacterial, and viral samples irradiated in vitro; the model is defined by the following equation:

$S = e^{-(\alpha D + \beta D^2)}$,   (5)

with S the fraction of irradiated cells that can continue proliferating and D the total radiation dose administered [32,36]. The αD component describes single hits on the DNA strands, while the βD² term describes multiple hits [32,36]. The LQ equation should produce a continuously curving relationship, which does not match experimental data for prolonged radiation dosage, since there is a linear portion to the curve [36]. The curve in Figure 5 shows the relationship between the dose given and the resulting proportion of surviving cells, and the difference in radiation effectiveness for clonogenic death at varying dose levels and particle types.

The α term is also used to refer to lethal damage from a single particle, and the β term is reserved for the accumulating lethal damage caused by more than a single particle track [36]. The α/β ratio corresponds to the dose at which the two types of damage are equal and is often used to characterize the radiosensitivity of cell lines. Cell lines with a low α/β ratio have a more pronounced curvature in the cell survival graph, while cell lines with a higher α/β ratio show a more constant rate of cell killing as the dose increases [36]. How a cell survival curve, or response curve, is produced, and whether the cell line is "high α/β" or "low α/β", is contingent upon the radiation conditions and potentially on the microenvironment and type of cell. Changes to the cell cycle and target environment for low-LET
radiation have been shown to cause shifts in the cell cycle and to change a cell population from "high α/β" to "low α/β" [36]. This comparison of how different cell line types can produce different cell survival curves can be seen in Figure 6.

Figure 6. As the dose increases, the surviving fraction decreases, but the severity and concentration of double-strand breaks are variable between radiation types and cell lines. The lower the α/β ratio, or the higher the particle's LET, the more likely it is that double-strand breaks will occur from a single particle interaction as it traverses the biological medium [36]. Graph adapted from McMahon [36].

As the dose increases, the surviving fraction decreases, but the severity and concentration of double-strand breaks vary between types of radiation and cell lines. The α/β ratio describes the type of damage the ionizing radiation is capable of at each dose in relation to the cell line type and is reflected in the curvature of the graph [36]. The α term is defined as the probability of cell death from a single incident particle causing lethal damage, while β reflects the probability of lethality from multiple hits [36]. It should be noted that cell survival curves for individual cells or asynchronous cell populations differ from the irradiation behavior of synchronous cells, such as tissue. As a result, the LQ model has very low accuracy when describing impacts on cellular systems.

Further evidence is needed to establish how damage to subcellular structures beyond nuclear DNA, the cell's type, or the cellular microenvironment can alter the α/β ratio of a cell population. More models emerged from the LQ model to explain the biological phenomena of cell survival and to improve accuracy, such as the repair-misrepair (RMR) model, the saturable repair model, the two-lesion kinetic model, and the repair-misrepair-fixation model [32,35]. These models failed to directly link radiation damage to double-strand breaks for all radiation and cell line types [42].
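Since the LQ model anchors much of the discussion here, a short sketch may help. The code below evaluates LQ survival for two illustrative cell lines and reports the dose at which single-track and multi-track damage contribute equally (D = α/β); the α and β values are assumptions chosen only to contrast "high α/β" and "low α/β" behavior.

```python
# LQ survival S = exp(-(alpha*D + beta*D**2)) for two illustrative cell lines.
# The alpha/beta values are assumptions chosen to contrast "high alpha/beta"
# (nearly log-linear) with "low alpha/beta" (pronounced shoulder) behavior.
import numpy as np

def lq_survival(dose, alpha, beta):
    """Surviving fraction under the linear quadratic model."""
    return np.exp(-(alpha * dose + beta * dose**2))

cell_lines = {
    "early-responding-like (alpha/beta ~ 10 Gy)": (0.30, 0.03),
    "late-responding-like  (alpha/beta ~ 3 Gy)":  (0.15, 0.05),
}

doses = np.array([1.0, 2.0, 4.0, 8.0])
for name, (alpha, beta) in cell_lines.items():
    ab_ratio = alpha / beta  # dose at which alpha*D equals beta*D**2
    print(f"{name}: alpha/beta = {ab_ratio:.1f} Gy")
    for d, s in zip(doses, lq_survival(doses, alpha, beta)):
        print(f"  D = {d:3.1f} Gy -> S = {s:.3f}")
```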
The two-lesion kinetic (TLK) model aimed to connect the biochemical processes of double-strand break repair with an ionizing radiation's lethality [42]. This model considers the variability in cellular DNA repair mechanisms and the fact that these repair systems saturate the microenvironment at higher dosages. The TLK model also differentiates between the two types of DSBs and, while similar to the LPL and RMR models, it accounts for the local complexity of the damaged site [42]. It can incorporate more parameters into its formalism, allowing for better agreement with experimental data while introducing additional complications.

The repair-misrepair-fixation (RMF) model combines the LPL and RMR models with microdosimetry concepts into a predictive model [43]. Through this combination, the RMF model considers the intra- and inter-track binary misrepair of DSBs and relates this damage to RBE [44]. However, as previously discussed, RBE is inadequate for describing radiation effects beyond the particle's lethality. Furthermore, the RMF model assumes that the interactions of ionizing radiation with nuclear DNA resulting in double-strand breaks affect the nucleus as a whole [43].

There are continuing attempts to incorporate more modern tools into the radiobiological models. The Monte Carlo damage simulation (MCDS) has been combined with the RMF model, dosimetric data, and a Monte Carlo radiation transport model to improve the formalism for cell survival prediction [45,46]. This method can predict some double-strand and single-strand break and repair behaviors and can be applied to hypoxic microenvironments with differing types of ionizing radiation. Additionally, the local effect model (LEM) and the microdosimetric kinetic model (MKM) are two models that aim to directly correlate energy deposition with subsequent cellular effects [36]. These models were developed to connect the deposited energy with the radiation-induced biological effect, though their usage remains primarily limited to radiotherapy. The field of space radiobiology currently relies on the LQ model, and the potential for applying these newer models has yet to be seen. Importantly, radiobiological models are built on the hypotheses of Crowther and Lea and the common assumption that nuclear DNA is the only target of concern when studying radiation effects. However, if this were the case, it can be argued that the survival curves of different cells should be very similar in shape and slope.
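As a bridge between the target-theory assumptions just questioned and the Monte Carlo approaches mentioned above, the following sketch estimates clonogenic survival by sampling Poisson-distributed lethal events per cell. The mean-lesion-yield parameter is an assumption; the point is only that the simulated survival converges to the analytic exponential exp(-mean yield).

```python
# Monte Carlo estimate of clonogenic survival under the simplest target-theory
# assumption: lethal events per cell are Poisson distributed, and a cell
# survives only if it registers zero lethal events. The lesion-yield
# coefficient is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(seed=42)

def mc_survival(dose, lethal_yield_per_gy=0.5, n_cells=100_000):
    """Fraction of simulated cells with zero Poisson lethal events."""
    mean_events = lethal_yield_per_gy * dose
    events = rng.poisson(mean_events, size=n_cells)
    return np.mean(events == 0)

for dose in [0.5, 1.0, 2.0, 4.0]:
    simulated = mc_survival(dose)
    analytic = np.exp(-0.5 * dose)  # P(0 events) for a Poisson process
    print(f"D = {dose:3.1f} Gy: MC survival {simulated:.4f}, "
          f"analytic exp(-0.5*D) = {analytic:.4f}")
```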
In vitro radiation studies of irradiating cancerous and noncancerous mammalian cells have confirmed higher radiosensitivity than bacteria and viruses, shown by an increased slope in their survival curves [47]. The resulting survival curve comparison between cell types can be seen in Figure 7, where there are distinct slope and shoulder width differences. While prokaryotic cells lack distinct nuclei and organelles, eukaryotic cell structure is much more complex [48]. Mammalian cells have a nucleus, and most contain mitochondria with mitochondrial DNA, as well as other large subcellular structures. Since the radiosensitivity of mammalian cells is greater, there could potentially be additional targets within the cell outside of nuclear DNA, a theory in need of further investigation [47]. Contributors to the early models stated that more survival curve analyses were necessary to prove conclusively that nuclear DNA is the primary target of radiation. The need remains. Recently, space radiation protection and guidelines noted a need for more data collection on subcellular target radiation effects, and for the validation of computational transport models [7,32,36].

DNA Repair Mechanisms

DNA comprises two strands of a sugar-phosphate backbone connected by four nitrogenous bases, or base pairs (bp) [2]. The order and pairings of the bases and these chains of molecules determine the task for each cell and the overall genetic blueprint of the organism [50]. AP, or apurinic and apyrimidinic, sites are where lesions are present and can hinder DNA replication and transcription processes [51,52]. Ionizing radiation interacting with an organism's cellular structures commonly triggers a stress response in which free radical production increases, altering the redox state and/or cellular homeostasis [53]. The presence and elimination of free radicals are part of an organism's normal biological function, even in the absence of irradiation. The chain reaction of free radical mechanisms converts nutrients into chemical energy. It maintains redox homeostasis, the cellular function of response and feedback, and is part of maintaining a physiologic steady state [53]. However, when radiation exposure triggers a stress response, tissue inflammation can occur to remove diseased and damaged cells and prompt tissue repair mechanisms. During prolonged, continuous exposure to ionizing radiation, the biochemical processes that maintain homeostasis can malfunction, depending on the dose rate. This altered cellular environment affects the subcellular response, leading to more than an acceptable amount of DNA misrepair, secondary oxidative stress responses, deficiencies in DNA repair enzymes, and mutations, and can ultimately result in cell death [53,54].
Changes to the cellular microenvironment from ionizing radiation interactions can damage subcellular structures like nuclear DNA by creating reactive oxygen species (ROS) or free radicals. ROS can take the form of the superoxide anion (O2−) and the hydroxyl radical (•OH), which can subsequently form the hydrogen peroxide molecule (H2O2) [55]. ROS are involved in a cell's normal function. Approximately 10^4 lesions per cell from endogenous ROS formed in normal cellular processes are expected to occur daily [56]. Ionizing radiation adds to the number of lesions present, or damage load, and at large doses may overwhelm the cell's antioxidative defenses [57]. These free radicals can be created directly by ionizing radiation interactions or can arise from the cell's response to repair the damage from those interactions. As the organism ages, the lesions present on the DNA strands may also accumulate, and if misrepaired, these damaged sites can lead to DNA mutations and dysregulated cellular function [58]. Cell survival experiments suggest that the effectiveness of the cell's repair mechanisms decreases with increasing LET [46]. At the same dose rates, high-LET particles cause more oxidative clustered DNA lesions than low-LET radiation sources, since there is a higher percentage of irreparable and more complex double-strand breaks present [59,60]. There are observed repair mechanisms supporting some of the postulated radiobiological models. Once a single-strand break (SSB) or double-strand break (DSB) has been created in the target, processes are simultaneously triggered to repair the breaks.

Between low- and high-LET damage to DNA, the resultant type of lesion will cause the cell to favor one repair pathway over the other [54]. Depending on the distance between lesions along the DNA strands, the resulting breaks are categorized as DSB or SSB. For example, the term "clustered DNA lesions" is ascribed to multiple damaged sites within 20 bp of each other; these can be produced via endogenous or exogenous sources, such as ionizing radiation, normal cell function, or chemical toxicity [58]. When such multiple damaged sites are bunched across a short length, they may be more difficult to fully and faithfully repair through homologous recombination repair (HRR), since more of the DNA sequence will likely be lost, making clustered DNA lesions the most lethal form of all DNA damage caused by ionizing radiation [61]. Nonhomologous end-joining (NHEJ) is suggested to be the primary repair mechanism for high-LET DNA damage [54]. DSBs induced by ionizing radiation have blunt double-strand ends or short single-strand ends, which can be repaired via NHEJ [62].
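The 20 bp criterion above lends itself to a simple illustration. The sketch below groups lesion positions along a strand into clusters whenever consecutive lesions fall within 20 bp of one another; the coordinates and the function name are invented for the example.

```python
def group_clustered_lesions(positions_bp, window_bp=20):
    """Group lesion positions (in base pairs) into clusters, where any two
    consecutive lesions within `window_bp` of each other share a cluster."""
    clusters = []
    for pos in sorted(positions_bp):
        if clusters and pos - clusters[-1][-1] <= window_bp:
            clusters[-1].append(pos)   # extend the current cluster
        else:
            clusters.append([pos])     # start a new cluster
    return clusters

# Hypothetical lesion coordinates along a DNA strand (bp).
lesions = [102, 110, 118, 450, 1290, 1302]
for cluster in group_clustered_lesions(lesions):
    label = "clustered" if len(cluster) > 1 else "isolated"
    print(f"{cluster} -> {label}")
```

Here the lesions at 102, 110, and 118 bp form one cluster, the lesion at 450 bp is isolated, and the pair at 1290 and 1302 bp forms another cluster, mirroring the distinction between sparse damage and the densely clustered damage typical of high-LET tracks.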
Compared to DSBs, SSBs with ample space between events have a higher chance of repair. The most common repair method for SSBs is base excision repair (BER), which also plays a role in the epigenetic regulation of gene expression [63,64]. BER restores the complementary nature of the bases in opposite DNA strands and is the most versatile of the repair pathways [51,52,61,65]. For a BER process to be successful, it needs to result in no significant change to the nuclear DNA strand's radiosensitivity [56]. Low-LET radiation is more likely to cause such sparsely clustered lesions, and this type of damage can utilize either NHEJ or HRR, whereas high-LET interactions and damage tend to be more densely clustered and, therefore, more complex in their repair [54]. Notably, the impact of clustered damage and DSBs on the cell's ability to proliferate depends on the efficacy of the cell's repair mechanisms, the dose rate, and the type of radiation [46].

Mitochondrial DNA

Recent research suggests a link between mitochondrial DNA (mtDNA) and radiation effects, but the more commonly used radiobiological models, like the LQ model, do not take mtDNA into account [66]. Mitochondria play a significant role in cellular energy production and are involved in the cell's metabolism and oxidative stress control, as well as cellular death [67]. The DNA within mitochondria, referred to as mitochondrial DNA, can affect the risk of certain cancers and neurological diseases, and negatively influence the aging process [68,69]. Its importance challenges the assumption that nuclear DNA is the only subcellular structure of interest in radiobiological models [66]. The primary role of mtDNA is to prepare for oxidative phosphorylation, a more efficient metabolic state for generating cellular energy [69]. In cases of repair, the microenvironment of the mitochondrion differs from that of nuclear DNA [56,70]. It should also be noted that the metabolism of mitochondria has been implicated in bystander radiation effects, but more research is needed to confirm a direct link [66].

Different cell types can be more susceptible to oxidative damage. For example, mtDNA is more vulnerable to oxidative damage and will mutate at a greater rate than nuclear DNA when damaged. This is because its proximity to the electron transport chain increases the chance of accumulating toxic ROS [56]. When the cellular environment's redox stasis is imbalanced by an increased ROS level, this can lead to mitochondrial dysfunction and trigger intra- and extracellular distress signaling [69]. Oxidative stress, cellular respiration levels, and the mitochondrion's metabolism respond to the environment by undergoing a morphological change to regulate repair. The mitochondrial double membrane can go through fission and fusion actions to restore its function. The fission process allows the isolation and separation of the damaged proteins within the organelle.
In contrast, the fusion process mixes partially and fully functional mitochondria to create more fully functional ones [69]. Because they consume and produce more oxygen species than other cell types, neuronal and muscular cells are more susceptible to oxidative stress effects [57]. Tissues with a higher concentration of mtDNA are expected to have a higher sensitivity to oxidative damage and are more likely to develop mutations and deletions that affect ATP production. Adenosine triphosphate, or ATP, is the molecule involved in cellular energy generation and in the production of RNA, or ribonucleic acid, which aids in carrying out instructions from nuclear DNA [57]. Accumulation of mtDNA mutations may be linked to neurodegenerative diseases, such as amyotrophic lateral sclerosis [57].

Since each cell has multiple copies of mtDNA, it was suggested that strand breaks might not affect overall mitochondrial function [71,72]. While not as thoroughly researched as nuclear DNA repair, mtDNA studies show similar repair mechanisms. The typical nuclear DNA DSB repair pathway, NHEJ, was undetectable in mammalian mitochondria, but microhomology-mediated end joining, or MMEJ, was active [73]. MMEJ is a DSB repair mechanism that employs microhomologous, or similar short-sequence, base regions [74]. Ionizing radiation causes oxidative stress, leading to mtDNA mutations and deletions. It is suggested that the damaged molecules undergo degradation after a DSB in mtDNA. This will only happen if a small amount of mtDNA is damaged, because certain cell types can have up to thousands of mtDNA molecules, and the loss of a few would not compromise function [75]. So compromised cellular operation due to mtDNA damage, as suggested, is unlikely but still possible.

The ISS is within LEO, and those aboard the station benefit from the added radiation protection that the Earth's magnetosphere provides [76,77]. A 2019 study of monozygotic twin astronauts, one of whom spent nearly a year aboard the ISS, did not conclude that genes were altered in-flight compared to pre-flight and post-flight samples, or that there were increased numbers of DNA damage response (DDR) pathways; the DDR describes the cell's process of repairing and replicating DNA and continuing through cell-cycle checkpoints [77]. The study instead saw changes to mitochondria within the subjects. The levels of mtDNA present within the subject aboard the ISS were higher than in the pre-flight and post-flight sampling [77]. There is a positive correlation between time spent on the ISS and the concentration of mtDNA in the subject's blood sample. The presence of mtDNA within the blood is possibly linked to inflammation, a typical result of radiation exposure [78]. With a limited testing pool and a bias toward Caucasian middle-aged men, the results of this 2019 study only suggest effects of prolonged exposure to ionizing radiation. Additional research is needed to establish the causes of these physiological changes in post-flight samples.
As previously stated, there are differences in the microenvironments surrounding mtDNA and nuclear DNA, and in their composition and damage repair. Nuclear DNA is linked with histones and chromatin-associated proteins that are involved in scavenging free radicals [56,79]. Although mitochondrial repair proteins are imported from the nucleus, mtDNA strands lack these scavengers. Furthermore, mtDNA has a higher density of coding sequences related to ATP production, which, if altered, affects overall cellular function. The danger arises if a damaged genome results in impaired oxidative phosphorylation and defective ATP production [80]. With a decrease in ATP production comes an increase in ROS production, which can trigger and accelerate the progression of different mammalian diseases [81].

This damage to mitochondria and mtDNA can be more extensive than that seen in nuclear DNA [79]. Once oxidative stress damages mtDNA, the damage lingers much longer than nuclear DNA damage and is more destructive. As a result of these differences in the oxidative stress response, BER is the primary repair process available for mtDNA [56]. Though this is a repair mechanism for SSBs, it can still remove and repair deaminated and oxidized DNA bases [82]. It excises smaller DNA lesions caused by stressors, but most lesions induced by ionizing radiation are larger double-strand lesions that are irreparable or very complicated to repair. Thus, the maintenance of mtDNA is vital because of the risks involved in untended mutations.

Furthermore, it has been shown that when the cytoplasm of a more complex mammalian cell is altered or damaged, it can cause changes in mitochondrial function [83]. The component primarily involved in the process of mitochondrial fission is dynamin-related protein 1 (DRP1). This protein activates the autophagy process of the cell, which is also oxyradical dependent [83]. This action, in which the dysfunctional mitochondria are isolated and degraded by the autophagy process, is thought to protect surrounding structures from the subsequent effects of irradiated cytoplasm. Additional research is needed to confirm the roles each subcellular organelle and gene expression play in radiation effects, since even cell cytoplasm alterations can damage the organism [83].

Epigenetics

Besides direct DNA damage, it is increasingly recognized that radiation exposure can also affect DNA and histone modifications, i.e., methylation. These modifications, generally known as epigenetic, are the key regulators of the expression of genetic information. DNA methylation is the most studied epigenetic modification of DNA, in which a methyl group is bonded to the fifth carbon of cytosine in a process facilitated by enzymes called DNA methyltransferases and methyl-binding proteins [84].

Evidence accumulated throughout the last few decades convincingly demonstrates the potential ionizing radiation has to affect DNA methylation patterns. In rodent models, whole-body exposure to either γ radiation or x-rays at doses of 1 Gy and above usually results in the loss of global DNA methylation in many organs and tissues within hours of irradiation [85-87]. This effect may persist, typically in target organs for radiation-induced carcinogenesis, i.e., in the hematopoietic system (hematopoietic stem and progenitor cells, thymus) and mammary gland [85,88]. Loss of global DNA methylation in other organs (i.e., muscle or lung) has been shown to be largely transitory [85,89].
It must be emphasized that the loss of global DNA methylation is generally accepted as a hallmark of cancer [90]. As persistent DNA hypomethylation after exposure to IR was observed mainly in target organs for radiation-induced carcinogenesis, this led to the hypothesis that IR, besides exerting its genotoxic potential, may also cause cancer via an epigenetic mode of action [85]. While this hypothesis has not been fully confirmed, several mechanisms closely associated with carcinogenesis provide strong support. For instance, it is generally accepted that loss of DNA methylation usually occurs from otherwise heavily methylated repetitive elements that cover up to two thirds of mammalian genomes [91]. DNA methylation serves as a key mechanism of transcriptional silencing for repetitive elements [92]. For instance, Long Interspersed Nucleotide Element 1 (LINE-1), the most abundant repetitive element in mammalian genomes, is a retrotransposon whose 5′-UTR sequence is heavily methylated to prevent its aberrant transcriptional activity [93]. As it covers ~17% and ~22% of the human and mouse genomes, respectively, loss of methyl groups from its promoter can result in its aberrant expression and retrotransposon activity. The latter is exhibited as a random introduction of its copy elsewhere in the genome. Such aberrant LINE-1 activity can lead not only to genome amplification but also to a significantly increased probability of mutations, as LINE-1's copy can be introduced within the open reading frame (ORF) of a gene, thus affecting its transcription [94,95].

Besides global DNA hypomethylation, gene-specific DNA hypermethylation can occur due to exposure to ionizing radiation. Such events, if located within gene promoters, are usually associated with transcriptional silencing, as the acquisition of methyl groups within the transcription start sites precludes the binding of transcription factors in the initiation of transcription. Similar to global DNA hypomethylation, hypermethylation-induced silencing of tumor suppressor genes is frequently observed in many cancers, including lung cancer of workers occupationally exposed to radiation [96,97].

Interestingly, exposure to high-LET radiation often shows differential patterns of DNA methylation alterations. For instance, several studies demonstrated loss of global DNA methylation in cell culture after exposure to low mean absorbed doses of protons or 56Fe ions [98,99]. However, the results of the in vivo studies appear contradictory, as we and others observed global DNA hypermethylation that stemmed from both repetitive elements and genes [100-102].

Another interesting outcome of high-LET radiation exposure is persistent changes in DNA methylation observed in organs that are considered targets for radiation-induced degenerative disease rather than carcinogenesis. For instance, persistently (i.e., 3-9 months after irradiation) altered DNA methylation was reported in the lungs and hearts of experimental mice after exposure to low mean absorbed doses of protons or heavy ions. These results were observed in several independently conducted experiments utilizing different sources and doses/rates of high-LET radiation [100,102-105].
Contrary to expectations, high-LET-induced DNA hypermethylation of repetitive elements often resulted in a paradoxical reactivation of LINE-1 elements [102,106]. It is plausible to hypothesize that the complex interplay between DNA methylation and histone modifications, where the latter may "overwrite" the silencing effects of the former, is responsible for this effect [107]. There is a shortage of knowledge regarding the effects that high-LET radiation exerts on histone modifications, and future research is warranted to explore this phenomenon.

The epigenetic effects of exposure to high-LET radiation are much more complex and less understood compared to the effects exerted by terrestrial ionizing radiation. Nevertheless, elucidating epigenetic reprogramming, its mechanisms, and its effects on gene expression offers multiple opportunities to better understand the long-term effects of such exposures. Another important implication of epigenetics in space biology is the potential to utilize the methylation status of selective LINE-1 elements as biomarkers for previous exposures. As evident from the discussion above, exposure to ionizing radiation (including high-LET radiation) leaves scars, not only as mutations and irreparable damage to DNA itself, but also as permanently present alterations of DNA methylation within repetitive sequences (i.e., within the promoter regions of LINE-1 elements). Importantly, these altered patterns of DNA methylation can be detected not only in experimental systems, but also in humans previously exposed to ionizing radiation [100,102,108].

Histone modifications are another epigenetic mechanism of control over the expression of genetic information. Histone proteins form the structural unit of the nucleosome and may undergo more than ten types of modification. Perhaps the most pertinent to radiation research is the rapid phosphorylation of Ser 139 on histone H2AX, an initial step in recognizing radiation-induced damage [109]. The methylation of lysine 9 on histone H3 (H3K9) and of lysine 20 on histone H4 (H4K20) are well-known histone marks associated with transcriptional silencing. They are also suppressed after exposure to terrestrial ionizing radiation, potentially allowing easier access of repair complexes to the sites of DNA damage [110]. Unfortunately, there is a dearth of knowledge regarding the effects of space radiation on histone modifications.

Discussion

Radiosensitivity depends on cell type, genetic composition, the microenvironment of the cell, and the radiation type and timing [36]. Radiation interacting with subcellular structures has the potential to alter the stress response and radiosensitivity of the cell and tissue [111,112]. It should be further noted that the radiosensitivity of cells differs between the individual cell type and the tissue as a whole: generally, the tissue or organ has a lower radiosensitivity than the individual cell [36,53]. Modeling the results of radiation on individual cells therefore produces results that are not representative of radiation effects on the whole-body scale. Consequently, the LQ model may be inadequate for describing the mechanistic properties of radiation-induced biological and biochemical effects at the tissue or organism level. Individual proliferating cells may follow the LQ model, but aggregate cell populations appear more radioresistant [36].
There are separate benefits to in vitro and in vivo studies. In vitro irradiations may elucidate the behavior of lesion formation along DNA strands, while in vivo or animal studies can better connect whole-organ effects with ionizing radiation exposure. A study published in 2007 that followed the occurrence of oxidative clustered DNA lesions and DSBs from Cesium-137 gamma rays and 56Fe ions (at approximately 1.046 GeV/nucleon) found that high-LET radiation was more likely to create DSBs than oxidative DNA lesions [59]. The same study also concluded that low-LET radiation induced higher yields of DSBs and oxidative lesions than high-LET particles. The samples were placed in a solution to mimic the cell's natural chemical microenvironment. Between the two radiation sources, 56Fe ions resulted in a longer delay before DSBs returned to background levels [59]. Oxidative clustered lesions in the DNA strands also had a longer repair delay than DSBs, averaging between four and five days. During the fourth through eighth days post-irradiation, DSBs within the samples increased, which was potentially related to apoptotic DNA fragments from misrepairs or unsuccessful repairs [59]. Other studies conducted within the field of radiobiology reached similar findings: DNA clustered lesions from high-LET interactions may have a delay in their repair, a misrepair, or a less complete repair of their DSBs [54]. In one of these studies, the irradiated cells were human monocytes, similar to the simplistic cells previously mentioned: composed of cytoplasm, a nucleus with DNA, and lysosomes [59]. This specific study did not compare its results to the irradiation of a more complex mammalian cell and acknowledged that some of the much smaller or shorter DNA fragments may not be detectable.

The level of oxygenation within the cellular microenvironment affects the effectiveness of the bombarding radiation, a concept closely studied in the context of hypoxia and tumor radiation response [2]. A more oxygenated environment will produce more free radicals, which can cause damage and alterations to the structure, nuclear DNA, and function of the cell [112]. Oxygenation may not play the same role for normal tissue in the space radiation environment. Where a tumor may have a hypoxic center that becomes oxygenated through targeted radiotherapy treatments, astronauts receive whole-body, continuous doses from charged particles [9]. Additionally, cell populations have naturally differing oxygenation levels. For example, lung epithelium has a more oxygen-rich microenvironment than cardiac myocytes [113,114]. Therefore, how these tissues react to ionizing radiation will also differ.
Animal models have been used alongside numerical and computational efforts, predominantly to enhance the study of nuclear DNA repair mechanisms. To better understand how BER affects the repair of miscoding nuclear DNA lesions, genetically modified β-null mouse cells, which are deficient at repairing methylation-induced DNA lesions, can be used to study monofunctional alkylating agents; these agents transfer single alkyl groups and often cause DNA coding errors and strand breakage [51]. Methyl methanesulfonate (MMS) is one such monofunctional alkylating agent, and MMS-induced damage, in combination with genetic deficiencies in the BER process, has been connected to disease phenotypes in these β-null cell types [51]. Furthermore, an additional study hypothesized that nuclear genetic protein mutations and a reduction in the BER enzymes present within the cell can cause a build-up of nuclear and mtDNA mutations and lead to neurodegeneration [57]. There have been additional studies investigating the hypothesized connection between BER misrepair and the occurrence of Alzheimer's disease [115].

In 2016, an experiment compared the presence of cardiovascular disease (CVD) in in-flight and non-flight astronauts to that in C57BL/6J mice irradiated with a simulated galactic cosmic ray spectrum [76]. It suggested a connection between radiation exposure beyond LEO and the development of CVD, such as occlusive arterial disease (e.g., myocardial infarction, stroke). A conclusive link could not be determined because of limitations within the study, including a small sample size, an unknown source responsible for the results, and the differences in dose rate between the irradiated mice and the nominal space radiation environment [76]. Few studies have followed the development of nonmalignant disease in astronauts. In 2019, another study followed the physiological differences between two male monozygotic twins, one of whom incurred a prolonged stay of 340 days aboard the ISS. The comparison between the two subjects suggested that longer-duration missions could result in changes to cardiovascular physiology and, as a consequence, affect oxygen distribution within the body, which may alter the resulting biological effects [77]. From the study's DDR data, chromosomal aberrations were assessed as potentially pointing to telomere-related instability [54,77]. Telomeres are the subcellular structures responsible for maintaining genomic integrity and play a role in preventing DNA degradation and erroneous DDR [77].

Our understanding of how charged particles interact with cells has made significant advancements and has more recently been applied in space exploration research [116]. Given the low dose, low dose rate, and complexity of the space environment, models with a strong biological mechanistic focus may be best suited for space radiation research, but the models in use still center on nuclear DNA damage and repair, the effects of misrepair aberrations, and cell death, all topics more suited to radiotherapy treatments. The simplicity of the LQ model makes its use attractively straightforward, and it implicitly takes into account the biological and chemical mechanisms occurring during ionizing radiation interactions. However, its simplicity limits its ability to explain or model the underlying mechanisms.
Looking to the more recently developed numerical models that aim to incorporate biochemical and biophysical aspects, there are still limitations with each of these emerging methods. The TLK model can better match experimental data, but its limitation lies in its focus on cell survival and nuclear DSBs. It is based on experimentally irradiated Chinese hamster ovary (CHO) cell data, and while this model was in good agreement with single-dose-administered survival (as opposed to the continuous dose present in space) and DSB rejoining data for the CHO cells, there were inconsistencies for more variable dosages and radiation types when compared to other experimental data [42]. The MCDS model is able to take differing environments and radiation types into account, but the system is hypothesized to overestimate the number of DSBs and does not consider the bystander effect [45]. Even the LEM and MKM models that incorporate dosimetric concepts fall short compared to experimental results and do not work for all irradiated cell types [36]. Proponents of these early models stated that more survival curve analyses are necessary to prove that nuclear DNA is the primary target of radiation.

To achieve better overlap between experimental results and model predictions, both need to explore the impact that other subcellular components such as mtDNA have on cellular function and viability. There is also a need to better understand how the differing repair mechanisms between the two types of DNA affect potential mutations in irradiated samples and individuals. The animal studies mentioned within this review looking into the efficacy of BER repair need additional exploration prior to inclusion in any clinical considerations involving DNA repair from spaceflight radiation exposures, and the direct role BER plays in disease prevention needs to be better defined [56]. Moreover, the LQ model and its evolutions, and much of the space radiation and radiotherapy foci, have been primarily developed to explain cell survival and circumvent modeling of nonmalignant disease outside of DNA strand breaks and misrepair. As a result, there is still uncertainty as to what role subcellular dysfunction plays in whole-body effects.

Conclusions

Understanding how cellular components are affected by changes to their microenvironment, and their role in tissue and organ dysfunction following irradiation, can advance state-of-the-art space radiation protection and heavy ion radiotherapy. Recent advances in computational physics and biological sciences have contributed to the collective effort to better understand irradiation effects on cells, but each numerical and computational model has limitations. Furthermore, they all focus on nuclear DNA damage and repair without much regard for other subcellular structures. Applying these models to other subcellular damage and effects has the potential to produce a predictive model for deterministic effects and subsequently accelerate and support heavy ion radiotherapy efforts.
Mechanistic mathematical modeling of radiation-induced nonmalignant diseases can help provide insight into interpreting relevant experimental results and possible quantitative predictions related to heavy ion treatment results. Current radiobiological models describing the irradiation of mammalian cells focus on cell survival, and few predictive models for radiation effects incorporate non-nuclear DNA damage and repair. The more advanced models, such as the MCDS and TLK models, can better explain stochastic effects (e.g., cancer occurrence) but offer little support for modeling deterministic diseases following ionizing radiation exposure. Radiobiological models, where actively used, are appropriate for the radiotherapy setting, where disease or tumor eradication is the focus, and there is less of a practice of using these models to predict whole-body outcomes [32]. There is a lack of experimental data following prolonged whole-body radiation exposure, and no proper model that can describe the probabilistic behavior of radiation effects. More computational research and experimental data would need to be procured to better compare the damage and repair associated with ionizing radiation in nuclear versus mtDNA.

Studies of radiotherapy patients, occupationally exposed individuals, and atomic bomb survivors have shown a trend between high doses of low-LET radiation and the occurrence of degenerative tissue effects [117]. These nonmalignant disease occurrences following HZE nuclei exposures, like those experienced in spaceflight, are less understood. This is partly because of the type of radiation found in space and the characteristics specific to the spaceflight environment (e.g., microgravity, an artificial and confined environment, additional stressors, etc.) [117]. The prolonged high-LET radiation exposure that an astronaut may experience requires further study. Because of the long latency period of the effects, non-nuclear structures may play a more significant role in irradiation outcomes.

Future models should consider the occurrence of nonmalignant or noncancerous disease following prolonged exposure to the GCR spectrum. Most research following the conclusion that nuclear DNA is the primary target of ionizing radiation has overlooked the role of other damaged subcellular structures. Further investigation into radiation-induced damage and the response of cellular organelles other than nuclear DNA was conducted only decades after the genesis of the LQ model. That investigation found that each organelle within its scope shows sensitivity to radiation, with subsequent effects [112]. Therefore, it is unreasonable to omit the changes to their structures, intercellular spacing, and function from radiation-induced damage and to consider only nuclear DNA breaks and aberrations. The foundational numerical models are built on the hypotheses of Crowther and Lea, whose oversight in assessing the complexity of the mammalian cell should be re-evaluated. Nuclear DNA has been set as the primary target of interest, and the focus has been on how damage to this subcellular structure, and its ability to repair, affects cellular proliferation.
Furthermore, different aspects of the space environment, such as microgravity and spaceflight time, may affect the cell's ability to repair damage and the severity of the damage, respectively [118]. Previously conducted research found that seven genes, likely related to neuronal and endocrine signaling, which affect longevity-regulating transcription factors and dietary-restriction signaling, were suppressed during spaceflight [83]. In vitro studies of cellular response to the space radiation environment found that the repair pathways of prokaryotic cells, like bacteria, and simplistic eukaryotic cells, like yeast, are not affected by microgravity. However, more complex eukaryotic cells, like those studied from the Shenzhou-8 space expedition, suggested an enhanced DDR under microgravity [15]. This study did not find a significant relationship between spaceflight duration and DDR. Each of these repair mechanisms contributes to the resulting cell survival curves seen in radiobiological models. More radiobiological data supported by animal testing, and additional insight into the long-term effects of space radiation exposure, could improve the current radiobiological models used within the clinic. This improvement would reflect the advancements made within the field and have cascading benefits for multiple disciplines concerned with radiation effects.

Figure 2. Relationship between relative biological effectiveness (RBE), clonogenic cell death, and LET for mammalian cells with carbon, neon, and iron [2,23,25]. Here, it can be seen that around 100 keV/micron along the LET axis there is a peak, after which the RBE not only fails to increase but declines. Figure adapted from Sørenson [25].

Figure 3. Comparison of the STSH and MTSH models [12]. D0 describes the slope of the curve's linear portion, and Dq gives the approximate dose range, or width, of the curve's shoulder. A linear slope on these semi-logarithmic graphs describes an exponential relationship. The STSH model, shown on the left, was expected to be seen, but a shoulder would appear in the data instead, as depicted on the right. Graphs adapted from Joiner and Kogel [12].

Figure 5. The linearity of the curve on the logarithmic-linear scale represents an exponential relationship between the dose and the surviving fraction [12]. Densely ionizing radiation, or high-LET particles such as α particles and neutrons, is shown as the right-hand curve in red and is more likely to result in a linear curve. Sparsely ionizing radiation, or low-LET particles such as x-rays, will produce more of a shoulder to the curve, as described by Dq [2,12]. Figure adapted from Hall and Giaccia [2].
Figure 7. Comparison of different types of cells and their response to ionizing radiation. Shown is a comparison of (A) a mammalian cell line radiation response curve with that of (B) E. coli, (C) E. coli B/r (a mutation of E. coli), (D) yeast, (E) phage staph E, (F) Bacillus megaterium, (G) potato virus, and (H) M. radiodurans (one of the most radioresistant known organisms) [2,49]. Figure adapted from Hall and Giaccia [2].
Effect of Wetting Conditions on the In Situ Density of Soil Using the Sand-Cone Method

The sand-cone method is commonly used to measure the in situ density of compacted soils. While determining field density with this method, differences in the sand-filling process between the test hole and the calibration container can cause errors. The differences can result from various in situ conditions, such as the shape and size of the test hole and the moisture conditions of the filling sand and test ground. Temporary rainfall can increase the moisture content of both in situ soils and filling sand. This study examined the effect of wetting conditions on the accuracy of the sand-cone method in a laboratory. Compacted soils with different water contents (2-16%) were prepared in a small circular container in the laboratory, and the sand-filling process was simulated for cylindrical, conical, and roof-shaped test holes with depths of 10 and 15 cm. As the water content of the compacted soils increased, the sand-cone method underestimated the volume of sand accumulated in the test holes by up to 20%, resulting in the calculated density being overestimated by an identical amount. Slightly moist sand was also poured into artificial test holes. When the water content of the filling sand was below 1%, no significant error was observed in the calculated volume.

Introduction

Various field-density testing methods have been developed to determine the in situ density of compacted soils. They include the sand-cone, rubber-balloon, and nuclear methods. These methods are of the utmost importance to ensure that the specified dry density of soil is achieved during compaction. The method is usually chosen based on several factors that are considered important during construction, including the availability of specialized equipment and material, logistics, the complexity of the method, and the time required to perform the test [1]. The sand-cone method is the most commonly used, since it is relatively simple and inexpensive compared to other methods. Park [2] showed that errors in the sand-cone method could be minimized by preparing a test hole with a flat base and a depth of at least 20 cm, as well as by using rounded sand with a coefficient of uniformity (Cu) of less than 1.5 and a mean grain size (D50) of approximately 0.5 mm. He used artificial test holes of various shapes, representing shapes that could possibly form in in situ test holes. During field compaction, temporary rainfall or an increase in the groundwater level can increase the moisture content of the compacted ground. However, the wetness of the field ground has not been considered in previous studies. Furthermore, though dry sand must be used to fill the test holes, the stored dry sand can be wetted by rain or atmospheric humidity, even if it has been dried before, resulting in increased moisture content. Therefore, this study examined the effects of the wetting conditions of the ground and of moist filling sand on the dry density, as determined using the sand-cone method in a laboratory. A small model of the ground containing in situ soil with different water contents was used for simulating moist ground conditions in the laboratory. Different proportions of kaolinite were added to the in situ soil to simulate various soils. In addition, an artificial, cylindrical test hole with a known volume was used to determine the effect of moist sand filling.
Considerations in Sand-Cone Method

The calculations involved in the sand-cone method are described in ASTM D 1556-07 [3], BS 1377-9 [4], and KS F 2311 [5]. The volume of the excavated hole, V, can be determined using the following expression:

V = M1 / ρd(sand)

where M1 is the mass of dry sand used for filling the hole and ρd(sand) is the dry density of the sand used for the calibration. A calibration container has a known volume and a dry and smooth inner surface. The extent to which the sand is packed, or the test hole is filled, can be influenced by the wetness of the test hole and filling sand, decreasing the reliability of the determined in situ density. Depending on the wetness of the inner surface of the hole, sand grains may stick to the surface and to each other, resulting in the formation of voids and the underestimation of the hole volume, as shown in Figures 1 and 2.

Figure 1 presents a schematic of the sand-filling process in test holes with dry and wet surfaces, and Figure 2 shows photographs of real sand-filled dry- and wet-compacted grounds and hole surfaces after the removal of the sand.
The soils had water contents of 2% and 16%, respectively, and a dry unit weight (density) of 16.59 kN/m³ (1.69 g/cm³). Detailed soil properties are presented in the test-procedure section, and the filling sand was painted red to show the color difference from the ground. Thus, the difference in sand-filling processes between dry and wet test-hole surfaces was examined. In addition, the use of moist sand to fill a test hole may also cause sand grains to stick to the hole surface and to each other. Therefore, these two factors (moist compacted ground and moist filling sand) were investigated in this study.

Test Sand and Calibration

The sand used to fill the test hole has to adhere to the standards. ASTM D 1556-07 [3] recommends the use of sand with a Cu of less than 2.0, a maximum particle size smaller than 2 mm, and less than 3% by weight passing a 0.25-mm sieve. KS F 2311 [5] specifies the use of sand that can pass through a 2-mm sieve but is retained on a 0.075-mm sieve. Jumunjin sand was used in this study, and the results of the sieve test and its particle-size distribution curve are shown in Table 1 and Figure 3, respectively. The physical properties of the sand are shown in Table 2. The specific gravity (Gs) was 2.65, and the diameters through which 10%, 30%, and 60% of the particles passed (D10, D30, and D60) were 0.31 mm, 0.45 mm, and 0.62 mm, respectively. The Cu was 1.98, and the coefficient of curvature (Cc) was 1.05. Based on these results, the sand was classified as poorly graded (SP) according to the Unified Soil Classification System (USCS). Therefore, Jumunjin sand satisfied the specifications of both ASTM and KS.

Before the sand-cone test, the dry density of the test sand had to be determined. This determination process is known as calibration. The dry density of the test sand was calculated in accordance with ASTM D 1556-07 [3] by using a compaction mold with a diameter of 15 cm, a height of 17.5 cm, and a volume of 3093 cm³. The dry unit weight (density) of the Jumunjin sand used in this study was calculated to be 13.40 kN/m³ (1.37 g/cm³).
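As a minimal numerical sketch of the arithmetic above, the snippet below recomputes the gradation coefficients from the reported D10/D30/D60 values and then applies the calibration and hole-volume relations. The mold volume and grain diameters come from the text; the masses and water content are hypothetical placeholders, not measurements from this study.

```python
def coefficient_of_uniformity(d10, d60):
    """Cu = D60 / D10."""
    return d60 / d10

def coefficient_of_curvature(d10, d30, d60):
    """Cc = D30^2 / (D10 * D60)."""
    return d30**2 / (d10 * d60)

# Gradation check for the Jumunjin sand (diameters in mm, from the text).
d10, d30, d60 = 0.31, 0.45, 0.62
print(f"Cu = {coefficient_of_uniformity(d10, d60):.2f}")       # ~2.0
print(f"Cc = {coefficient_of_curvature(d10, d30, d60):.2f}")   # ~1.05

# Calibration: dry density of the test sand from a mold of known volume.
mold_volume_cm3 = 3093.0        # mold volume from the text
sand_mass_g = 4237.0            # hypothetical mass of sand filling the mold
rho_d_sand = sand_mass_g / mold_volume_cm3   # ~1.37 g/cm^3, matching the text

# Test-hole volume and in situ dry density, using V = M1 / rho_d(sand).
m1_g = 2740.0                   # hypothetical mass of sand filling the hole
wet_soil_mass_g = 3650.0        # hypothetical mass of excavated moist soil
water_content = 0.12            # hypothetical in situ water content (12%)

hole_volume_cm3 = m1_g / rho_d_sand
dry_density = (wet_soil_mass_g / (1.0 + water_content)) / hole_volume_cm3
print(f"Hole volume = {hole_volume_cm3:.0f} cm^3")
print(f"Dry density = {dry_density:.2f} g/cm^3")
```

Note how any sand that fails to fill voids in a wet hole reduces M1, shrinks the calculated volume V, and thereby inflates the computed dry density, which is the error mechanism examined in the experiments that follow.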
Artificial Ground and Test Procedure

To examine the effect of a moist ground on the sand-cone method, artificial test holes with three shapes (cylindrical, conical, and roof-shaped) and with depths of 10 and 15 cm were used. The detailed sizes and shapes of the holes are shown in Figure 4. The soil was obtained from a construction site located at Youngchun, Kyungbuk Province, Korea, and compacted into a test vessel with a diameter of 41.5 cm and a height of 17 cm. The soil properties are shown in Table 3. The soil was thoroughly mixed with a specific amount of water (2, 7, 12, and 16% of the weight of the soil). The wetness of a compacted ground was simulated based on the field conditions of road and railway embankments in Korea; wetness levels of 2, 7, 12, and 16% were defined as slightly wet, wet, moderately wet, and completely wet, respectively. Subsequently, the moist soil was compacted in a vessel whose weight and volume were known. As shown in Figure 5a,b, a plate with the dimensions 14.5 cm × 14.5 cm × 2 cm was placed on the soil, and three-layer compaction was performed using a 2.5-kg rammer, with a drop height of 30 cm and 15 blows per layer. The small rammer was used to compact the areas which were not in contact with the plate, especially the edges. After compaction, the entire test vessel was weighed to determine the bulk density of the compacted soil.

The compacted soil was dug to form a hole of the same size as the artificial hole. The soil was dug until it fitted the hole frame, which was checked by putting the frame into the dug hole. Subsequently, the sand-cone method was used (Figure 5c), and the test hole was filled with the sand (Figure 5d). Meanwhile, kaolinite was mixed with the soil to investigate whether voids due to sand sticking appeared regardless of the soil type. The mixture was compacted to make an artificial ground of the same shape as in Figure 2. The kaolinite used in this study was the same kaolinite used by Park and Nong [6], and the liquid limit (LL) and plastic limit (PL) of the kaolinite were 60% and 40%, respectively.
It was mixed at ratios of 0.2, 0.4, 0.6, 0.8, and 1.0 to the soil, and the moisture contents of the compacted grounds used were 16% at ratios of 0.2 and 0.4 and 20% at 0.6, 0.8, and 1.0, respectively.

The effect of moist sand filling was investigated by adding moisture to the test sand. The moisture content was gradually increased to 0.8% of the weight of dry sand. A 10-cm-deep cylindrical test hole made of a galvanized iron sheet was used, and its volume was determined using the water-filling method [2]. The sand-cone method was used to determine the mass of the moist sand required to fill the test hole, and the volume and density of the sand were then calculated.

Table 4 summarizes the results of the sand-cone method for four different moist ground conditions. It compares the effect of a moist ground for test holes with different shapes and depths. The exact dry unit weight (γd(exact)) was calculated from the weight and volume of the vessel, which included the compacted soil before digging the artificial hole. The experiment dry unit weight (γd(experiment)) is the result of the sand-cone method. The dry unit weight determined from the sand-cone method (γd(experiment)) increased as the water content of the compacted soil increased from 2% to 16%, regardless of the shape and depth of the test hole, as shown in Figure 6. In general, the unit weight of soil increases until the moisture content reaches the optimum moisture content (OMC). The increase in the unit weight of soil shown in Figure 6 also illustrates this trend.

The variation in the calculated dry unit weight and its error, depending on the water content of the compacted ground, were compared for the different types of holes (Figure 7). As the base shape of the test hole was changed from flat to sharp, and as the moisture content of the compacted soil increased, the error became increasingly severe. As the hole depth increased from 10 to 15 cm, the error generally decreased, regardless of the test-hole shape and the water content of the ground. In particular, the result for the cylindrical test hole with a flat base and a depth of 15 cm under slightly wet conditions (2%) had the lowest error, 1.04%. By contrast, the value for the conical test hole with a sharp base and a depth of 10 cm under fully wet conditions (16%) showed the highest error, i.e., 22.54%.
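The error percentages quoted here appear consistent with a simple relative difference between the sand-cone result and the exact value; assuming that definition,

error (%) = [(γd(experiment) − γd(exact)) / γd(exact)] × 100

so an overestimated dry unit weight gives a positive error, which grows as the calculated hole volume shrinks.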
When the density degree of soil was expressed as loose and dense in previous studies [7-9], the difference in dry unit weight (density) between loose and dense appeared as 2-3 kN/m³ (0.20-0.31 g/cm³). In this study, the result for the conical test hole, which showed the highest dry unit weight (density) error, gave a difference of 2.31 kN/m³ (0.24 g/cm³), which can cause a very large defect when judging the compaction condition of the soil. On the other hand, the results for the cylindrical test hole with a depth of 15 cm showed a difference of less than 0.491 kN/m³ (0.05 g/cm³).

Figure 8 shows several wet, compacted grounds filled with dry colored sand.
Similar behavior was observed regardless of soil type, as shown in Figure 8a–d. Therefore, the field density could be incorrectly evaluated due to ground wetness. When investigating the dry unit weight with the sand-cone method in the field, the use of a deep, cylindrical hole can minimize the error due to ground wetting. Additionally, the water content was measured by drying in a microwave oven for 15 min; the measured value was similar to the value obtained after drying in a conventional oven for over 24 h. The measurement of soil moisture content using a microwave oven was based on ASTM D 4643-08 [10], which has been reported to yield reliable values. For example, Park et al. [11] tested various soils (sand, kaolinite, and organic soils) and obtained similar results. The microwave oven is also portable and readily available [12]. Effect of Moist Sand The presence of high humidity at some sites is unfavorable and can render sand calibration unreliable in density tests involving the sand-cone method [13]. The calculated dry unit weight of moist test sand is presented in Table 5. For the cylindrical artificial test hole, the calculated volume of the test hole decreased as the water content of the sand increased from 0% to 0.8% (Figure 9). With an increase in the sand moisture, a smaller mass of sand filled the test hole because of the bulking of sand, and, therefore, the test-hole volume was underestimated. Moisture creates a thin film around sand particles and causes them to move away from each other, increasing the sand volume; this phenomenon is known as bulking of sand. The percentage of bulking is inversely proportional to the particle size [14]. In addition, the test sand clumped together at a moisture content of 1% (and above), thereby rendering the sand calibration difficult: at 1% moisture content, the clumped sand blocked the valve, and the sand did not reach the hole.
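The volume underestimation caused by bulking can be made concrete with a short sketch. The calibration density, true hole volume, soil mass, and bulking fractions below are illustrative assumptions, not values measured in the study.

```python
# Sketch of how bulking of moist calibration sand biases the sand-cone result.
# The calibration density rho_cal was determined with dry sand; if the sand is
# moist, a smaller mass fills the same hole, so the computed volume
# (mass / rho_cal) underestimates the true volume and the density is inflated.
# All numbers are illustrative assumptions.

rho_cal = 1.45        # g/cm^3, dry-sand calibration density
true_volume = 565.0   # cm^3, actual hole volume
m_wet_soil = 980.0    # g of excavated soil at 16% water content

for bulking in (0.0, 0.02, 0.05, 0.08):            # fractional loss of packing density
    rho_effective = rho_cal * (1.0 - bulking)      # moist sand packs less densely
    m_sand_in_hole = rho_effective * true_volume   # mass that actually fits
    v_computed = m_sand_in_hole / rho_cal          # volume inferred from dry calibration
    rho_dry = (m_wet_soil / v_computed) / 1.16     # dry density for w = 16%, g/cm^3
    err = 100.0 * (true_volume - v_computed) / true_volume
    print(f"bulking {bulking:4.0%}: computed V = {v_computed:6.1f} cm^3 "
          f"({err:4.1f}% low), dry density = {rho_dry:.3f} g/cm^3")
```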
Conclusions In this study, the effect of wetting conditions on the determination of the density of compacted soil with the sand-cone method was investigated for a moist ground and moist filling sand. The unit weight of soil was determined in the laboratory by using a small model of the ground. The ground model contained field soil classified as CL and was compacted in a cylindrical container (with a diameter of 41.5 cm and a height of 17 cm). The water content of the ground surface varied between 2% and 16%. Furthermore, artificial test holes with cylindrical, conical, and roof shapes, and with depths of 10 and 15 cm, were considered. For these conditions, the test-hole volume was determined by the sand-cone method and the dry unit weight was calculated. The results obtained can be summarized as follows: 1. As the water content of the compacted ground increased from 2% to 16%, the calculated volume of the test hole decreased and the dry unit weight was overestimated by up to 20%. The accumulation of moisture increased the number of voids in the test hole, and, therefore, the calculated volume was lower than the actual volume. 2. For the cylindrical hole, the dry unit weight obtained was similar to the actual dry unit weight. When the sand-cone method is conducted after heavy rainfall at a site, appropriate precautions should be taken since the presence of moisture can lead to the relative compaction of the ground being overestimated. The error in the calculated density can be minimized by considering the wetting condition of the ground.
Members of the GADD45 Protein Family Show Distinct Propensities to Form Toxic Amyloid-Like Aggregates in Physiological Conditions The three members (GADD45α, GADD45β, and GADD45γ) of the growth arrest and DNA damage-inducible 45 (GADD45) protein family are involved in a myriad of diversified cellular functions. With the aim of unravelling analogies and differences, we performed comparative biochemical and biophysical analyses on the three proteins. The characterization and quantification of their binding to the MKK7 kinase, a validated functional partner of GADD45β, indicate that GADD45α and GADD45γ are strong interactors of the kinase. Despite their remarkable sequence similarity, the three proteins present rather distinct biophysical properties. Indeed, while GADD45β and GADD45γ are marginally stable at physiological temperatures, GADD45α presents the Tm value expected for a protein isolated from a mesophilic organism. Surprisingly, GADD45α and GADD45β, when heated, form high-molecular-weight species that exhibit features (ThT binding and intrinsic label-free UV/visible fluorescence) typical of amyloid-like aggregates. Cell viability studies demonstrate that they are endowed with a remarkable toxicity against SH-SY5Y and HepG2 cells. The very uncommon property of GADD45β to form cytotoxic species in near-physiological conditions represents a puzzling finding with potential functional implications. Finally, the low stability and/or the propensity to form toxic species of GADD45 proteins constitute important features that should be considered in interpreting their many functions. Introduction The growth arrest and DNA damage-inducible 45 (GADD45) gene family encodes three strictly related proteins, denoted as GADD45α, GADD45β, and GADD45γ [1–5]. Although the three members of the GADD45 protein family share quite similar sequences, with identities falling in the range 55–57%, they play distinct, albeit generally crucial, roles in cell life. Indeed, literature surveys indicate that GADD45 proteins are implicated in a countless number of physio-pathological processes [6]. These include DNA repair, cell cycle control, senescence and genotoxic stress, and tumorigenesis. GADD45 proteins accomplish these diversified functions by interacting with a multitude of biological partners [4]. The most important functions of GADD45α, the best-characterized member of the family, are related to the induction of growth arrest and DNA repair, indicating a crucial role in maintaining genomic stability, in the DNA damage response, and in cancer transformation. Although direct data on the biochemical mechanisms underlying GADD45α functions are still scarce, this protein mostly operates by promoting protein-protein interactions or by sequestering specific partners. These include p53, cyclin-dependent kinase 1 (CDK1), and cyclin B1. It is also important to note that the basal expression of GADD45α is very low, and that it is regulated by a multitude of external factors that include both physical and biochemical signals, such as ultraviolet irradiation, X-rays, γ-irradiation, and DNA toxins. Although GADD45β and GADD45γ have not been extensively characterized yet, several reports have highlighted their important functional roles. Recent studies have shown that GADD45β plays an important role in multiple myeloma onset and maintenance.
Extensive cellular and molecular investigations have demonstrated that this protein suppresses the pro-apoptotic JNK signaling through a direct inhibition of the upstream kinase mitogen-activated protein kinase kinase 7 (MKK7) in this type of cancer [7]. Since the GADD45β/MKK7 complex is critical for NF-κB-driven survival, it is a promising therapeutic target in multiple myeloma [8,9]. Moreover, GADD45β was recently associated with colorectal carcinoma and could be used as a prognostic and predictive biomarker of the disease [10]. GADD45γ, which is the most evolutionarily conserved protein in the family, is clearly defined as a pro-apoptotic and cell cycle arrest-inducing protein. Very recently, it was shown that this protein is selectively silenced in acute myeloid leukemia, thus providing insight into the design of novel therapeutic strategies to combat this disease [11]. The involvement of the members of the GADD45 family in diversified biological processes, despite their overall sequence similarity, prompted us to undertake further biochemical and biophysical studies in order to highlight analogies and differences. In this framework, we assessed the ability of all GADD45 members to specifically bind the MKK7 kinase. Moreover, the thermal stability and the propensity of GADD45α, GADD45β, and GADD45γ to form amyloid-like cytotoxic aggregates have been unraveled. Production of the Recombinant Proteins and Initial Biophysical Characterizations Recombinant GADD45α, GADD45β, and GADD45γ were obtained in amounts sufficient for performing all subsequent characterizations. The proteins were initially purified by affinity chromatography exploiting the presence of the histidine tag located at the N-terminus. SEC was used to remove aggregates or residual contaminants. All proteins had purities higher than 95%, as estimated by SDS-PAGE analysis (Figure S1). The molecular weights of the three recombinant proteins were in close agreement with the calculated values (Table S1). In order to assess their folding state, the three proteins were analyzed by far-UV CD spectroscopy. The resulting CD spectra of GADD45α, GADD45β, and GADD45γ clearly indicate that all of them were well folded (Figure 1a). Indeed, the occurrence in the spectra of a maximum (at about 198 nm) and of two minima (at ~208 nm and ~222 nm) suggests that the three isoforms contain both α- and β-structure elements [12–14], as expected on the basis of the reported three-dimensional structures of GADD45α [15] and GADD45γ [16,17], and of the sequence similarities among the different isoforms (Figure S2). The oligomeric state of GADD45α, GADD45β, and GADD45γ was evaluated by static light scattering analyses. As shown in Figure 1b, under the experimental conditions (5 mM DTT in 20 mM Tris-HCl buffer, pH 7.5), GADD45α and GADD45β are monomeric, while GADD45γ presents a dimeric organization. Binding of GADD45 Proteins to MKK7 Once assessed that recombinant GADD45α, GADD45β, and GADD45γ were properly folded, we comparatively evaluated their ability to bind the kinase domain of MKK7 (MKK7_KD), so far described only for GADD45β [7,9,18]. Binding of the three isoforms to MKK7_KD was investigated by Bio-Layer Interferometry. The approach was initially validated by quantifying the GADD45β–MKK7_KD affinity previously determined by surface plasmon resonance (SPR). We found that GADD45β bound MKK7_KD with a KD of 2.0 × 10⁻⁹ M, a value in line with the one previously determined by SPR (6.0 × 10⁻⁹ M) [9] (Figure 2b). Interestingly, we also found that the other two GADD45 isoforms bound the kinase domain in a dose-dependent manner (Figure 2a,c). In particular, the KD values of the GADD45α/MKK7_KD and GADD45γ/MKK7_KD complexes were 2.3 × 10⁻⁸ M and 1.5 × 10⁻⁸ M, respectively. These values indicate a significantly reduced affinity of these isoforms for MKK7_KD compared to GADD45β. These findings indicate that all GADD45 isoforms share the ability to bind the MKK7 kinase domain, although GADD45α and GADD45γ present a 10-fold reduced affinity compared to GADD45β. (Figure 2 legend: BLI binding curves for GADD45α (a), GADD45β (b), and GADD45γ (c); lines represent different concentrations, colour-coded under the graph, of the MKK7_KD protein.)
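The reported KD values come from fitting the BLI sensorgrams with a 1:1 binding model in the Octet software. As a rough illustration of the underlying idea, the sketch below fits a steady-state 1:1 (Langmuir) model, R_eq = Rmax·C/(KD + C), to synthetic dose-response data; the concentrations, responses, and parameter values are assumptions for demonstration only, not the study's data.

```python
# Illustrative sketch: estimating K_D from dose-response binding data with a
# steady-state 1:1 (Langmuir) model. Synthetic data with K_D = 2 nM.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(conc, rmax, kd):
    # Equilibrium response for a 1:1 binding isotherm
    return rmax * conc / (kd + conc)

conc_nM = np.array([9.0, 19.0, 38.0, 75.0, 150.0, 300.0])   # MKK7_KD doses (nM)
resp = langmuir(conc_nM, rmax=1.0, kd=2.0) \
       + np.random.default_rng(0).normal(0.0, 0.01, conc_nM.size)

(rmax_hat, kd_hat), _ = curve_fit(langmuir, conc_nM, resp, p0=(1.0, 10.0))
print(f"fitted Rmax = {rmax_hat:.2f}, K_D = {kd_hat:.2f} nM")
```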
Thermal Stability of GADD45 Proteins The analysis of the stability against temperature was performed by following the CD signal at 222 nm in the temperature range 20–100 °C. In all cases, structural transitions were observed at temperatures much lower than 100 °C. Moreover, the strong CD signals detected after the transition indicate that all proteins remained soluble over the explored temperature interval. Despite these analogies, the thermal denaturation experiments highlighted significant differences among GADD45 proteins (Figure 3a). Indeed, the monitoring of the CD signal indicated that GADD45α is a rather stable protein with a melting temperature Tm of 53 °C. On the other hand, GADD45β and GADD45γ are barely stable at temperatures close to physiological ones, as they both present Tm values of around 40 °C. A deeper inspection of the melting curves indicated that the CD signal of GADD45β and GADD45γ presented significant variations from the starting value even at rather low temperatures (~30 °C) (Figure 3b). This is particularly evident for GADD45β, whose unfolding is essentially non-cooperative. These observations indicate that the folded state of GADD45β is somewhat heat-labile and prone to undergo structural transitions. Interestingly, the analysis of the structural properties of the GADD45α, GADD45β, and GADD45γ states obtained upon temperature treatment unravels peculiar and sequence-specific behaviors.
As expected, the CD spectra of the three proteins recorded at 100 °C (Figure 3b) are radically different from those collected at room temperature (Figure 1a). The CD spectrum of GADD45γ presents a minimum at 205 nm and a shoulder at 220 nm, which are indicative of a denatured protein with a non-negligible portion of residual secondary structure. When GADD45γ samples are cooled back to room temperature, the protein virtually recovers its native structure (Figure 3c). A rather different scenario emerges from the characterization of the thermally treated samples of GADD45α and GADD45β. For these isoforms, the secondary structure undergoes major variations upon heating (Figures 1a and 3b). In particular, the CD spectra collected at 100 °C present a unique minimum located at about 210 nm, indicative of β-rich structures, and their overall appearance is preserved when the samples are cooled to room temperature (Figure 3c).
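A melting temperature such as the Tm values quoted above is commonly extracted by fitting a two-state transition to the CD-at-222-nm melting curve. The sketch below fits a simple sigmoid (pre- and post-transition baselines omitted for brevity) to synthetic data mimicking a Tm of 53 °C; it is an illustration of the general approach, not the analysis actually performed in the study.

```python
# Sketch: extracting Tm from a CD thermal denaturation curve with a
# two-state sigmoid fit. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def two_state(T, theta_f, theta_u, Tm, width):
    # Fraction unfolded follows a logistic transition centred at Tm
    frac_unfolded = 1.0 / (1.0 + np.exp((Tm - T) / width))
    return theta_f + (theta_u - theta_f) * frac_unfolded

T = np.linspace(20.0, 100.0, 81)
signal = two_state(T, -12.0, -3.0, 53.0, 2.5) \
         + np.random.default_rng(1).normal(0.0, 0.1, T.size)

p0 = (signal.min(), signal.max(), 50.0, 3.0)   # rough starting values
(tf, tu, tm, w), _ = curve_fit(two_state, T, signal, p0=p0)
print(f"estimated Tm = {tm:.1f} degC")
```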
Characterization of the β-Rich States Formed by GADD45α/GADD45β A number of experiments were designed and performed to investigate the structural features of the GADD45α/GADD45β states generated by the thermal treatment. To gain insights into the structural features of the β-rich aggregates formed by GADD45α and GADD45β, we evaluated their ability to interact with ThT, a probe widely used to assess the formation of amyloid-like assemblies [19]. ThT emits a specific fluorescence at ~480 nm when excited at ~440 nm upon binding to these structures. As shown in Figure 4a, no ThT fluorescence was detected for the three isoforms in their native state. Interestingly, the thermally treated samples of GADD45α and GADD45β presented a strong ThT fluorescence signal with a maximum at 485 nm (Figure 4b). On the contrary, the thermally treated sample of GADD45γ, which does not form β-rich aggregates after denaturation, did not show ThT fluorescence emission. These findings suggested that, upon heating, GADD45α and GADD45β form amyloid-like assemblies. To corroborate this observation, we also evaluated the ability of these assemblies to emit the intrinsic blue fluorescence exhibited by amyloid-like assemblies. As shown in Figure 4c, none of the three native GADD45 isoforms presented intrinsic blue fluorescence emission when excited at 370 nm. In line with the ThT experiments, only the thermally treated samples of GADD45α and GADD45β presented fluorescence emission with a maximum at 422 nm and a shoulder at about 455 nm. Finally, dynamic light scattering (DLS) experiments clearly indicated that, upon heating, both GADD45α and GADD45β formed assemblies with large hydrodynamic diameters (Figure 5). Indeed, the assemblies formed by GADD45α and GADD45β exhibited diameters of 35 ± 3 and 86 ± 11 nm, respectively. Similar results were obtained following the size distribution either by volume (Figure 5) or by intensity (Figure S3).
Toxicity of GADD45α/GADD45β Amyloid-Like Aggregates Since amyloid-like aggregates are frequently cytotoxic, we monitored the effects exerted by exposing human neuroblastoma (SH-SY5Y) and hepatoma (HepG2) cells to GADD45α/GADD45β aggregates and to the three proteins in their native states. Possible dose-response effects were evaluated at different protein concentrations (1.25, 2.5, 5, and 10 µM), monitoring cell viability at 24 and 48 h. As shown in Figure S4, none of the proteins exerted significant toxic effects on either SH-SY5Y or HepG2 cells after 24 h. After 48 h, a clear distinction between aggregated and non-aggregated forms emerged. Indeed, a high level of mortality was induced by the thermally treated forms of GADD45α/GADD45β, whereas no significant variation of the cell viability was produced by their native forms. On the other hand, neither native nor thermally treated GADD45γ, which does not form amyloid-like aggregates, induced significant alterations of cell viability (Figure 6). Discussion In its traditional version, the function-structure paradigm states that proteins acquire their functionality upon folding into a well-defined three-dimensional state. However, fundamental discoveries made in the last decade have progressively reshaped this concept. Indeed, it has been demonstrated that proteins frequently assume, in their functional form, an ensemble of states rather than a single structure. Moreover, it was found that significant portions of proteins are intrinsically disordered and operate dynamically [19]. Finally, it has been shown that polypeptide chains have an intrinsic and unexpectedly strong propensity to self-assemble into misfolded states in which they assume the cross-β structure that is characteristic of amyloid aggregates [20,21]. In this scenario, here we performed a comparative biophysical characterization of the three members of the GADD45 protein family, which are involved in a myriad of diversified biological processes. We found that, despite their remarkable sequence similarity, GADD45 proteins present rather distinct biochemical and biophysical properties.
The analysis of the binding of these proteins to MKK7 indicates that, despite the general ability of all members of the family to recognize the kinase, the affinity exhibited by GADD45α and GADD45γ is around one order of magnitude lower compared to that shown by GADD45β. This finding is in line with the observation that GADD45β binds the kinase through its loop 2 (residues 103–117), the region showing the most significant sequence variability among GADD45 proteins (Figure S2) [9]. Nevertheless, the binding data suggest MKK7 as a potential biological partner also of GADD45α and GADD45γ. The findings that emerged from the biophysical characterization of the proteins are particularly surprising, as they demonstrate that GADD45β and GADD45γ are only marginally stable at physiological temperature, whereas GADD45α presents a Tm value expected for a protein isolated from a mesophilic organism. Even more intriguing are the structural properties of the species formed at high temperatures. While GADD45γ retains a significant level of secondary structure at high temperature and is able to significantly regain the original folding upon cooling, GADD45α and GADD45β, when heated, form aggregated species enriched in β-structure, despite the remarkable content of α-helix in their native structures. Moreover, the spectroscopic characterization of these aggregates clearly indicates that they possess amyloid-like features, as they bind the dye ThT and present the characteristic intrinsic UV/blue fluorescence emission [22–24]. As found for many amyloid-like species, the aggregates formed by GADD45α and GADD45β present a remarkable toxicity against SH-SY5Y and HepG2 cells. The concomitant occurrence in GADD45β of a very limited thermal stability and of a remarkable propensity to form cytotoxic amyloid-like assemblies may appear a puzzling observation. Based on these observations, we hypothesize that the toxic species formed by GADD45β at physiological temperatures might represent a protection mechanism counterbalancing the pro-survival role played by this protein in several tumor tissues [25]. Similarly, the relative stability even at high temperatures of GADD45α, which has mostly pro-apoptotic features, might reflect a form of protection against undesired growth properties acquired by some cell types. Considering the frequent tendency of polypeptide chains to undergo amyloid-like aggregation, the folding of proteins is no longer seen as a mere means to provide the correct spatial arrangement of residues for function and interaction, but may also represent an effective way to promote or limit the formation of harmful species. In line with this concept, it was recently demonstrated, in a genomic-scale survey, that protein stability is an effective way to prevent aggregation [26]. Moreover, the extra stability of proteins isolated from mesophilic organisms has been linked to the presence in their sequences of amyloidogenic regions [27]. The rare propensity of GADD45β to form potentially cytotoxic species in near-physiological conditions may also be connected to some other functional features of the protein. In particular, it can be hypothesized that the observed propensity of GADD45β to establish biological partnerships and its limited expression levels may be effective ways to avoid the formation of amyloid-like toxic assemblies. Alternatively, its tendency to form aggregates might represent a signal for its clearance when it is no longer needed or when its concentration exceeds harmful levels.
Indeed, intracellular protein aggregates can be efficiently removed by a number of distinct mechanisms that include autophagy or secretion outside of the cells [28]. In conclusion, the low stability and/or the propensity to form toxic species, as revealed here through biophysical/biochemical studies, might represent crucial properties of GADD45 proteins that must be considered in interpreting their many biological functions. The present findings also represent a stimulus for further investigations aimed at detecting the potential aggregation of these proteins in vivo. Therefore, the analysis of GADD45β aggregation, which could also be modulated using inhibitors of amyloid-like aggregation [29–31], in cell model systems overexpressing the protein might represent a valuable means to evaluate the biological consequences of this process. Moreover, since most proteins assemble into amyloid-like fibrils in vitro only under extreme conditions, it has been pointed out that the study of the rare aggregation-prone species that form amyloids under physiologically relevant conditions might represent an important challenge that can provide interesting insights into protein aggregation [32]. In this scenario, further biophysical investigations on GADD45α/GADD45β aggregation may provide remarkable results also in this field. Cloning, Expression, and Purification of GADD45 Proteins The genes of human GADD45α and GADD45γ were purchased from Sigma Aldrich and cloned into the pETM-13 expression vector using the BamHI and XhoI enzymes. Human GADD45β (hereafter only GADD45β) was cloned into a pET-28a(+) vector as previously reported. GADD45α, GADD45β, and GADD45γ were expressed using E. coli BL21(DE3) cells, which were grown at 37 °C until an OD of 0.6–0.8. Protein expression was induced by the addition of 0.5 mM isopropyl-β-D-thiogalactoside for 16 h at 22 °C. The cultures were harvested by centrifugation for 15 min at 4 °C and 6000 rpm. The pellets were re-suspended in lysis buffer (50 mM Tris-HCl pH 8, 500 mM NaCl, 5 mM DTT, 5% glycerol) and sonicated for 20 min (Misonix Sonicator 3000, Misonix Inc., NY, USA). Supernatants were harvested by centrifugation for 30 min at 4 °C at 16,500 rpm. Soluble fractions were loaded on a Ni²⁺-NTA resin (Qiagen, Milano, Italy) previously equilibrated with lysis buffer. Proteins were eluted by increasing the imidazole concentration. GADD45α, GADD45β, and GADD45γ were loaded onto a Superdex 200 16/60 column (GE Healthcare, Chicago, IL, USA) connected to an ÄKTA Purifier system (Fast Protein Liquid Chromatography, GE Healthcare, Chicago, IL, USA). The column was equilibrated in a buffer containing 20 mM Tris-HCl pH 7.5, 100 mM NaCl, 5 mM DTT, 5% glycerol. The yield and purity of the proteins were assessed by 15% SDS-PAGE analysis. Concentration was determined by the Bradford assay using BSA as reference. The recombinant kinase domain of human MKK7 (hereafter MKK7_KD) was obtained as a His-fusion protein as previously reported [7,18,33]. Light Scattering (LS) and Dynamic Light Scattering (DLS) Light scattering measurements were performed using both size exclusion chromatography coupled with light scattering (SEC-LS) and dynamic light scattering (DLS). SEC-LS was conducted using a semi-preparative SEC column (Superdex S200 10/30, GE Healthcare, Chicago, IL, USA) coupled to a light scattering detector (miniDAWN TREOS, Wyatt Technology Corporation, CA, USA) and to a differential refractive index detector (Shodex RI-101, Wyatt Technology Corporation, CA, USA).
The purification was conducted by loading 0.5–1.0 mg of homogeneous sample on the column equilibrated in 20 mM Tris-HCl pH 7.5, 100 mM NaCl, 5 mM DTT, 5% (v/v) glycerol as running buffer, at a flow rate of 0.5 mL/min. Data were recorded and analyzed using the Astra software (version 5.3.4, Wyatt Technology Corporation, CA, USA). Molecular size dispersion as a function of the hydrodynamic radius (Rh) and the aggregation status were determined by DLS. These experiments were performed at 25 °C using a Zetasizer Nano ZS (Malvern Instruments, Westborough, MA, USA) equipped with a 173° backscatter detector. Measurements were performed in triplicate using recombinant proteins at a concentration of 10 µM before and after thermal denaturation. Data were analyzed using the OMNISIZE software (Viscotek, Malvern Instruments, Westborough, MA, USA). CD Spectroscopy Far-UV circular dichroism (CD) spectra of the proteins were recorded on a J-810 spectropolarimeter equipped with a Peltier temperature control system (Model PTC-423-S, Jasco Europe, Cremella, Italy). In all experiments, the protein concentration was 10 µM. Measurements were carried out in the 198–260 nm range at 20 °C using a 0.1 cm optical path length cell with the following parameters: 4 s time constant, 2 nm bandwidth, and a scan rate of 20 nm min⁻¹. Three acquisitions for each spectrum were accumulated and averaged. Thermal denaturation curves were recorded at 222 nm between 20 °C and 100 °C, with a scan rate of 1 °C/min, using tightly closed quartz cuvettes to prevent solvent evaporation. All data were expressed as molar ellipticity per residue (θ). Bio-Layer Interferometry (BLI) Bio-Layer Interferometry (BLI) was used to quantify the affinity and the selectivity of binding of the GADD45 proteins to MKK7_KD. The BLI experiments were carried out using an Octet® RED96 System (AlfaTest) equipped with an array of eight independent probing BLI needles. Briefly, GADD45 proteins were dialyzed in 0.01 M HEPES pH 7.4, 0.15 M NaCl, 3 mM EDTA, 0.005% v/v Surfactant P20, 2 mM DTT (running buffer) and appropriately biotinylated using the EZ-Link Sulfo-NHS-LC-Biotinylation Kit (Thermo Fisher, Waltham, MA, USA) following the manufacturer's instructions. Subsequently, the biotinylated proteins were efficiently immobilized on super streptavidin chips (SSA biosensors, ForteBio Pall), achieving a similar immobilization level on all sensors (2.5 nm). Dose-response assays were simultaneously carried out using MKK7_KD solutions at concentrations ranging between 9 and 300 nM, appropriately diluted in the running buffer. Data were analyzed using the Octet Data Analysis software and fitted with a 1:1 binding model. Thioflavin T (ThT) Assay Thioflavin T (ThT) fluorescence assays were carried out using a Varian Cary Eclipse spectrofluorometer (Varian, Palo Alto, CA, USA). The protein samples were placed in a quartz cell of 10 mm path length at 10 µM concentration. Excitation was at 440 nm, and emission was recorded over the 450–600 nm interval. MTT Assays Human SH-SY5Y neuroblastoma cells and human HepG2 hepatocellular carcinoma cells (American Type Culture Collection, Manassas, VA, USA) were grown at 37 °C in a humidified atmosphere with 5% CO₂ in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum, 1% glutamine, 100 U/mL penicillin, and 100 µg/mL streptomycin (EuroClone, Italy).
For the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assays, 10⁴ SH-SY5Y and HepG2 cells were seeded in 96-well plates. Pre-heated GADD45α, GADD45β, and GADD45γ were added at 37 °C to the cells at different concentrations (1.25, 2.5, 5.0, and 10 µM) for either 24 or 48 h. Cell viability was assessed by the MTT reduction assay [34]. In brief, SH-SY5Y and HepG2 cells were washed with PBS and incubated at 37 °C for 4 h with a 0.5 mg/mL solution of MTT dissolved in DMEM. Subsequently, the cells were lysed with an isopropanol solution containing 10% (v/v) Triton X-100 and 8% (v/v) of a 1 M HCl solution. The absorbance of blue formazan was determined at 570 nm. Cell viability was expressed as the percentage of MTT reduction in treated cells compared to that observed in untreated cells.
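The viability calculation described above reduces to normalizing the A570 of treated wells by the mean of untreated controls. The sketch below illustrates this; all absorbance values are hypothetical.

```python
# Sketch of the MTT viability calculation: A570 of treated wells as a
# percentage of untreated controls. Absorbance values are hypothetical.
import numpy as np

a570_control = np.array([0.82, 0.79, 0.85])   # untreated wells
a570_treated = {                              # dose (uM) -> replicate wells
    1.25: np.array([0.78, 0.75, 0.80]),
    2.5:  np.array([0.66, 0.63, 0.69]),
    5.0:  np.array([0.48, 0.51, 0.46]),
    10.0: np.array([0.30, 0.28, 0.33]),
}

baseline = a570_control.mean()
for dose, vals in a570_treated.items():
    viability = 100.0 * vals.mean() / baseline
    print(f"{dose:5.2f} uM: viability = {viability:5.1f}% of control")
```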
Maximum product of spacings prediction of future record values A spacings-based prediction method for future upper record values is proposed as an alternative to maximum likelihood prediction. For an underlying family of distributions with continuous cumulative distribution functions, the general form of the predictor as a function of the estimator of the distributional parameters is established. A connection between this method and the maximum observed likelihood prediction procedure is shown. The maximum product of spacings predictor turns out to be useful for predicting the next record value, in contrast to likelihood-based procedures, which provide trivial predictors in this particular case. Moreover, examples are given for the exponential and the Pareto distributions, and a real data set is analyzed. Introduction A general procedure for estimating parameters, termed maximum product of spacings estimation, was proposed independently by Cheng and Amin (1983) and Ranneby (1984) as an alternative to maximum likelihood estimation in particular situations of continuous, univariate distributions. In the sequel, the estimation method was applied, extended and further theoretically studied in, e.g., Ekström (1998, 2008), Shao and Hahn (1999) and Anatolyev and Kosenok (2005). Here, we adopt this method for the prediction of future record values. Suppose that $X_1, X_2, \ldots$ is an infinite sequence of independent and identically distributed (i.i.d.) continuous random variables with cumulative distribution function (cdf) $F$. An observation $X_j$ is called an (upper) record value provided it is greater than all previously observed values. More specifically, defining the record times as $L(1) = 1$, $L(n+1) = \min\{\, j > L(n) \mid X_j > X_{L(n)} \,\}$, $n \in \mathbb{N}$, the sequence $(R_n)_{n \in \mathbb{N}} = (X_{L(n)})_{n \in \mathbb{N}}$ is referred to as the sequence of (upper) record values based on $(X_n)_{n \in \mathbb{N}}$ [see Arnold et al. (1998); Nevzorov (2001)]. The study of record values dates back to Chandler (1952), providing a natural model for the sequence of successive extremes in an i.i.d. sequence of random variables. The structure of record values also appears in the context of minimal repair of a system, and there is a close connection to occurrence times of a non-homogeneous Poisson process (NHPP); namely, under mild conditions, the epoch times of an NHPP and upper record values are equal in distribution [see Gupta and Kirmani (1988)].
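The record times and record values defined above can be extracted from a data sequence in a single pass. The following Python sketch implements exactly that definition; the short input sequence is illustrative.

```python
# Sketch: extracting (upper) record values and record times from a sequence,
# following L(1) = 1, L(n+1) = min{ j > L(n) : X_j > X_{L(n)} }.
def upper_records(xs):
    records, times = [], []
    current_max = float("-inf")
    for j, x in enumerate(xs, start=1):
        if x > current_max:       # strictly larger than all previous values
            records.append(x)
            times.append(j)
            current_max = x
    return records, times

# Illustrative sequence; the record water levels in the real data example
# later in the paper arise from weekly maxima in exactly this way.
vals, times = upper_records([713, 695, 781, 702, 880, 850, 885, 901])
print(vals, times)   # [713, 781, 880, 885, 901] [1, 3, 5, 7, 8]
```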
We are concerned with the problem of predicting the occurrence of a future record value $R_s$ based on the first $r$, $r < s$, (observed) record values $R = (R_1, \ldots, R_r)$. This prediction problem has been studied by several authors; here, we focus on non-Bayesian prediction. In the one-sample case, Raqab (2007) derived the best linear unbiased predictor, the best linear equivariant predictor, the maximum likelihood predictor as well as the conditional median predictor of the $s$-th record value $R_s$ from a Type-II left censored sample with a two-parameter exponential distribution. His findings supplement and generalize results of Ahsanullah (1980), Basak and Balakrishnan (2003) and Nagaraja (1986, Sect. 4). Awad and Raqab (2000) provide a comparative study of several predictors of the $s$-th record value $R_s$ based on the first $r$ observed record values from a one-parameter exponential distribution. Linear unbiased prediction of future Pareto record values is discussed in Paul and Thomas (2016), and maximum likelihood prediction of future Pareto record values is studied in Raqab (2007). Since the model of record values is contained in the generalized order statistics model [see Kamps (1995, 2016)], all results pertaining to prediction of future generalized order statistics can be specialized to solve the prediction problem for record values [see, e.g., Burkschat (2009)]. Bayesian prediction methods for future record values were first discussed by Dunsmore (1983) and have subsequently been applied to various distribution families [cf. Madi and Raqab (2004); Ahmadi and Doostparast (2006); Nadar and Kızılaslan (2015)]. It should be noted that, under exponential as well as under Pareto distributions, maximum likelihood prediction of the subsequent record value $R_{r+1}$ becomes trivial, since the respective predictor is given by $R_r$, i.e., the predictor coincides with the last observed record value in the model. However, by construction, record values are strictly ordered; thus, the maximum likelihood predictor of the $(r+1)$-th record value based on the first $r$ record values yields a useless prediction in practical situations, e.g., when aiming at predicting the next record claim in an insurance company. In the following, a prediction principle, referred to as maximum product of spacings prediction, will be introduced and studied to overcome this shortcoming [see also Volovskiy (2018) for further details]. A similar approach has recently been mentioned by Raqab et al. (2019) for records from a Weibull distribution. Volovskiy and Kamps (2020) [see also Volovskiy (2018)] have introduced a new general likelihood-based prediction procedure, the so-called maximum observed likelihood prediction method, and applied it to predict future record values; although such a predictor may outperform the respective maximum likelihood predictor in terms of both criteria, mean squared error and Pitman closeness, it may also share the same drawback when predicting the very next record value. For the proposed prediction procedure, which maximizes the geometric mean of spacings of suitably transformed and normalized record values data, a general representation of the predictor as a function of an estimator of the underlying distributional parameters is established. Furthermore, its relation to the maximum observed likelihood predictor is demonstrated via a heuristic approximation argument. It is pointed out that the spacings-based method retains the desirable properties of the likelihood-based procedure while at the same time avoiding its deficiency of not being able to produce a useful prediction for the next record value. The prediction procedure is illustrated by deriving predictors of future exponential and Pareto record values. A real data example is shown under the assumption of an underlying Pareto distribution. Maximum product of spacings prediction procedure The prediction procedure we are about to present derives its motivation from the maximum product of spacings estimation method introduced independently by Cheng and Amin (1983) and Ranneby (1984) as an alternative to maximum likelihood estimation. The heuristics underlying the maximum product of spacings estimation method are as follows. Let $\mathcal{F} = \{F_\theta \mid \theta \in \Theta\}$, $\Theta \subseteq \mathbb{R}^d$, be a parameterized family of continuous cumulative distribution functions on $\mathbb{R}$ with Lebesgue density functions $\{f_\theta \mid \theta \in \Theta\}$. Furthermore, let $X_1, \ldots, X_n$ be i.i.d. random variables with cdf $F_{\theta_0} \in \mathcal{F}$, where the parameter vector $\theta_0 \in \Theta$ is unknown.
Now, observe that the spacings
$$F_{\theta_0}(X_{i:n}) - F_{\theta_0}(X_{i-1:n}), \quad i = 1, \ldots, n+1, \qquad (1)$$
with $F_{\theta_0}(X_{0:n}) := 0$ and $F_{\theta_0}(X_{n+1:n}) := 1$, are distributed as spacings of an ordered sample $U_{1:n}, \ldots, U_{n:n}$ of size $n$ from the standard uniform distribution [see David and Nagaraja (2003)]. Since, in expectation, the sample $U_{1:n}, \ldots, U_{n:n}$ induces an equidistant partition of the unit interval, obtaining an estimate for $\theta_0$ by tuning the parameter vector such that the spacings (1) become as equal as possible seems a plausible way to go. The maximum product of spacings estimation procedure achieves this by maximizing the geometric mean of the spacings, i.e., the function
$$\theta \mapsto \left( \prod_{i=1}^{n+1} \left[ F_\theta(X_{i:n}) - F_\theta(X_{i-1:n}) \right] \right)^{1/(n+1)}$$
with respect to $\theta \in \Theta$, where $F_\theta(X_{0:n}) := 0$ and $F_\theta(X_{n+1:n}) := 1$. For further details on this estimation method, we refer the reader to the respective articles referred to in the introduction.
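For illustration, the sketch below implements maximum product of spacings estimation for an i.i.d. sample by numerically maximizing the logarithm of the geometric mean of the spacings; the exponential scale family, sample size, and optimization bounds are stand-in assumptions chosen for the demonstration, not part of the paper.

```python
# Sketch: maximum product of spacings (MPS) estimation for an i.i.d. sample,
# maximizing the log geometric mean of D_i = F_theta(x_(i)) - F_theta(x_(i-1)),
# with F_theta(x_(0)) := 0 and F_theta(x_(n+1)) := 1. Exponential(scale) family
# used as a concrete stand-in.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_mps(sigma, x_sorted):
    u = 1.0 - np.exp(-x_sorted / sigma)                 # F_theta at order statistics
    spacings = np.diff(np.concatenate(([0.0], u, [1.0])))
    if np.any(spacings <= 0.0):
        return np.inf
    return -np.mean(np.log(spacings))                   # minus log geometric mean

rng = np.random.default_rng(2)
x = np.sort(rng.exponential(scale=3.0, size=50))
res = minimize_scalar(neg_log_mps, bounds=(0.1, 20.0), args=(x,),
                      method="bounded")
print(f"MPS estimate of the scale parameter: {res.x:.2f}")
```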
In order to apply the above reasoning to the problem of predicting future record values, several adjustments to the procedure in the estimation set-up will be necessary, primarily due to the non-i.i.d. structure of the data at hand as well as the structure of the inferential task. In what follows, $\mathrm{Exp}(1)$ denotes the standard exponential distribution. Let $(R_n)_{n=1}^{\infty}$ be the sequence of record values in a sequence of i.i.d. random variables with continuous cdf $F_{\theta_0} \in \mathcal{F}$; we are primarily concerned with the problem of predicting $R_s$ based on $R = (R_1, \ldots, R_r)$. A key tool is the distributional identity
$$\bigl(H_{\theta_0}(R_n)\bigr)_{n \in \mathbb{N}} \stackrel{d}{=} \bigl(R^{*}_n\bigr)_{n \in \mathbb{N}}, \qquad (2)$$
where $H_\theta := -\ln(1 - F_\theta)$ denotes the cumulative hazard function, $(R^{*}_n)_{n=1}^{\infty}$ is the sequence of record values in an i.i.d. sequence of standard exponential random variables, and, for $n \in \mathbb{N}$, $R^{*}_n$ is distributed as the sum of $n$ i.i.d. standard exponential random variables [see Arnold et al. (1998, p. 9)]. Apart from this fact, the following result, a proof of which can be found in Nevzorov (2001, pp. 12–13), will prove crucial for the following discussion. Lemma 1 For $s \geq 2$, the vector $\bigl(R^{*}_1/R^{*}_s, \ldots, R^{*}_{s-1}/R^{*}_s\bigr)$ is distributed as the vector of order statistics $\bigl(U_{1:(s-1)}, \ldots, U_{(s-1):(s-1)}\bigr)$ of a sample of size $s-1$ from the standard uniform distribution. Combining the distributional identity (2) and Lemma 1, we conclude that
$$\left( \frac{H_{\theta_0}(R_1)}{H_{\theta_0}(R_s)}, \ldots, \frac{H_{\theta_0}(R_{s-1})}{H_{\theta_0}(R_s)} \right) \stackrel{d}{=} \bigl(U_{1:(s-1)}, \ldots, U_{(s-1):(s-1)}\bigr), \qquad (3)$$
where $H_{\theta_0}(R_0)/H_{\theta_0}(R_s) = U_{0:(s-1)} := 0$ and $U_{s:(s-1)} := 1$. In light of the discussion of the maximum product of spacings estimation method, the distributional identity (3) motivates the following definition. For a distribution with cdf $F$, let $\alpha(F)$ and $\omega(F)$ denote, respectively, the left and right endpoints of the support of the distribution. In what follows, for $n \in \mathbb{N}$, we define $Z_n$ as the collection of all admissible combinations of the parameter vector $\theta$ and the record values sample $x_1, \ldots, x_n$ of size $n$, with $x_0 = -\infty$, and where we use the notational convention that, for an interval $I \subseteq \mathbb{R}$ and $n \in \mathbb{N}$, $I^n_{<} = \{(x_1, \ldots, x_n) \in I^n \mid x_1 < x_2 < \cdots < x_n\}$. In what follows, for a subset $B \subseteq \mathbb{R}^n$, $\mathcal{B}^n|B$ will denote the restriction of the Borel $\sigma$-algebra $\mathcal{B}^n$ on $B$. Definition 1 If a measurable mapping $\nu = (\nu_1, \ldots, \nu_{s-r})$, together with an estimator of $\theta$, maximizes the geometric mean of the spacings of the transformed and normalized data in the sense of (3) jointly over the parameter and the future record values (condition (5)), then $\nu_{s-r}(R)$ is called a maximum product of spacings predictor (MPSP) of $R_s$ based on $R$. Any such predictor will be denoted by $\pi^{(s)}_{MPSP}$. Next, we establish the general form of the MPSP as a function of the underlying estimator of the parameter vector. It turns out that the estimator is obtained by maximizing the function $\theta \mapsto P_r(\theta, x)$. In what follows, the quantile function of a cdf $F$ will be denoted by $F^{-1}$. Theorem 1 If an estimator $\hat{\theta}$ exists with the property that, for any fixed $\theta \in \Theta$, $P_r(\hat{\theta}(x), x) \geq P_r(\theta, x)$ holds (condition (6)), with $P_r$ as defined in (4), then a maximum product of spacings predictor of $R_s$ based on $R$ is given in closed form (formula (7)). Proof For any fixed $x = (x_1, \ldots, x_r) \in \mathbb{R}^r_{<}$, the objective function factorizes, up to a suitable constant $c$ depending only on $r$ and $s$, into a factor depending on $(\theta, x)$ only and a factor depending on the future record values; here we use that, for fixed $\theta$, $x_r$ and $x_s$, the intermediate values can be chosen optimally, with $\nu_0 = -\infty$. Now, using the well-known expression for the mode of the probability density function of a beta distribution with parameters $s-r+1$ and $r+1$, as well as the continuity of $F_\theta$ for all $\theta \in \Theta$, we obtain that, for $\theta \in \Theta$, the relevant function $l_\theta$ possesses at least one maximizing point, and any of these can be obtained as a solution of the corresponding first-order equation with respect to $x_s \in (x_r, \omega(F_\theta))$. A particular solution, say $x_s(\theta, x)$, of this equation is available in closed form; moreover, $l_\theta(x_s(\theta, x))$ is independent of $\theta$. Thus, combining Eqs. (9) and (10) as well as using property (6) of $\hat{\theta}$ and the equality (8), we conclude that $\hat{\theta}$ and the function $\nu$ defined by (11) satisfy (5). Hence, by Definition 1, the $(s-r)$-th coordinate function of $\nu$ composed with $R$ yields the MPSP. Remark 1 (i) The maximum product of spacings prediction procedure produces predictions consistently in the following sense: in determining a prediction value $\nu_{s-r}(x)$ for $R_s$ based on a sample $x$ of $R_1, \ldots, R_r$, by the definition of the prediction procedure, one is also required to produce values $\nu_1(x), \ldots, \nu_{s-r-1}(x)$ such that (5) is satisfied for $\nu(x) = (\nu_1(x), \ldots, \nu_{s-r}(x))$. It is then tempting to take $\nu_1(x), \ldots, \nu_{s-r-1}(x)$ as prediction values for $R_{r+1}, \ldots, R_{s-1}$ and ask how these prediction values relate to those one would obtain by computing prediction values according to Definition 1. Since the values $\nu_1(x), \ldots, \nu_{s-r}(x)$ are available in closed form via formula (11), it is obvious that, for $\tilde{s}$ such that $r < \tilde{s} < s$, $\pi^{(\tilde{s})}_{MPSP}(x) = \nu_{\tilde{s}-r}(x)$, i.e., taking $\nu_{\tilde{s}-r}(x)$ as a prediction value for $R_{\tilde{s}}$ amounts to predicting $R_{\tilde{s}}$ via Definition 1. (ii) When predicting the very next record ($s = r+1$), the MPSP does not become trivial in general, i.e., $\pi^{(r+1)}_{MPSP}$ will exceed $R_r$. (iii) An analogous construction is possible for $k$-th record values [see Dziubdziela and Kopociński (1976)]. Relation to maximum observed likelihood prediction Recently, Volovskiy and Kamps (2020) introduced the so-called maximum observed likelihood prediction procedure (MOLP) and used it to derive predictors for future record values. More specifically, the MOLP derives a predictor of a random variable $Y$ based on a possibly vector-valued random variable $X$ with joint pdf $f^{X,Y}_\theta$ by maximizing the observed predictive likelihood function $L_{obs}$ with respect to $\theta$ and $y$. In the case of predicting $R_s$ based on $R = (R_1, \ldots, R_r)$, the maximum observed likelihood predictor takes on the form given in (12) [see Volovskiy and Kamps (2020, Theorem 3.3), Volovskiy (2018, Theorem 5.3)], which is quite similar to the form of $\pi^{(s)}_{MPSP}$ in (7), although the procedures used to derive these predictors seem to be totally different. Here, the case $s = r+1$ does not lead to a useful predictor in general. In particular situations, the MOLP was shown to outperform a respective maximum likelihood predictor in terms of mean squared error and Pitman closeness. In (12), the function $\hat{\theta}$ is obtained by maximizing a function $\Psi$. Assuming that the cdfs $F_\theta$, $\theta \in \Theta$, have a common finite left endpoint of the support, say $x_0 = \alpha(F_\theta)$, and using the first-order approximation $-\ln(1-u) \approx u$, the objective functions used to estimate the distributional parameters in the maximum observed likelihood and the maximum product of spacings prediction methods, respectively, are seen to be approximately proportional to each other.
This, as well as the fact that, for large $s$, $(s-1)/r \approx s/r$, implies that the two predictors approximately coincide. Note that the above rather heuristic analysis does not imply any statement about the quality of this approximation. A comparison of the functional forms of the predictors reveals that, while the MOLP yields the last observed value as prediction value for the next observation and, hence, cannot be considered a sensible prediction method in this particular setting, the maximum product of spacings method produces a prediction value different from the last observation. At the same time, both prediction procedures share the desirable properties of allowing to derive the general form of the predictor [see Theorem 1 and (12)] as well as the simplicity of deriving the predictors for specific distribution families, as is illustrated by the examples in the following section. Examples The MPSP approach is illustrated for exponential and Pareto distributions. Exponential distribution Assume that $(R_n)_{n \in \mathbb{N}}$ is the sequence of record values in a sequence of i.i.d. two-parameter exponential random variables. The density, cumulative distribution and quantile functions of the exponential distribution $\mathrm{Exp}(\mu, \sigma)$ with location parameter $\mu \in \mathbb{R}$ and scale parameter $\sigma \in \mathbb{R}_+$ are given by
$$f_\theta(x) = \tfrac{1}{\sigma} e^{-(x-\mu)/\sigma}, \quad F_\theta(x) = 1 - e^{-(x-\mu)/\sigma}, \quad x \geq \mu, \qquad F_\theta^{-1}(u) = \mu - \sigma \ln(1-u), \quad u \in (0,1),$$
where $\theta = (\mu, \sigma) \in \mathbb{R} \times \mathbb{R}_+$. As far as likelihood-based prediction of future record values is concerned, the MLP of $R_s$ based on $R = (R_1, \ldots, R_r)$, $r < s$, was derived by Gupta and Kirmani (1989) [see also Basak and Balakrishnan (2003)], and the MOLP of $R_s$ based on $R$ was computed by Volovskiy and Kamps (2020) [see also Volovskiy (2018)]. Note that both the MLP and the MOLP yield the prediction $R_r$ for $R_s$ if $s = r+1$ and, hence, cannot be considered reasonable prediction methods in this particular situation. In view of Theorem 1, in order to determine an MPSP of $R_s$ based on $R$, it suffices, for any $x \in \mathbb{R}^r_{<}$, to solve the maximization problem $\max_{\theta \in \Theta} P_r(\theta, x)$; the maximization has to effectively be performed with respect to the location parameter $\mu$ only. Because the function $f(x) = x/(x+c)^r$, $x \in [0, \infty)$, where $c$ is some positive constant, possesses a unique maximum point, given by $x = c/(r-1)$, setting $\hat{\mu}$ accordingly and $\hat{\theta} = (\hat{\mu}, \hat{\sigma})$, where $\hat{\sigma}: \mathbb{R}^r_{<} \to \mathbb{R}_+$ is some arbitrary function, we conclude that $\hat{\theta}$ satisfies (6). Consequently, the unique MPSP of $R_s$ based on $R$ is obtained in closed form from (7); it turns out that in this particular setting the MPSP coincides with the BLUP [see Ahsanullah (1980)]. If the location parameter is known, the function $P_r$ is independent of the distributional parameters, which considerably simplifies the derivation of the MPSP; in this set-up, the resulting MPSP, by the results of Basak and Balakrishnan (2003), is again seen to coincide with the BLUP.
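The maximization fact invoked in the exponential example is easy to check numerically. The sketch below verifies that $f(x) = x/(x+c)^r$ is maximized at $c/(r-1)$ for illustrative values of $c$ and $r$.

```python
# Numerical check: f(x) = x / (x + c)^r on [0, inf) is maximized at c/(r-1).
from scipy.optimize import minimize_scalar

c, r = 2.0, 5
f = lambda x: -(x / (x + c) ** r)   # negate for minimization
res = minimize_scalar(f, bounds=(0.0, 100.0), method="bounded")
print(res.x, c / (r - 1))           # both approximately 0.5
```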
Pareto distribution We assume that $(R_n)_{n \in \mathbb{N}}$ is the sequence of record values in a sequence of i.i.d. Pareto random variables. The density, cumulative distribution and quantile functions of the Pareto distribution $\mathrm{Pareto}(\alpha, \beta)$ with scale parameter $\alpha \in \mathbb{R}_+$ and shape parameter $\beta \in \mathbb{R}_+$ are given by
$$f_\theta(x) = \beta \alpha^{\beta} x^{-(\beta+1)}, \quad F_\theta(x) = 1 - \left(\tfrac{\alpha}{x}\right)^{\beta}, \quad x \geq \alpha, \qquad F_\theta^{-1}(u) = \alpha (1-u)^{-1/\beta}, \quad u \in (0,1),$$
where $\theta = (\alpha, \beta) \in \mathbb{R}^2_+$. Maximum likelihood and maximum observed likelihood prediction of $R_s$ based on $R = (R_1, \ldots, R_r)$, $2 \leq r < s$, were discussed in Volovskiy (2018); again, from the expressions of the MLP and the MOLP, it is evident that both likelihood-based prediction methods produce $R_r$ as predictor for $R_s$ if $s = r+1$. Next, we determine the MPSP of $R_s$ based on $R$. The function $P_r$ takes on a form whose maximization reduces to the following observation: for a positive constant $c$, the function $f(x) = -\ln(x)/(-\ln(cx))^r$, $x \in (0,1)$, possesses the unique maximum point $x = c^{1/(r-1)}$. Hence, setting $\hat{\alpha}$ accordingly and choosing an arbitrary function $\hat{\beta}: (0,\infty)^r_{<} \to \mathbb{R}_+$, we obtain that $\hat{\theta} = (\hat{\alpha}, \hat{\beta})$ satisfies (6). Thus, by Theorem 1, the unique MPSP of $R_s$ based on $R$ is obtained in closed form from (7). In view of the fact that the Pareto distribution often allows for adequate modeling of quantities spanning many orders of magnitude, it seems natural to evaluate the MPSP on a logarithmic scale. Real data example In this section, we illustrate the practical applicability of the proposed prediction procedure on a dataset of water level measurements. Extreme water levels may have a major environmental impact and, due to potential flood situations, pose a serious threat to the human population. For our analysis, we consider data collected by the German Federal Office of Hydrology (FOH) in its role as a scientific advisor to the Federal Waterways and Shipping Administration, publicly available at https://www.pegelonline.wsv.de/gast/start. For measurement data older than 30 days, one has to contact the FOH directly (www.bafg.de). The data set contains hourly measurements (in cm) of the water level for the time period from January 1918 to February 2019, collected at the measurement site Cuxhaven-Steubenhöft located at the river Elbe. In order to approximately meet the i.i.d. assumption in our record model, we calculated the weekly maximum water levels based on the hourly data, which then served as the basis for prediction. In addition, we retained only those measurements exceeding 690 cm. We show in Fig. 1 the histogram of the full dataset of weekly maximum water levels as well as the histogram of the weekly maximum water levels above 690 cm. To assess the distributional properties of the dataset of water levels above 690 cm, we use a Pareto Q-Q plot (see Fig. 2). Apart from the last few points, the Pareto Q-Q plot is more or less linear, indicating a reasonable fit, at least for the purpose of this illustrative example, of the Pareto distribution to the tail of the weekly maximum water levels. The maximum likelihood estimate of the shape parameter is $\hat{\beta} = 16.9$. In Fig. 1b, the Pareto density function with parameter $\hat{\beta}$ is plotted. The sequence of record values extracted from the dataset of weekly maximum water levels exceeding 690 cm is given by 713, 781, 880, 885, 901, 914, 915, 993, 1010. We applied the maximum product of spacings prediction procedure for Pareto record values (see Sect. 4.2) to predict the subsequent record water level $R_{r+1}$ based on the preceding $r$ observed record water levels, successively increasing the sample size $r$ from 2 up to 9. The results are reported in Table 1. From the results, we observe that the MPSP is able to capture the magnitude of the observed record water levels, and this even more so the larger the sample size.
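The Pareto Q-Q plot used above exploits the fact that the logarithm of Pareto data is (shifted) exponential, so log order statistics plotted against standard exponential quantiles should be roughly linear with slope $1/\beta$. The sketch below produces such a plot from synthetic data; the generated sample, seed, and threshold are placeholders standing in for the water level data, which are not reproduced here.

```python
# Sketch of a Pareto Q-Q plot: log order statistics vs. standard exponential
# quantiles -log(1 - p_i). Synthetic Pareto(alpha=690, beta=16.9) data as a
# stand-in for the thresholded weekly maxima.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
# numpy's pareto() draws Lomax variates; 1 + Lomax scaled by alpha is Pareto I
sample = np.sort(690.0 * (1.0 + rng.pareto(16.9, size=500)))

n = sample.size
p = (np.arange(1, n + 1) - 0.5) / n
exp_quantiles = -np.log(1.0 - p)

plt.scatter(exp_quantiles, np.log(sample), s=8)
plt.xlabel("standard exponential quantiles")
plt.ylabel("log weekly maximum water level")
plt.title("Pareto Q-Q plot (synthetic data)")
plt.show()
```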
5,252.8
2020-02-06T00:00:00.000
[ "Mathematics" ]
Evaluation of Biological Response of STRO-1/c-Kit Enriched Human Dental Pulp Stem Cells to Titanium Surfaces Treated with Two Different Cleaning Systems

Peri-implantitis, an infection caused by bacterial biofilm deposition, is a common complication in dentistry which may lead to implant loss. Several decontamination procedures have been investigated to identify the optimal approach, one capable of removing the bacterial biofilm without modifying the implant surface properties. Our study evaluated whether two different systems, Ni-Ti brushes (Brush) and air-polishing with 40 µm bicarbonate powder (Bic40), might alter the physical/chemical features of two different titanium surfaces, machined (MCH) and Ca2+-nanostructured (NCA), and whether these decontamination systems may affect the biological properties of human STRO-1+/c-Kit+ dental pulp stem cells (hDPSCs) as well as the bacterial ability to produce biofilm. Cell morphology, proliferation and stemness markers were analysed in hDPSCs grown on both surfaces, before and after the decontamination treatments. Our findings highlighted that Bic40 treatment both maintained the surface characteristics of the two implant surfaces and allowed hDPSCs to proliferate and preserve their stemness properties. Moreover, Bic40 treatment proved effective in removing bacterial biofilm from both titanium surfaces and consistently limited biofilm re-growth. In conclusion, our data suggest that Bic40 treatment can effectively clean smooth and rough surfaces without altering their properties and, consequently, offer favourable conditions for reparative cells to retain their biological properties.

Introduction

The use of dental implants in daily clinical practice is currently widespread and growing rapidly: modern implant therapy not only offers a biological and functional advantage over fixed or removable prosthetic solutions for many patients, but also achieves excellent long-term results, as confirmed by previous studies reporting survival rates of 95.7% and 92.8% after 5 and 10 years, respectively.

Titanium Surface Characterization

Scanning electron microscopy (SEM) analysis carried out on MCH surfaces from each experimental group is shown in Figure 1A. At lower magnifications, the MCH control surface displayed concentric irregularities, in line with our previous findings [15]. The polishing treatment with the Ni-Ti brush produced grooves oriented in all directions across the entire surface of the disks, visible at both lower and higher magnifications. On the MCH titanium surfaces treated with Bic40, slight alterations of the whole surface were observed; in particular, newly formed irregularities were detected across the entire treated disk (Figure 1A). Furthermore, atomic force microscopy (AFM) analysis was performed to evaluate the roughness of MCH disks following the different polishing treatments. In particular, as shown in Figure 1B, Ra, Rpv and Rms were determined for each experimental group. With regard to the Rpv parameter, higher values were recorded in the MCH Brush group than in control MCH and MCH Bic40, although these differences were not statistically significant (Figure 1B). At the same time, SEM analysis was carried out on NCA surfaces from the three experimental groups; data are reported in Figure 2A. In line with previous data from our group [15], control NCA surfaces were characterized by homogeneous irregularities spread over the whole analysed area.
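For readers who want to reproduce roughness figures like those reported in Figure 1B, the sketch below computes Ra, Rpv and an RMS-type parameter from a one-dimensional height profile using their conventional definitions (the Methods section describes Rms slightly differently); the profile itself is synthetic and purely illustrative.

```python
import numpy as np

def roughness_parameters(z):
    """Standard roughness descriptors from a 1-D height profile z (nm).
    Ra: mean absolute deviation from the mean line; Rpv: peak-to-valley
    distance; Rq (often reported as RMS): root-mean-square deviation."""
    z = np.asarray(z, dtype=float)
    zc = z - z.mean()                 # heights relative to the mean line
    ra = np.abs(zc).mean()            # average roughness
    rpv = z.max() - z.min()           # highest peak to lowest valley
    rq = np.sqrt((zc ** 2).mean())    # RMS roughness
    return ra, rpv, rq

# Hypothetical AFM line scan: a gently corrugated surface plus noise.
x = np.linspace(0, 10, 500)
profile = 5 * np.sin(2 * np.pi * x) + np.random.default_rng(0).normal(0, 1, x.size)
ra, rpv, rq = roughness_parameters(profile)
print(f"Ra = {ra:.2f} nm, Rpv = {rpv:.2f} nm, Rq = {rq:.2f} nm")
```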
The polishing treatment with Brush induced a notable modification of the surface: in particular, a flattening of the peaks typical of the NCA surface was observed at lower and higher magnifications. Conversely, the air-polishing treatment with Bic40 did not induce any relevant alteration of the nanotopography of the surface. These observational data were not confirmed by AFM analyses: the Ra, Rpv and Rms parameters did not differ among the three experimental groups, which is likely due to the instrumental sensitivity of AFM (Figure 2B). Taken together, the surface roughness data show that MCH and NCA surfaces treated with Bic40 did not differ significantly, in terms of nanotopography, from the MCH and NCA controls. Values represent mean ± SD of three independent experiments. No statistically significant difference was detected among the groups; one-way ANOVA followed by the Newman-Keuls post-hoc test.

Stem Cell Morphology and Proliferation on Titanium Surfaces after Polishing Treatments

The morphology of hDPSCs was evaluated by confocal microscopy, as shown in Figures 3 and 4. Cells were stained with phalloidin and DAPI. After 7 days of culture under standard conditions on MCH surfaces, hDPSCs displayed a fibroblast-like morphology, with cells arranged parallel to the surface grooves, without significant differences following the Brush and Bic40 polishing treatments. With regard to the distribution of cells over the surface area, we noticed that hDPSCs cultured on MCH Brush oriented not only along the grooves due to the industrial fabrication but also along the scratches created by the Brush cleaning. No differences were observed in hDPSCs seeded on MCH Bic40 when compared with the control group (Figure 3A). Likewise, no differences in proliferation rate were observed among the three experimental groups, as indicated by the histograms (Figure 3B). Cell proliferation on titanium disks was measured by counting cell nuclei after DAPI staining. Histograms show cell numbers after 7 days of culture on titanium surfaces from the three experimental groups. Values represent mean ± SD. No statistically significant difference was detected among the groups; one-way ANOVA followed by the Newman-Keuls post-hoc test. Scale bar: 100 µm. Figure 4 shows the morphology, distribution and proliferation of hDPSCs after 7 days of culture on NCA surfaces following Brush and Bic40 treatments. As formerly described, culture on NCA disks induced a morphology alteration in hDPSCs: cells were homogeneously spread over the whole area and showed an irregular shape with a reduced average cell area. The same features were observed in hDPSCs cultured on NCA Brush disks. Interestingly, when NCA disks were cleaned with Bic40, hDPSCs still grew homogeneously while recovering their typical fibroblast-like morphology, as reported for culture on MCH disks. This shift in morphology was also reflected in an increased proliferation rate and in the average cell area of hDPSCs cultured on NCA Bic40 with respect to NCA Ctrl and NCA Brush (* p < 0.05, Figure 4B).

Expression of Stemness Markers

Human DPSCs were immune-selected against STRO-1 and c-Kit. After 7 days of culture on MCH and NCA surfaces, immunofluorescence analysis was performed in order to evaluate the maintenance of their biological properties. The expression of STRO-1 and c-Kit, two typical mesenchymal stem cell markers, was investigated on MCH and NCA after the cleaning treatments.
As reported in Figure 5, hDPSCs cultured on MCH Ctrl showed the expression of both mesenchymal markers. These markers were still observed in hDPSCs cultured on both MCH Brush and MCH Bic40. In contrast, we noticed a reduction in STRO-1 and c-Kit expression in hDPSCs grown on NCA Ctrl and NCA Brush after 7 days of culture, whereas the expression of these markers was evident in hDPSCs cultured on NCA Bic40. Taken together with the stemness evaluation, the morphology data indicate that, after cleansing with Bic40, the NCA surface was more favourable to the growth of hDPSCs in their physiological microenvironment.

Microbial Biofilm Formation onto Titanium Disks

Based on the biological results concerning morphology, proliferation and stemness markers, we then evaluated whether the polishing treatment with Bic40 on the NCA surface affects microbial growth. Pseudomonas aeruginosa (10⁶/mL) was seeded at time 0 in two sets of wells, with or without titanium disks; the plate was then incubated at 35 °C for 24 h and microbial growth was assessed by bioluminescent analysis. As shown in Figure 6A, a superimposable trend was observed in the two groups of wells (control: no disks; sample: with disks); moreover, at 24 h, comparably high levels of total microbial load were achieved, with 2.6 × 10⁸ and 3.4 × 10⁸ CFU/mL detected in the control and sample groups, respectively. These data indicated that the presence of titanium disks did not affect microbial growth. To assess the occurrence of biofilm on such disks, the latter were exposed to bacteria for 24 h and washed twice with PBS to eliminate non-adherent microbial cells, and the residual bioluminescent signal was then evaluated as a measure of the biofilm formed. The obtained bioluminescent signal was converted into CFU and indicated that biofilm was produced at amounts as high as 1.95 × 10⁸ CFU/mL (data not shown).

Figure 6. P. aeruginosa growth and biofilm formation on titanium disks, treated or not with the Bic40 procedure, as assessed by a bioluminescent (BLI) bacterial strain. (A) P. aeruginosa growth was not affected by the presence of titanium disks. BLI-Pseudomonas (10⁶/mL) in TSB plus 2% sucrose was seeded in a black 96-well plate and incubated at 35 °C in the presence (orange) or absence (green) of titanium disks (1/well). The plates were incubated in the Fluoroskan reader and the bioluminescent signal was recorded in real time, up to 24 h. Using the calibration curve, RLUs were converted into CFU/mL ± SEM (standard error of the mean), as indicated on the Y axis. (B) Effects of the cleaning procedure on titanium disk-associated biofilm. Microbial biofilm produced on titanium disks (by 24 h incubation, as above) was exposed or not to the Bic40 cleaning procedure (treated vs. untreated group); the residual bioluminescent signal was measured and converted, by the calibration curve, into CFU/mL ± SEM, as indicated on the Y axis. (C) Real-time monitoring of P. aeruginosa re-growth on treated and untreated titanium disks. Microbial re-growth on Bic40-treated and untreated disks was evaluated in real time for an additional 24 h. The bioluminescent signal was then measured and converted into CFU, returning values of 3.6 × 10⁸/mL and 1.2 × 10⁷/mL in the untreated and treated groups, respectively.
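The CFU values reported in the Figure 6 caption translate directly into log-reduction figures; a minimal sketch of that arithmetic, using the re-growth values at 24 h, follows.

```python
import math

def log10_reduction(untreated_cfu, treated_cfu):
    """Log10 reduction and percent reduction of microbial load."""
    drop = math.log10(untreated_cfu / treated_cfu)
    percent = (1 - treated_cfu / untreated_cfu) * 100
    return drop, percent

# Re-growth at 24 h on untreated vs. Bic40-treated disks (Figure 6C).
drop, pct = log10_reduction(3.6e8, 1.2e7)
print(f"re-growth gap: {drop:.2f} log10 units ({pct:.1f}% lower)")
```

The result, about 1.5 log10 units, matches the roughly 1-log gap described in the text below.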
Microbial Biofilm Removal from Titanium Disks

To assess the efficacy of the Bic40 cleaning procedure against a preformed biofilm, 4 disks carrying a 24 h-old biofilm were treated with the Bic40 procedure, as detailed above (treated group), while 3 disks used as controls were not treated (untreated group); each disk was then transferred to a new well with fresh medium and incubated for an additional 24 h. As shown in Figure 6B, compared with the untreated group, the Bic40-treated disks displayed a drop of about 3 log units in terms of CFU/mL.

Microbial Re-Growth on Treated and Untreated Titanium Disks

As detailed in Figure 6C, the microbial load in the untreated group remained almost constant over the subsequent 24 h. In contrast, treated disks showed a delayed, time-related increase in CFU. In particular, the major differences between treated and untreated disks occurred within the first 5-6 h; microbial load on treated disks then reached a plateau that consistently remained about 1 log unit below the values observed on untreated disks up to 24 h.

Discussion

Peri-implant disease is due to the colonization of implant surfaces by pathogens that form a biofilm [25]. Bacterial adhesion and biofilm formation play a fundamental role not only in the pathogenesis of peri-implantitis but also in implant survival [12,13,26]. In a previous study [27] we observed that biofilm formed regardless of the degree of surface roughness. Therefore, the removal of the biofilm from the implant surface, whether smooth or rough, is the primary objective. It is well known that the response of cells and tissues to a biomaterial depends on the properties of the material itself, such as surface topography, chemical composition and capability to interact with body fluids [15,28]. Following bacterial contamination, the procedures used to decontaminate the implant surface can alter its topography and chemical composition. In this regard, the optimal cleansing approach is expected to effectively remove the bacterial biofilm without altering the chemical and physical properties of the implant and, consequently, the biological properties of the cells involved in the osseointegration process. In this study we analysed two different titanium surfaces, before and after treatment with two cleansing approaches, mimicking what physiologically occurs in terms of cell/implant interactions after decontamination procedures. Qualitative analysis of surface morphology revealed that, in both the MCH and NCA groups, the surfaces treated with Ni-Ti brushes were morphologically different from the untreated surfaces, in agreement with Park et al. [18]. In fact, SEM analysis showed deep grooves heterogeneously distributed over the MCH surface and flattened areas over the NCA surface. Slight although not statistically significant differences were revealed by the Ra, Rpv and Rms physical parameters. Conversely, treatment with the Bic40 air-polishing system produced no significant alterations of either the MCH or the NCA surfaces in terms of physical parameters. Subsequently, the aim of the study was to evaluate the biological features of the stem cells that embryologically participate in the osseointegration process. The use of dental pulp stem cells represents a suitable choice in terms of stemness properties and ease of isolation.
Although hDPSCs are a heterogeneous population, we used a stem cell population enriched for the expression of the stemness markers STRO-1 and c-Kit, which mark a strictly mesenchymal population able to differentiate into bone, adipose and myogenic lineages. We observed that on the MCH titanium surface, before and after both cleansing treatments, hDPSCs maintained their fibroblast-like morphology without any alteration of cell proliferation. The only difference concerned the hDPSC distribution pattern over the MCH Brush surface: cells spread not only along the fabrication grooves but also along the scratches created by the Brush cleaning. By contrast, hDPSCs grown on MCH Bic40 and MCH Ctrl surfaces did not differ. Regarding the NCA surface, a change in cell morphology was observed on NCA Ctrl and NCA Brush, in accordance with our previous findings [15]. After treatment with Bic40, hDPSCs recovered their typical morphology and also showed a statistically significant increase in proliferation rate compared with the NCA Ctrl and NCA Brush surfaces; this phenomenon might be attributed to the interaction of calcium ions incorporated in the NCA surface with bicarbonate ions from the Bic40 cleansing treatment. These observations were confirmed by the evaluation of stemness markers on MCH and NCA surfaces before and after the cleansing treatments. In particular, whereas no differences were detected in any MCH group, STRO-1 and c-Kit expression was maintained in the NCA Bic40 group. Indeed, maintenance of stemness is a primary requirement for preserving biological properties, including self-renewal, differentiation capability and immunomodulation, and consequently for avoiding cell senescence. Based on these results, Bic40 might represent the most suitable cleansing treatment. To further confirm the efficacy of Bic40, we also performed microbiology assays. Using a recently established model valuable for assessing microbial biofilm formation on medical devices [29], we showed here that BLI-Pseudomonas is able to adhere to the titanium disks and to form a substantial biofilm on their surface. Moreover, the cleaning treatment with 40 µm bicarbonate was capable of reducing the biofilm by about 99% with respect to the untreated control group (100% vs. 0.05%, respectively). In particular, the microbial load, evaluated as RLU and converted into CFU/mL, was reduced by more than 3 log units in the treated groups compared with their controls. Furthermore, microbial re-growth on treated disks remained consistently below the control values (a difference of about 1 log unit). We may hypothesize that the combination of the physical treatment (dry spray) and the hypertonic condition (sodium bicarbonate accumulation on the titanium disks) negatively impacts microbial cell viability. Furthermore, enriching air-polishing powders with antimicrobial fillers such as ciprofloxacin and/or mucosal defensive agents such as zinc L-carnosine might improve the antibacterial action of the cleaning tool and its biocompatibility towards soft tissues [30]. In conclusion, we demonstrated that Bic40 provides a suitable cleansing approach on both smooth and rough surfaces.

Human DPSCs Isolation and Immune Selection

This study was carried out in compliance with the recommendations of the Comitato Etico Provinciale-Azienda Ospedaliero-Universitaria di Modena (Modena, Italy), which approved the protocol (ref. number 3299/CE; 5 September 2017).
Human DPSCs were isolated from third molars of adult subjects (n = 3; 30-35 years) undergoing routine dental extraction. All subjects gave written informed consent in accordance with the Declaration of Helsinki. Cells were isolated from dental pulp as previously described [23]. Briefly, dental pulp was harvested from the teeth and underwent enzymatic digestion by using a digestive solution, (3 mg/mL type I collagenase plus 4 mg/mL dispase in α-MEM). Pulp was then filtered onto 100 µm Falcon Cell Strainers, in order to obtain a cell suspension. Cell suspension was then plated in 25 cm 2 culture flasks and expanded in standard culture medium [α-MEM supplemented with 10% heat inactivated foetal bovine serum (FBS), 2 mM L-glutamine, 100 U/mL penicillin, 100 µg/mL streptomycin] at 37 • C and 5% CO2. Following cell expansion, human DPSCs were immune-selected by using MACS ® separation kit according to manufacturers' instructions. The immune-selections were performed by using primary antibodies: mouse IgM anti-STRO-1 and rabbit IgG anti-c-Kit (Santa Cruz, Dallas, TX, USA). The following magnetically labelled secondary antibodies were used: anti-mouse IgM and anti-rabbit IgG (Miltenyi Biotec, Bergisch Gladbach, Germany). The immune-selection resulted in the isolation of a homogeneous hDPSCs population expressing STRO-1 and c-Kit. All the experiments were performed using STRO-1 + /c-Kit + hDPSCs. Titanium Surfaces Characterization A total of 50 titanium disks (MegaGen Co. Ltd., Daegu, South Korea) measuring 13 mm in diameter and 3 mm in thickness were used in this study. Particularly, two different titanium surfaces were used: machined (MCH) and Ca2 + incorporated (NCA). The treatment processes are hold by the manufacturer. For this study, titanium surfaces were divided into 3 different experimental groups: (1) control surfaces (Ctrl), (2) surfaces cleaned with Ni-Ti brushes (Brush) and (3) surfaces cleaned by air-polishing with NaHCO 3 40 µm (Bic40). Briefly, the "I.C.T." (Implant Cleaning Technique) nickel-titanium brushes, made up of about 40 super elastic filaments with a diameter of 0.07-0.13 mm, were used at 400 rpm and 600 rpm, respectively, for two sequential rounds of 45 s each. The total duration of each surface treatment was 90 s and a 25 g pressure calibrated on an electronic scale was used, with 100 N of torque. All the treatments were performed by the same operator under irrigation with buffer saline solution (0.9% NaCl). The "Combi-Touch" air polishing system with sodium bicarbonate particles (ø 40 µm) was used for 30 s at a distance of 5 mm. In particular, the operating principle of "Combi Touch" air-polishing system consists in the mechanical action of compressed air spreading an accelerated flow of particles onto the titanium surface. When the particles hit the surface, their kinetic energy is dissipated almost completely, thus producing a gentle although effectively cleansing action. The cleaning treatment is completed by a water jet that is arranged in the form of a bell around the main flow and that uses the pressure drop originated around the nozzle to prevent the powder cloud from bouncing and being dispelled and, at the same time, to dissolve the powder by washing the surface. After the cleansing treatments, surface morphology for each experimental group was qualitatively evaluated through Scanning Electron Microscopy (EVO MA 10-Carl Zeiss, Oberkochen, Germany) working at 25 keV. 
Moreover, surface roughness was determined by Atomic Force Microscopy (Nanoscope IIIa, Veeco, Santa Barbara, CA, USA) and Ra, Rpv and Rms parameters were obtained. Ra (Roughness average) measures the average surface roughness considering the peaks and the valley means. The Rpv (peak to valley distance) describes the maximum observed range in a sample area and it is given by the distance between the highest peak and the lowest valley on a measured surface. Rms parameter describes the density of micropores on the surface. Cell Morphology and Proliferation Undifferentiated STRO-1 + /c-Kit + hDPSCs at passage 1 were seeded at a density of 2.5 × 10 3 cell/cm 2 on titanium disks in 12-multiwell units and expanded under standard culture conditions. After 7 days of culture, cells were fixed in ice-cold paraformaldehyde 4% for 15 min without dissociating them from the titanium disks. The cells were subsequently permeabilized with 0.1% Triton X-100 in PBS for 5 min, stained with AlexaFluor546 Phalloidin (Thermo Fisher Scientific) and rinsed with PBS 1%. Nuclei were stained with 1 µg/mL 4 ,6-diamidino-2-phenylindole (DAPI) in PBS 1%. Titanium disks were mounted with DABCO anti-fading medium on cover glasses. Cell proliferation and morphology were assessed using confocal microscopy (Nikon A1), as formerly described by Bianchi et al. [24]. Cell proliferation was measured by counting the DAPI-positive nuclei on 10 randomly selected fields measuring 2.85 × 10 5 µm 2 for each disk by a blind operator. At the same time, average cell area was measured on hDPSCs from 10 randomly selected fields, measuring 2.85 × 10 5 µm 2 , on 3 disks for each experimental group. Evaluation of Stemness Markers in hDPSCs Cultured on Titanium Disks After 1 week of culture on each disk, cells were fixed in 4% ice-cold paraformaldehyde in PBS for 15 min and then processed as previously described [31]. The following primary Abs diluted 1:100 were used: mouse IgM anti-STRO-1 and rabbit IgG anti-c-Kit (Santa Cruz, Dallas, TX, USA). Secondary Abs (goat anti-mouse IgM Alexa488, goat anti-rabbit Alexa546) were diluted 1:200 (Thermo Fisher Scientific, Waltham, MA, USA). Nuclei were stained with 1 µg/mL 4 ,6-diamidino-2-phenylindole (DAPI) in PBS 1%. The multi-labelling immunofluorescence experiments were carried out avoiding cross-reactions between primary and secondary Abs. Confocal imaging was performed using a Nikon A1 confocal laser scanning microscope, as previously described [32]. The confocal serial sections were processed with ImageJ software to obtain 3-dimensional projections and image rendering was performed by Adobe Photoshop Software. Microbial Strain We used the bioluminescent Pseudomonas aeruginosa strain (P1242) (BLI-Pseudomonas) previously engineered in order to express the luciferase gene and substrate under the control of a constitutive P1 integron promoter 2 [33]; thus, live cells constitutively produce a detectable bioluminescent signal. To quantify the bioluminescence emission by BLI-Pseudomonas in the experimental groups, a calibration curve was created allowing to express such values in terms of colony forming units (CFU)/mL; in particular, serial dilutions (starting from 1 × 10 8 /mL) of a bacterial suspension in Tryptic Soy Broth (TSB) (OXOID, Milan, Italy) with 2% sucrose were prepared and a volume of 100 µL of each dilution was seeded in a black-well microtiter plate. The plate was immediately read by using a Fluoroskan Luminescence reader (Thermo Fisher Scientific, Waltham, MA, USA). 
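The RLU-to-CFU conversion described above amounts to a log-log calibration line fitted to serial dilutions; the sketch below illustrates the idea with hypothetical calibration values, since the actual curve is not reported.

```python
import numpy as np

# Hypothetical calibration data: serial 10-fold dilutions (CFU/mL) and the
# bioluminescent signal (RLU) each produces in the plate reader.
cfu = np.array([1e8, 1e7, 1e6, 1e5, 1e4])
rlu = np.array([5.2e5, 6.0e4, 5.5e3, 6.1e2, 7.0e1])

# Fit log10(CFU) as a linear function of log10(RLU).
slope, intercept = np.polyfit(np.log10(rlu), np.log10(cfu), deg=1)

def rlu_to_cfu(signal):
    """Convert a measured RLU value into CFU/mL via the calibration line."""
    return 10 ** (slope * np.log10(signal) + intercept)

print(f"RLU = 1.0e4  ->  {rlu_to_cfu(1.0e4):.2e} CFU/mL")
```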
Biofilm Formation onto Titanium Disks In order to allow biofilm formation onto Ca-structured titanium disks, 180 µL of overnight cultures of BLI-Pseudomonas (10 6 /mL) in TSB plus 2% sucrose were seeded in 96 black well-plate, containing 1 disc/well. In parallel, BLI-Pseudomonas was seeded in wells without the titanium disks. The plates were then incubated at 35 • C for 24 h, into the Fluoroskan reader and the bioluminescence was detected at every hour to evaluate in real time, the total microbial load. After incubation, the disks were washed twice with phosphate buffered saline (PBS) (EuroClone, Wetherby, UK) at room temperature (RT), transferred into new wells and the bioluminescence signal was again measured; the obtained values were referred to the amounts of biofilm formed onto disk surfaces. Through the calibration curve, the relative luminescent units (RLU) obtained in each experiment were converted in CFU/mL. Biofilm Re-Growth onto Treated and Untreated Titanium Disks Following biofilm formation, the disks were split in two groups and the cleaning treatment was performed, as detailed above; then, controls (untreated) and cleaned (exposed to Bic40 µm for 30 s/surface) were transferred into new wells containing fresh medium and further assessed for microbial residual load and time-related re-growth. Briefly, treated and untreated disks were analysed by Fluoroskan reader, immediately (residual biofilm) and at any hour during a further 24 h incubation at 35 • C. Through the calibration curve, the RLU were converted in CFU/mL. Statistical Analysis All experiments were performed in triplicate. Data were expressed as mean ± standard deviation (SD). Differences between two experimental conditions were analysed by paired, Student's t-test. Differences among three or more experimental samples were analysed by ANOVA followed by Newman-Keuls post hoc test (GraphPad Prism Software version 5 Inc., San Diego, CA, USA). In any case, p value < 0.05 was considered statistically significant. Conflicts of Interest: The authors declare no conflict of interest.
5,619.6
2019-04-01T00:00:00.000
[ "Materials Science", "Biology" ]
Neuroprotection of GluR5-containing Kainate Receptor Activation against Ischemic Brain Injury through Decreasing Tyrosine Phosphorylation of N-Methyl-d-aspartate Receptors Mediated by Src Kinase* Previous studies indicate that cerebral ischemia breaks the dynamic balance between excitatory and inhibitory inputs. The neural excitotoxicity induced by ionotropic glutamate receptors gains the upper hand during ischemia-reperfusion. In this paper, we investigate whether activation of GluR5 (glutamate receptor 5)-containing kainate receptors can protect against ischemic brain injury, and we examine the underlying mechanism. The results showed that (RS)-2-amino-3-(3-hydroxy-5-tert-butylisoxazol-4-yl)propanoic acid (ATPA), a selective GluR5 agonist, could suppress Src tyrosine phosphorylation and interactions among N-methyl-d-aspartate (NMDA) receptor subunit 2A (NR2A), postsynaptic density protein 95 (PSD-95), and Src, and could then decrease NMDA receptor activation by attenuating tyrosine phosphorylation of NR2A and NR2B. More importantly, ATPA had a neuroprotective effect against ischemia-reperfusion-induced neuronal cell death in vivo. However, four separate drugs were found to abolish the effects of ATPA: the selective GluR5 antagonist NS3763; GluR5 antisense oligodeoxynucleotides; CdCl2, a broad-spectrum blocker of voltage-gated calcium channels; and bicuculline, an antagonist of the γ-aminobutyric acid A (GABAA) receptor. The GABAA receptor agonist muscimol could attenuate Src activation and interactions among NR2A, PSD-95 and Src, resulting in suppression of NMDA receptor tyrosine phosphorylation. Moreover, patch clamp recording proved that the activated GABAA receptor could inhibit NMDA receptor-mediated whole-cell currents. Taken together, the results suggest that during ischemia-reperfusion, activated GluR5 may facilitate Ca2+-dependent GABA release from interneurons. The released GABA can activate postsynaptic GABAA receptors, which then attenuates NMDA receptor tyrosine phosphorylation through inhibiting Src activation and disassembling the signaling module NR2A-PSD-95-Src. The final result of this process is that the pyramidal neurons are rescued from hyperexcitability. Brain functions are based on the dynamic balance between excitatory and inhibitory inputs. Cerebral ischemia breaks this balance, and the neural excitotoxicity takes over, which induces delayed neuronal cell death. Glutamate, as the primary excitatory neurotransmitter in the central nervous system, has been given widespread attention in previous studies. Ionotropic glutamate receptors, which play an important part in ischemic excitotoxicity, are divided into N-methyl-D-aspartate (NMDA), α-amino-3-hydroxy-5-methyl-4-isoxazole propionate (AMPA), and kainate (KA) receptors (1, 2). The NMDA receptor, as a type of ligand-gated ion channel, attracts the most attention in research of cerebral ischemia. It is composed of three types of subunits: NR1, NR2 (NR2A to -D), and NR3 (NR3A and -B) (3-5). Among these subunits, transient global ischemia increases tyrosine phosphorylation of NR2A and NR2B (6). During ischemia-reperfusion, excessive glutamate release induces the influx of Na+ and Ca2+ ions through NMDA receptors (7), and activated Src kinase can mediate tyrosine phosphorylation of NR2A and NR2B (8) and increase the activity of NMDA receptors (9).
Our previous study suggested that cerebral ischemia induced autophosphorylation of Src, and then the activated Src could phosphorylate NMDA receptors in pyramidal neurons. PSD-95 (postsynaptic density protein 95) was involved in the event through forming signaling module NR2A-PSD-95-Src (10,11). In contrast to NMDA receptors, little is known about the role of KA receptors in ischemia-reperfusion. KA receptors are composed of five subunits: GluR5, GluR6, GluR7, KA1, and KA2 (1). A number of interesting experimental results turned our attention to the GluR5 and GluR6 subunits; GluR6-deficient mice exhibited resistance to neurotoxic effects induced by kainate (12). Knock-out of GluR6 prevented kainate-induced epileptiform bursts, whereas ablation of GluR5 led to a higher susceptibility of epileptogenic effects of kainate (13). An agonist of GluR5-containing KA receptors had antiepileptic effects (14). The results suggest that GluR6 and GluR5 may play opposing roles in brain excitability. Recently, we found that GluR6mediated c-Jun N-terminal kinase activation is responsible for ischemic brain injury (15). We want to know whether GluR5 also plays opposing roles to GluR6 in cerebral ischemia and, if so, to identify the related mechanism. Corresponding to their distinct functions, GluR6 and GluR5 have different distribution in the hippocampus. The excitatory pyramidal cells express primarily GluR6, whereas GluR5 is expressed primarily within the inhibitory interneurons of the hippocampus (16,17). Compared with excitatory glutamate neurotransmission, ␥-aminobutyric acid (GABA), as the primary inhibitory neurotransmitter in central nervous system, has received relatively little attention in the area of ischemic brain injury (18). GABAergic inhibitory interneurons (also called local circuit neurons) of hippocampus are the regulators of pyramidal neuron excitability (19), keeping the dynamic balance between excitatory and inhibitory inputs. Recent studies have shown that activation of GluR5-containing KA receptors facilitates GABA release from interneurons and increases tonic inhibition of pyramidal neurons (17, 20 -22). In addition, GABA performs a function through acting on the GABA receptors, which can be divided into three subclasses: GABA A , GABA B , and GABA C receptors (23). Among them, GABA A receptor, which directly controls a chloride ion channel, is situated in the postsynaptic membrane. Activation of GABA A receptors enhances the influx of chloride ions, inducing hyperpolarization and attenuating cell excitability (24). In cerebral ischemia research, it was reported that GABA A receptors also take part in neuroprotection against ischemia-reperfusion (25)(26)(27). Taken together, there is a raised possibility that activation of GluR5-containing KA receptors facilitates GABA release from interneurons. The released GABA activates postsynaptic GABA A receptors, which suppress the ischemic depolarization and then decrease the activation of NMDA receptors, performing a neuroprotective function against ischemic brain injury through enhancing inhibitory inputs and then getting excitation and inhibition back into the proper dynamic balance. Animal Model of Ischemia-Adult male SD rats weighing 250 -300 g were used (Shanghai Experimental Animal Center, Chinese Academy of Science). The experimental procedures were approved by local legislation for ethics of experiments on animals. Transient cerebral ischemia was induced by four-vessel occlusion, as described previously (29). 
Briefly, under anesthesia with chloral hydrate (300 mg/kg, intraperitoneal), vertebral arteries were electrocauterized, and common carotid arteries were exposed. On the following day, both carotid arteries were occluded with aneurysm clips to induce cerebral ischemia. After 15 min of occlusion, the aneurysm clips were removed for reperfusion. Rectal temperature was maintained at about 37°C throughout the procedure. Rats that lost their righting reflex and whose pupils were dilated and unresponsive to light were selected for the experiments. Rats with seizures were discarded. An electroencephalography was monitored to ensure isoelectricity after carotid artery occlusion. Sham controls were performed using the same surgical procedures, except that the carotid arteries were not occluded. Sample Preparation-Rats were decapitated immediately after 6 h of reperfusion, and then the hippocampal CA1 region was isolated and quickly frozen in liquid nitrogen. Tissues were homogenized in an ice-cold homogenization buffer containing 50 mM MOPS (pH 7.4), 100 mM KCl, 320 mM sucrose, 50 mM NaF, 0.5 mM MgCl 2 , 0.2 mM dithiothreitol, 1 mM EDTA, 1 mM EGTA, 1 mM Na 3 VO 4 , 20 mM sodium pyrophosphate, 20 mM ␤-phosphoglycerol, 1 mM p-nitrophenyl phosphate, 1 mM benzamidine, 1 mM phenylmethylsulfonyl fluoride, and 5 g/ml each leupeptin, aprotinin, and pepstatin A. The homogenates were centrifuged at 800 ϫ g for 10 min at 4°C. Supernatants were collected, and protein concentration was determined by the method of Lowry et al. (30). Samples were stored at Ϫ80°C and were thawed only once just before use. Immunoprecipitation and Immunoblotting-Tissue homogenates (400 g of protein) were diluted 4-fold with 50 mM HEPES buffer (pH 7.4) containing 10% glycerol, 150 mM NaCl, 1% Triton X-100, 0.5% Nonidet P40 (Nonidet P-40), and 1 mM each of EDTA, EGTA, phenylmethylsulfonyl fluoride, and Na 3 VO 4 . Samples were preincubated for 1 h with 20 l of protein A-Sepharose CL-4B (Amersham Biosciences) at 4°C and then centrifuged to remove protein adhered nonspecifically to protein A. The supernatants were incubated with primary antibodies for 4 h or overnight at 4°C. Protein A (20 l) was added to the tube, and incubation was continued for another 2 h. Samples were centrifuged at 10,000 ϫ g for 2 min at 4°C, and the pellets were washed three times with immunoprecipitation buffer. Bound protein was eluted by boiling at 100°C for 5 min in SDS-PAGE loading buffer and then isolated by centrifugation. The supernatants were separated on polyacrylamide gels and then electrotransferred onto a nitrocellulose membrane (Amersham Biosciences). After blocking for 3 h in Tris-buffered saline with 0.1% Tween 20 (TBST) and 3% bovine serum albumin, membranes were incubated overnight at 4°C with primary antibodies in TBST containing 3% bovine serum albumin. Membranes were then washed and incubated with alkaline phosphatase-conjugated secondary antibodies in TBST for 2 h and developed using nitro blue tetrazolium/5-bromo-4chloro-3-indolyl-phosphate color substrate. The densities of the bands on the membrane were scanned and analyzed with an image analyzer (LabWorks Software; UVP, Upland, CA). Histology and Immunohistochemistry-Rats were perfusionfixed with 4% paraformaldehyde in 0.1 M sodium phosphate buffer (pH 7.4) under anesthesia after 5 days of brain ischemiareperfusion. Brains were removed quickly and further fixed with the same fixation solution at 4°C overnight. 
Postfixed brains were embedded in paraffin, followed by preparation of coronal sections, 5 m thick, using a microtome (Leica RM2155; Nussloch, Germany). The paraffin-embedded brain sections were deparaffinized with xylene and rehydrated by ethanol at graded concentrations of 100 -70% (v/v), followed by washing with water. The sections were stained with 0.1% (w/v) cresyl violet and examined using light microscopy. The number of surviving hippocampal CA1 pyramidal cells per 1 mm of length was counted as the neuronal density. Immunoreactivity was determined by the avidin-biotin-peroxidase method. Briefly, sections were deparaffinized with xylene and rehydrated by ethanol at graded concentrations and distilled water. High temperature antigen retrieval was performed in 1 mM citrate buffer. To block endogenous peroxidase activity, sections were incubated for 30 min in a solution of 0.1% H 2 O 2 in phosphate-buffered saline. After being blocked with 5% (v/v) normal goat serum in phosphate-buffered saline for 1 h at 37°C, sections were incubated with mouse monoclonal antibody against phospho-Src (Tyr(P)-416) at 4°C for 3 days. These sections were then incubated with biotinylated goat anti-mouse secondary antibody made up of 0.1% bovine serum albumin, 0.3% Triton X-100, and 1% normal goat serum in phosphatebuffered saline overnight and subsequently incubated with avidin-conjugated horseradish peroxidase for 1 h at 37°C. Finally, sections were incubated with the peroxidase substrate diaminobenzidine until the desired stain intensity developed. They were then examined by light microscopy. Cell Culture-Hippocampal neuronal cultures were prepared from 18-day-old Sprague-Dawley rat embryos as described previously (31). Briefly, hippocampi were meticu-lously isolated in ice-cold high glucose Dulbecco's modified Eagle's medium (Invitrogen). Hippocampal cells were dissociated by trypsinization (0.25% (w/v) trypsin and 0.02% (w/v) EDTA in Ca 2ϩ -and Mg 2ϩ -free Hanks' balanced salt solution), at 37°C for 15 min, followed by gentle swing in plating medium (high glucose Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum and 10% horse serum; Invitrogen). Cells were seeded onto poly-L-lysine (Sigma)coated wells or coverslips at a density of 1 ϫ 10 5 cells/cm 2 and incubated at 37°C in 5% CO 2 atmosphere. After 24 h, cultural medium was replaced by neurobasal medium supplemented with B-27 (Invitrogen) and 0.5 mM glutamine and then halfreplaced twice every week. Cultures were used after 14 days in vitro for patch clamp recording. Patch Clamp Recording-Electrophysiological recording was performed in the conventional whole-cell patch clamp recording configuration under voltage-clamp conditions. Patch pipettes were pulled from glass capillaries with an outer diameter of 1.5 mm on a two-stage puller (PP-830; Narishige). The resistance between the recording electrode filled with pipette solution (140 mM CsCl, 10 mM tetraethylammonium, 10 mM Hepes, 5 mM EGTA, 1 mM MgCl 2 , 4 mM ATP, pH 7.2), and the reference electrode was 3-5 megaohms. Membrane currents were measured using a patch clamp amplifier AxonPatch 700B (Axon Instruments, Foster City, CA), filtered at 1 kHz, sampled, and analyzed using a DigiData 1322A interface and a computer with the pClamp 9.0 system (Axon Instruments). The series resistance was compensated automatically. The membrane potential was held at Ϫ60 mV throughout the experiment. All experiments were carried out at room temperature (22-25°C). 
Statistical Evaluation-Values were expressed as mean Ϯ S.D. and were obtained from no fewer than five independent rats. Statistical analysis of the results was carried out by oneway analysis of variance, followed by Duncan's new multiple range method or the Newman-Keuls test. p values of Ͻ0.05 were considered significant. RESULTS ATPA Attenuates Tyrosine Phosphorylation of Src, NR2A, and NR2B in the Hippocampal CA1 Region-Our previous study showed that tyrosine phosphorylation of NR2A and NR2B and interactions between Src and NMDA receptors reach their peak level at 6 h of reperfusion after ischemia (11,32). In order to investigate the effects of ATPA on tyrosine phosphorylation of Src, NR2A, and NR2B, we injected ATPA to rats before ischemia and got samples at 6 h of reperfusion after ischemia. Antibody to NR2B phosphorylated on Tyr-1472 was used, because NR2B Tyr-1472 is the major phosphorylation site mediated by Src family kinases (33). Since Src is activated by autophosphorylation of tyrosine residues (Tyr-416) (34), we used antibody against Tyr(P)-416 of Src to investigate Src activation. The samples were immunoprecipitated with antibody against NR2A and then immunoblotted with antibody against phosphotyrosine. As shown in Fig. 1, A-D, for ATPA-treatment group, there was a large decline in the increased tyrosine phosphorylation of Src, NR2A, and NR2B but not for vehicle of ATPA, whereas the protein level of Src, NR2A, and NR2B did not change significantly. Among the results, two bands were detected in immunoblots with anti-p-Src or anti-Src antibody. To further confirm that ATPA performs a function through activating GluR5, we administered selective GluR5 antagonist NS3763 or GluR5 AS-ODNs, which can suppress the expression of GluR5, to rats combined with ATPA before ischemia. As shown in Fig. 1, A-D, for the ATPA ϩ NS3763 and ATPA ϩ GluR5 AS-ODNs treatment groups, we found that NS3763 and GluR5 AS-ODNs could reverse the effects of ATPA on the tyrosine phosphorylation of Src and NMDA receptor, whereas GluR5 MS-ODNs and the vehicles of the above drugs had no significant effects on the phosphorylation states. However, when only NS3763 was administered before ischemia, although the tyrosine phosphorylation of Src and NMDA receptor and the interactions among Src, PSD-95, and NR2A increased slightly, the change was not statistically significant ( Fig. 1, E-H). Taken together, our results suggest that activated GluR5-containing KA receptor can down-regulate ischemia-induced Src activation and then attenuate the NMDA receptor tyrosine phosphorylation. CdCl 2 Suppresses the Inhibitory Effect of ATPA on Tyrosine Phosphorylation of Src, NR2A, and NR2B in the Hippocampal CA1 Region-To further examine whether the voltage-gated calcium channels play a role in the decreased phosphorylation of Src, NR2A, and NR2B in pyramidal cells of the hippocampal CA1 region induced by ATPA, we administered CdCl 2 , a broad spectrum blocker of voltage-gated calcium channels, to rats 30 min before ischemia. As shown in Fig. 2, for the ATPA ϩ CdCl 2 treatment group, CdCl 2 could abolish the effect of ATPA on tyrosine phosphorylation of Src, NR2A, and NR2B. We also only injected CdCl 2 into the rats before ischemia and then found that it had no significant effect on tyrosine phosphorylation of Src, NR2A, and NR2B (data not shown). 
Thus, our results suggest that the voltage-gated calcium channel is involved in the effect of activated GluR5-containing KA receptor on tyrosine phosphorylation of Src and the NMDA receptor. The GABA A Receptor Is Involved in the Effect of ATPA on Tyrosine Phosphorylation of Src, NR2A, and NR2B in the Hip- pocampal CA1 Region-It was reported that GluR5-containing KA receptors facilitate GABA release from interneurons and increase tonic inhibition of pyramidal neurons (17, 20 -22). Therefore, we hypothesized that if released GABA, which was induced by activated GluR5-containing KA receptor, could affect tyrosine phosphorylation of Src, NR2A, and NR2B in postsynaptic pyramidal neurons, GABA A receptors on postsynaptic cells should be involved in the process. So we injected a GABA A receptor agonist Mus to rats before ischemia to see whether activated GABA A receptors can attenuate the tyrosine phosphorylation of Src, NR2A, and NR2B. The results confirmed the hypothesis (Fig. 3, A and B). The results suggest that the effects of activated GluR5-containing KA receptors on postsynaptic neurons may be mediated by GABA A receptors. To further confirm this, bicuculline, an antagonist of the GABA A receptors, combined with ATPA was administered to the rats 30 min before ischemia. The effect of ATPA on tyrosine phosphorylation of Src, NR2A, and NR2B was abolished by bicuculline (Fig. 3, C and D). Similar to the other drugs we used, single injection of bicuculline was also conducted before ischemia, and no significant change of tyrosine phosphorylation of Src, NR2A, and NR2B was found (data not shown). In summary, our results suggest that the effect of activated GluR5-containing KA receptor on tyrosine phosphorylation of Src and NMDA receptor is based on GABA A receptor activation. GluR5 Regulates Tyrosine Phosphorylation of NMDA Receptors I NMDA . GABA A receptor activation mediated by Mus induced an intracellular current, and co-application of Mus with NMDA evoked the current with an amplitude (83.6 Ϯ 3.3%) that was significantly lower than the expected sum of currents evoked by Mus (I Mus ) and NMDA (I NMDA ) independently. The results suggested that there was a cross-inhibition between GABA A receptors and NMDA receptors (Fig. 4A, B and C). Moreover, GABA A receptor activator application was followed by NMDA application to further investigate the relationship between GABA A receptors and NMDA receptors (Fig. 4, D and E). Following a single application of the Mus, I NMDA (66.8% Ϯ 3.1%) became much less than the control I NMDA . ATPA Disturbs Interactions among PSD-95, Src, and NR2A after Ischemia, and GABA A Receptors Are Involved in the Process-Our previous studies showed that the interaction between Src and NMDA receptors reaches its peak level at 6 h of reperfusion after ischemia (11), and PSD-95 is involved in the events (10). To further investigate whether the effect of ATPA on attenuating tyrosine phosphorylation of NMDA receptors and Src is related to the interactions among PSD-95, NR2A, and OCTOBER 24, 2008 • VOLUME 283 • NUMBER 43 JOURNAL OF BIOLOGICAL CHEMISTRY 29361 Src, immunoprecipitation and immunoblotting were used to examine the association among NR2A, Src, and PSD-95 after 15 min of ischemia followed by 6 h of reperfusion. Reciprocal immunoprecipitation was carried out to confirm the results. As shown in the ATPA treatment group of Fig. 
5, A and B, administration of ATPA before ischemia diminished the increased interactions among NR2A, Src, and PSD-95; meanwhile, the protein levels of NR2A, Src, and PSD-95 were not altered. In addition, to confirm that the neuroprotective role of ATPA is mediated by the activation of GABA A receptors, we injected bicuculline combined with ATPA to the rats before ischemia. As shown in the ATPA ϩ bicuculline treatment group of Fig. 5, A and B, the effect of ATPA on the interactions of NR2A, Src, and PSD-95 was inhibited by blocking GABA A receptors. Moreover, GABA A receptor agonist Mus treatment can also suppress the interactions among NR2A, Src, and PSD-95 (Fig. 5, C and D). Thus, our data suggest that the GluR5-induced disassembly of the triplicate complex NR2A-PSD-95-Src may be mediated by GABA A receptors. When immunoprecipitated with nonspecific IgG, no significant bands corresponding to PSD-95, Src, and NR2A were detected (Fig. 5E). Immunohistochemistry Also Reveals the Mechanism Underlying the Effect of Activated GluR5-containing KA Receptor-Further immunohistochemical analysis also showed the neuroprotective effects of ATPA on the tyrosine phosphorylation of Src. In the sham treatment group (Fig. 6, a and b), p-Src immunoreactivity was barely detectable in the CA1 pyramidal neuron; however, the immunoreactivity significantly increased at 6 h of reperfusion (Fig. 6, c and d), and the inhibitory effect of ATPA was obvious with weak immunostaining (Fig. 6, m and n) in contrast to vehicle saline (Fig. 6, e and f). In addition, the inhibitory effect of ATPA on tyrosine phosphorylation of Src was suppressed by NS3763 (Fig. 6, q and r), CdCl 2 (Fig. 6, s and t), or bicuculline (Fig. 6, o and p), whereas the vehicle of NS3763 DMSO (Fig. 6, i and j), CdCl 2 H 2 O (Fig. 6, k and l), or bicuculline saline (Fig. 6, g and h) had no significant effect on p-Src immunoreactivity. Taken together, our results suggest that ATPA performs a neuroprotective function by acting on GluR5. The effect of activated GluR5-containing KA receptors on Src tyrosine phosphorylation is mediated by voltage-gated calcium channels and GABA A receptors. Neuroprotective Effect of ATPA against Ischemic Brain Injury in Vivo-It is well known that cerebral ischemia induces delayed neuronal death in pyramidal neurons of the hippocampal CA1 region. To investigate whether ATPA has a protective role against ischemia-reperfusion-induced neuronal cell death, cresyl violet staining was performed to examine the survival of pyramidal neurons of the hippocampal CA1 region. Rats were injected with ATPA, bicuculline, NS3763, or CdCl 2 before ischemia. Results indicated that normal pyramidal cells in the sham treatment group (Fig. 7, a and b) showed round and pale stained nuclei, whereas dead cells in the ischemia-reperfusion and vehicle treatment group showed pyknotic nuclei (Fig. 7, c-l). ATPA treatment group (Fig. 7, m and n) showed a significant decrease in neuronal degeneration. However, the protective effect of ATPA disappeared when the rats were treated with bicuculline, NS3763, or CdCl 2 , respectively, in addition to ATPA (Fig. 7, o-t). The surviving pyramidal cells counted within a 1-mm length of the CA1 region were 205 Ϯ 24, 19 Ϯ 7, 20 Ϯ 6, 21 Ϯ 6, 22 Ϯ 7, 18 Ϯ 6, 113 Ϯ 14, 24 Ϯ 7, 27 Ϯ 8, and 23 Ϯ 7, corresponding to Fig. 7, b, d, f, h, j, l, n, p, r, and t, respectively. 
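To put the reported CA1 cell counts on a common scale, the short sketch below expresses each group's surviving-cell density as a percentage of the sham group; pooling the vehicle panels and the blocker panels into group averages is a simplification of the panel-by-panel values listed above.

```python
# Surviving pyramidal cells per 1 mm of CA1, from the counts reported above.
counts = {
    "sham (b)": 205.0,
    "I/R + vehicles (d,f,h,j,l)": (19 + 20 + 21 + 22 + 18) / 5,
    "ATPA (n)": 113.0,
    "ATPA + blockers (p,r,t)": (24 + 27 + 23) / 3,
}
sham = counts["sham (b)"]
for group, n in counts.items():
    print(f"{group:30s} {n:6.1f} cells/mm  ({100 * n / sham:5.1f}% of sham)")
```

On this scale, ATPA preserves roughly half of the sham-level cell density, while ischemia-reperfusion alone and ATPA combined with any of the blockers leave about a tenth, which summarizes the protection and its pharmacological reversal in one view.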
Taken together, our results suggest that activation of GluR5-containing KA receptors has a neuroprotective effect against ischemia-reperfusion-induced neuronal cell death in vivo, and furthermore voltage-gated calcium channels and GABA A receptors are involved in the event. DISCUSSION In the present study, we report for the first time that activation of GluR5-containing KA receptors has a neuroprotective effect against ischemic brain injury by suppressing Src activation and interactions among NR2A, PSD-95, and Src and then decreasing NMDA receptor activation by attenuating NMDA receptor tyrosine phosphorylation, which are Ca 2ϩ -dependent and GABA A receptor-mediated. In virtue of in situ hybridization, it was found that KA receptor subunit GluR6 is expressed primarily within the excitatory pyramidal cells of the hippocampus, whereas the inhibitory interneurons express primarily GluR5 (16,17). The different distribution of these KA receptor subunits may mean that they have distinct functions in central nervous system. In epilepsy research, GluR6-deficient mice exhibited resistance to neurotoxic effects induced by kainate (12). Knock-out of GluR6 prevented kainate-induced epileptiform bursts, whereas ablation of GluR5 led to a higher susceptibility of epileptogenic effects of kainate (13), and an agonist of GluR5-containing KA receptors had antiepileptic effects (14). According to above, we infer that GluR6 and GluR5 may play opposing roles in excitability of the brain. In previous articles concerning cerebral ischemia, we found that GluR6-mediated c-Jun N-terminal kinase activation is responsible for ischemic brain injury (15). Based on this, we can hypothesize that GluR5 activation may have a neuroprotective effect against ischemic brain injury by playing opposing roles to GluR6 during ischemia-reperfusion. Our present study confirmed the hypothesis. The results show that activated GluR5-containing KA receptors have a neuroprotective effect against ischemic brain injury in vivo (Fig. 7) and can suppress the tyrosine phosphorylation of Src, NR2A, and NR2B (Fig. 1, A-D). However, when we administered a GluR5 antagonist NS3763, instead of ATPA, to rats, the tyrosine phosphorylation of Src and NMDA receptors and the interactions among PSD-95, NR2A, and Src increased slightly over the vehicle control groups, but the changes were not statistically significant (Fig. 1, E-H). This may be due to the fact that GluR5 becomes much more activated than normal when ATPA binds to it. Thus, the inhibitive effect of the GluR5 antagonist on activated GluR5 is more marked than on inactivated GluR5 at a low level of activity. An additional factor may be the limited quantity of total protein of Src in pyramidal cells. Since ischemia-reperfusion induces Src activation, if nearly all Tyr-416 of Src is phosphorylated during ischemia-reperfusion, the antagonist NS3763 treatment will not further make more Src activated. Then the increase of NMDA receptor tyrosine phosphorylation and the interactions will subsequently be affected. GABAergic inhibitory interneurons of the hippocampus are the regulators of pyramidal neuron excitability (19). As described previously, interneurons in the hippocampal CA1 region express high levels of GluR5-containing KA receptors, whereas in hippocampal pyramidal cells expression of GluR5 is barely detectable (16,17). However, as shown in our results, Src tyrosine phosphorylation (Fig. 6) and the protective role of ATPA against ischemia-reperfusion in vivo (Fig. 
7) are performed in the pyramidal cells of the hippocampus. So, how does activation of GluR5-containing KA receptors on interneurons affect postsynaptic pyramidal cells? Recently, it was reported that GluR5-containing KA receptors can facilitate GABA release and increase tonic inhibition of pyramidal neurons (17, 20 -22). Based on this information, we thought that the neuro- induced by activating GluR5, such as suppressing Src activation and interactions among NR2A, PSD-95, and Src then decreasing NMDA receptor activation through attenuating NMDA receptor tyrosine phosphorylation. These results suggest that activated GluR5-containing KA receptors may facilitate GABA release from interneurons. However, previous studies reported different results, which suggest that activation of KA receptors on GABAergic presynaptic terminals leads to a decrease of GABA release (28,35), and in cerebral ischemia research, corresponding results were also reported (36). Thus, two opposite hypotheses, "overinhibition" and "disinhibition," were presented (21). Some experiments set out to investigate the inconsistency and then showed that low concentrations of GluR5-containing KA receptor agonists facilitate, whereas high concentrations suppress, GABAergic transmission in interneuron-to-pyramidal cell synapses (37). The results of the other group in our laboratory indicated that the appropriate concentration of ATPA (1-2.5 nmol) ensures that neuroprotection is performed on postsynaptic pyramidal neurons, 4 which provides a basis for the dose of ATPA used in our present study. The possible mechanism underlying the concentration-dependent effect is as follows. Since KA receptor subunits can be divided into two groups on the basis of their affinity for [ 3 H]kainate (the low affinity subunits (GluR5-GluR7) and the high affinity subunits (KA1 and KA2)) (38), a GluR5/KA2 and GluR5/GluR6 subunit combination could mediate the facilitation and inhibition of GABAergic transmission, respectively (37). The exact intracellular processes induced by activated KA receptors remain to be determined. The influx of calcium ions into a GABAergic terminal causes release of the neurotransmitter GABA into the synaptic cleft. Previous studies showed that facilitation of GABAergic transmission by activating KA receptors did not require activation of the voltage-gated calcium channels but could be mediated by calcium influx through Ca 2ϩ -permeable GluR5-containing KA receptors (37). However, there was also a report that blockage of calcium influx by Cd 2ϩ inhibited the KA receptor-mediated increase in GABA release (22). In the present study, our data suggest that ischemia-reperfusion GABA release, which is due to activating GluR5containing KA receptors, requires activation of voltage-gated calcium channels, but we could not rule out the possibility of the involvement of GluR5-containing KA receptors in the entry of calcium ions. Activated GABA A receptors play a neuroprotective role in ischemia (25)(26)(27). Our data show that activation of GABA A receptors and GluR5-containing KA receptors could lead to similar results. Moreover, bicuculline, an antagonist of GABA A receptors, could suppress the change induced by GluR5 activation in postsynaptic neurons. Thus, our results suggest that activated GluR5-containing KA receptors may facilitate GABA release from interneurons. Released GABA can activate postsynaptic GABA A receptors. 
The influx of Cl⁻ through GABA-A receptors can then counter postsynaptic membrane depolarization and limit calcium entry (39). Moreover, according to the patch clamp recordings in our present study, we directly demonstrated that GABA-A receptor activation can significantly inhibit NMDA receptor-mediated whole-cell currents. In summary, ATPA-induced GABA release can inhibit NMDA receptors and attenuate the increased NMDA receptor tyrosine phosphorylation by suppressing Src activation and the interactions among PSD-95, Src, and NR2A. Excitotoxicity is the main mechanism of ischemic brain injury, and NMDA receptors are involved in this process. Cerebral ischemia causes excessive glutamate release, which induces the influx of excessive Na⁺ and Ca²⁺ ions through NMDA receptors (7) and increases the tyrosine phosphorylation of NR2A and NR2B (6). Src kinase can increase the activity of NMDA receptors (9) and can also bind to NMDA receptors via its Src homology 2 domain (40). It mediates the tyrosine phosphorylation of NR2A and NR2B, and the increased tyrosine phosphorylation results in potentiation of the NMDA ion channel (8,41,42). As a result, more Ca²⁺ ions enter postsynaptic pyramidal cells through NMDA receptors, which further promotes Src kinase activation; this positive feedback mechanism induces delayed neuronal death. Our previous studies showed that both the tyrosine phosphorylation of NR2A and NR2B and the association of NMDA receptors with Src reach their peak levels at 6 h of reperfusion after ischemia (11,32), and that activated Src induces enhanced NMDA receptor tyrosine phosphorylation by means of the signaling module NR2A-PSD-95-Src during ischemia-reperfusion (10). In this study, our data suggest that activated GluR5-containing KA receptors can suppress Src activation and the interactions among NR2A, PSD-95, and Src, and thereby attenuate NMDA receptor-mediated excitotoxicity by reducing NMDA receptor tyrosine phosphorylation. In conclusion, as depicted in Fig. 8, activated GluR5-containing KA receptors play a neuroprotective role against ischemic brain injury through this pathway. Brain functions rest on the dynamic balance between excitatory and inhibitory inputs; cerebral ischemia upsets this balance, and neural excitotoxicity gains the upper hand. The process underlying the neuroprotective effect of GluR5 activation points to a possible fine-tuned regulatory mechanism, one that may operate not only under pathological conditions of excitotoxicity, such as stroke, but also under normal physiological conditions: the excitatory transmitter glutamate can regulate the balance between excitatory and inhibitory synaptic inputs by activating GluR5-containing KA receptors. Since activated GluR5-containing KA receptors protect against ischemic brain injury in vivo, this signaling pathway suggests a potential new therapy for ischemia.

Footnote 4: Q. Lv, Y. Liu, J. Xu, and G.-Y. Zhang, unpublished data.

FIGURE 8. Scheme summarizing the proposed mechanisms underlying the neuroprotection conferred by GluR5-containing kainate receptor activation against ischemic brain injury. Ischemia-reperfusion induces the entry of excessive Na⁺ and Ca²⁺ ions through NMDA receptors. Activated Src phosphorylates NMDA receptors, a process mediated by PSD-95; this enhances NMDA receptor function and further promotes the influx of Ca²⁺, and the positive feedback mechanism induces delayed neuronal death. Our present results suggest that activation of GluR5-containing KA receptors induces Ca²⁺-dependent GABA release from the synaptic terminals of interneurons. Released GABA activates postsynaptic GABA-A receptors. The influx of Cl⁻ ions through the GABA-A receptors then counters postsynaptic membrane depolarization, limits calcium entry, and suppresses Src activation, which in turn reduces the interactions among Src, PSD-95, and NR2A and decreases the activity of NMDA channels.
Rapid and Sensitive Detection of Verticillium dahliae from Soil Using LAMP-CRISPR/Cas12a Technology

Cotton Verticillium wilt is mainly caused by the fungus Verticillium dahliae, which threatens the production of cotton. The pathogen can survive in the soil for several years in the form of microsclerotia, making it a destructive soil-borne disease. The accurate, sensitive, and rapid detection of V. dahliae from complex soil samples is of great significance for the early warning and management of cotton Verticillium wilt. In this study, we combined loop-mediated isothermal amplification (LAMP) with CRISPR/Cas12a technology to develop an accurate, sensitive, and rapid detection method for V. dahliae. Initially, LAMP primers and CRISPR RNA (crRNA) were designed based on a specific DNA sequence of V. dahliae, which was validated using several closely related Verticillium spp. The lower detection limit of the LAMP-CRISPR/Cas12a assay combined with the fluorescent visualization detection system is approximately 10 fg/μL genomic DNA per reaction. When combined with crude DNA-extraction methods, it is possible to detect as few as two microsclerotia per gram of soil, with the total detection process taking less than 90 min. Furthermore, to improve the method's user and field friendliness, the field detection results were visualized using lateral flow strips (LFS). The LAMP-CRISPR/Cas12a-LFS system has a lower detection limit of ~1 fg/μL genomic DNA of V. dahliae, and when combined with the field crude DNA-extraction method, it can detect as few as six microsclerotia per gram of soil, with the total detection process taking less than 2 h. In summary, this study expands the application of LAMP-CRISPR/Cas12a nucleic acid detection to V. dahliae and will contribute to the development of field-deployable diagnostic products.

Introduction

Cotton is an important economic crop worldwide, contributing approximately 35% of the world's total natural fiber to the textile industry. Additionally, it serves as a source of edible oil and livestock feed [1]. Verticillium wilt is a significant vascular soil-borne disease affecting cotton. On average, it results in yield losses ranging from approximately 10% to 35% [2]. In China, cotton production has suffered substantial economic losses due to the widespread occurrence of Verticillium wilt, affecting approximately 2.5 million hectares of cotton annually [3]. The primary causal agent of cotton Verticillium wilt is the soil-borne pathogenic fungus Verticillium dahliae [4]. The fungus V. dahliae produces a dormant structure called microsclerotia, which can survive in the soil for several years [5]. Microsclerotia serve as the primary infection source of cotton Verticillium wilt; when encountering a suitable host and environment, they germinate and produce infection hyphae that infect the roots of cotton plants. Once the fungus successfully invades cotton roots, its mycelium reaches the plant's vascular tissue [5]. Subsequently, the mycelium produces a large number of spores, facilitating rapid vertical transmission within the vascular bundles; simultaneously, the mycelium achieves swift horizontal transmission through the pits between the xylem vessels [6]. Therefore, rapid, sensitive, and accurate detection of V. dahliae in the soil is paramount for early monitoring, warning, and management of cotton Verticillium wilt [7,8]. The rapid diagnosis of cotton Verticillium wilt is crucial for its prediction and control. Various approaches have been reported for V.
dahliae detection, including Droplet Digital PCR (ddPCR) [9], Quantitative Real-time PCR (qRT-PCR) [9-11], and Loop-Mediated Isothermal Amplification (LAMP) [12]. However, these methods have their disadvantages: for example, ddPCR is costly and requires professionally trained technicians, and LAMP exhibits a high false-positive rate. Currently, there is a lack of low-cost, easy-to-operate field methods for detecting V. dahliae in soil. The development of isothermal amplification techniques, such as Loop-Mediated Isothermal Amplification (LAMP), recombinase polymerase amplification (RPA), and rolling circle amplification (RCA), has successfully eliminated the requirement for thermocycling instruments and can be applied for real-time detection, thereby facilitating field and point-of-care testing. However, these methods may produce false-positive detections due to nonspecific amplification, cross-contamination, or primer dimerization [13,14]. As a result, there remains a need to create innovative diagnostic platforms that enable the rapid, highly sensitive, and extremely specific detection of nucleic acids. Recently, nucleic acid detection methods utilizing the clustered regularly interspaced short palindromic repeats (CRISPR)-associated endonuclease (CRISPR/Cas) systems have been developed [15-19]. In the type V-A CRISPR system, CRISPR/Cas12a (Cpf1) exhibits trans-cleavage activity in addition to its specific cleavage of double-stranded DNA (dsDNA) using a single guide RNA (sgRNA), resulting in the cleavage of single-stranded probes for nucleic acid detection [17]. This approach has been implemented in SHERLOCK (Cas13) [16,20,21], DETECTR (Cas12 or Cas14) [15,22], and HOLMES (Cas12) [23]. The discovery of trans-cleavage activity has paved the way for a new generation of rapid nucleic acid detection techniques. By combining it with the isothermal RPA assay, the CRISPR system has been utilized for V. dahliae detection [24]; however, the high cost of the RPA assay hinders its widespread application in field detection. Currently, no method is available that integrates LAMP isothermal amplification technology with the CRISPR system for detecting V. dahliae. Proper primer design and selection play a crucial role in optimizing LAMP reactions [25]. Although LAMP amplification offers the advantage of high sensitivity, it also presents the drawback of a high false-positive rate; combining LAMP with the CRISPR system (LAMP-CRISPR/Cas12a) can effectively mitigate this limitation. The combination of LAMP and CRISPR/Cas12a technology has been widely applied to the detection of plant pathogens [26-30], indicating that this technology has become quite mature. A paper-based lateral flow strip (LFS) method, which combines colloidal gold-based nanoparticles with conventional chromatographic separation, provides a new direction for rapid point-of-care (POC) diagnostics. These methods offer the advantages of being user-friendly, fast, and requiring no complicated instrumentation [31-34]. In this study, based on LAMP-CRISPR/Cas12a technology, we developed a fluorescence visualization detection system, suitable for laboratory use, that requires only ultraviolet flashlight irradiation to enable the observation of results. Additionally, this study devised a visual on-site detection system for V.
dahliae by integrating field nucleic acid extraction, LAMP-CRISPR/Cas12a, and a portable lateral flow strip (LFS). Both devised systems possess the attributes of economic feasibility, operational simplicity, heightened sensitivity and specificity, and visual readout [35,36], all of which contribute significantly to real-time monitoring and the prompt establishment of control strategies against V. dahliae.

Candidate DNA Fragment-Specific Analysis

To find a candidate target genomic DNA sequence of V. dahliae for LAMP-CRISPR/Cas12a detection, a 916 bp DNA sequence (CP010981.1:12,621-13,536) in V. dahliae was aligned with the whole genome sequences of 43 strains of V. dahliae and its closely related species in the NCBI database using BioEdit [37]. The results revealed the presence of a 178 bp sequence (CP010981.1:12,621-12,798) specific to V. dahliae (Figure 1a). Furthermore, this sequence was validated in all 43 strains of V. dahliae by PCR amplification from 16 representative strains of V. dahliae using the PCR-F/R primer pair (Supplemental Table S1) (Figure 1b). Based on this DNA segment, LAMP primers and CRISPR RNA (crRNA) were designed.

Specificity Test of the LAMP-CRISPR/Cas12a Fluorescence Visualization Detection System

The specificity of the established LAMP-CRISPR/Cas12a assay was evaluated using six other representative Verticillium species and nine bacterial, fungal, and oomycete strains (Verticillium longisporum, Verticillium alfalfae, Verticillium nubilum, Verticillium nigrescens, Verticillium albo-atrum, and Verticillium nonalfalfae; and Xanthomonas citri subsp. malvacearum, Rhizoctonia solani, Trichoderma harzianum, Fusarium graminearum, Phytophthora capsici, Phytophthora sojae, Trichoderma asperellum, Fusarium oxysporum f. sp. cubense, and Magnaporthe oryzae). The results showed that the fluorescent signal could be detected in reactions with V. dahliae, but not with the other bacterial and fungal strains (Figure 2a,b), indicating the specificity of the established LAMP-CRISPR/Cas12a system for V. dahliae detection. Subsequently, we validated the specificity of the established closed-tube detection method; the results demonstrated a specificity identical to that of the open-tube detection method, with the fluorescent signal observed in reactions with V. dahliae but not with the other bacterial and fungal strains.
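For readers who want to reproduce the fragment selection above, the short sketch below shows one way to pull the 178 bp candidate region (positions 12,621-12,798 of CP010981.1, 1-based inclusive) out of a locally downloaded genome FASTA using Biopython. The file path and function name are placeholders of our own; the only fixed inputs are the accession and the coordinates quoted in the text (converted here from the NCBI 1-based convention to Python's 0-based slicing).

```python
# Sketch: extract the 178 bp candidate fragment (CP010981.1:12,621-12,798)
# from a locally downloaded FASTA file. Path and function name are placeholders.
from Bio import SeqIO

START, END = 12_621, 12_798            # 1-based, inclusive (NCBI convention)

def extract_fragment(fasta_path: str, accession: str = "CP010981.1") -> str:
    for rec in SeqIO.parse(fasta_path, "fasta"):
        if rec.id.startswith(accession):
            return str(rec.seq[START - 1:END])   # convert to 0-based slicing
    raise ValueError(f"{accession} not found in {fasta_path}")

if __name__ == "__main__":
    frag = extract_fragment("V_dahliae_CP010981.fasta")  # placeholder path
    assert len(frag) == 178
    print(frag)
```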
Sensitivity Test of the LAMP-CRISPR/Cas12a Fluorescence Visualization Detection System

To test the sensitivity of LAMP-CRISPR/Cas12a, reactions were performed with a 10-fold serially diluted template at 1 ng, 100 pg, 10 pg, 1 pg, 100 fg, 10 fg, 1 fg, and 100 ag of genomic DNA. The results showed that the developed LAMP-CRISPR/Cas12a system can detect genomic DNA template amounts down to 10 fg (Figure 2c,d). Subsequently, the sensitivity of the closed-tube detection method was validated; the results indicated that the optimized closed-tube detection method maintained the same sensitivity as the open-tube detection method.

Detection of V. dahliae in Complex Soil Samples Using the LAMP-CRISPR/Cas12a Fluorescence Visualization Detection System

Since microsclerotia of V. dahliae are a primary source of fungal inoculum in field soil, it is imperative to evaluate the detection of V. dahliae in soil using the established LAMP-CRISPR/Cas12a closed-tube fluorescence detection system. A serial dilution of soil containing one to ten microsclerotia per 0.5 g of soil was prepared. The results showed that the LAMP-CRISPR/Cas12a fluorescence detection system was able to detect as few as one microsclerotium per 0.5 g of soil, indicating that the system provides ultra-sensitive detection for complex soil samples. This fluorescence visualization detection system was also capable of detecting V. dahliae in naturally diseased soil from Xinjiang, China (Supplemental Figure S1).

Development of a LAMP-CRISPR/Cas12a On-Site Detection System for V. dahliae in Soil
After extracting DNA using the aforementioned second soil DNA-extraction method (which requires no additional equipment), we amplified the DNA using the established LAMP-CRISPR/Cas12a technology, replacing the ssDNA FQ probe with the ssDNA FB probe. The reaction was conducted in a smart thermos cup (HONGPA, Huawei, Shenzhen, China), which was connected to a smartphone through an app. The reaction procedure was as follows: LAMP amplification at 62 °C for 45 min, followed by in vitro cleavage by CRISPR/Cas12a at 37 °C for 30 min. After the reaction, a portable lateral flow strip (LFS) (Warbio, Nanjing, China) was inserted into the Cas12a reaction mixture for result interpretation (Figure 3). The specificity and sensitivity of the on-site LAMP-CRISPR/Cas12a system were also evaluated for the detection of V. dahliae. The results showed that a positive signal could only be detected in reactions with V. dahliae (Figure 4a,b). In addition, the developed LAMP-CRISPR/Cas12a on-site detection system can detect as little as 1 fg of genomic DNA (Figure 4c). Furthermore, the system is sensitive enough to detect as few as three microsclerotia per 0.5 g of soil. The entire process, from DNA extraction to obtaining the test results, takes less than 2 h. The established on-site detection system can also detect V. dahliae in naturally diseased soil from Xinjiang, China (Supplemental Figure S2).

Discussion
V. dahliae is a ubiquitous soil-borne fungal pathogen capable of persisting in the soil in the form of microsclerotia for several years. Upon encountering suitable hosts or environments, it can germinate and infect plants, causing Verticillium wilt and leading to significant yield declines in many economic crops. Although various nucleic acid detection techniques have been developed to detect V. dahliae in complex soil environments, their limitations hinder widespread application. For instance, droplet digital PCR and LAMP technology exhibit high false-positive rates, while real-time fluorescence quantitative PCR is costly and complicated to operate. Despite the high specificity and sensitivity of RPA-CRISPR/Cas12a, the high cost of RPA detection restricts field deployment. Currently, there remains a lack of a low-cost detection technology with strong specificity, high sensitivity, and suitability for on-site detection of V. dahliae in soil.

CRISPR/Cas12a-based nucleic acid detection technology is renowned for its exceptional specificity and sensitivity, rendering it suitable for pathogen detection in intricate soil samples. Although previous researchers have developed RPA-CRISPR/Cas12a detection techniques for V. dahliae in soil, the integration of LAMP with Cas12a remained unexplored. LAMP requires four primers that recognize six different regions of the target, making it a remarkably sensitive nucleic acid amplification method, and it employs Bst polymerase, an enzyme highly resistant to PCR amplification inhibitors. Combined with CRISPR/Cas12a, this approach becomes an exceedingly specific and sensitive method for detecting soil-borne pathogens. Compared to RPA-CRISPR/Cas12a, the LAMP-CRISPR/Cas12a method relies on fewer enzymes, requires fewer manual steps, and is relatively more cost-effective, making it particularly suitable for on-site detection.

In this study, we developed a fluorescence detection system specifically for V. dahliae in soil, based on a specific and intraspecifically conserved sequence of V. dahliae combined with LAMP and CRISPR/Cas12a methods. For this system, we employed a method for the rapid extraction of V. dahliae DNA from soil using laboratory equipment. Additionally, by adding trehalose to the reaction mixture containing LAMP and Cas12a in the same tube, false-positive reactions were effectively avoided. Furthermore, to enhance the observation of nucleic acids in the CRISPR/Cas12a reaction, we screened probes with stronger fluorescence signals [27], making the system suitable for laboratory-based detection. The sensitivity of this system enables the detection of a single microsclerotium of V. dahliae in soil, in less than 90 min from DNA extraction to detection.

Additionally, to facilitate on-site detection, we employed a self-established method for in situ extraction of V. dahliae DNA from soil; this involved integrating rapid DNA extraction and visualization with LAMP-CRISPR/Cas12a detection using an LFS device, thereby establishing an on-site detection system. The entire system can be operated in the field using simple portable devices, enabling the complete detection process to be finished within two hours. Although its sensitivity (a minimum of three microsclerotia) is slightly lower than that of the developed fluorometric detection system, it still fulfills the requirements for on-site testing. This study represents the first development of an on-site detection method for V. dahliae in soil.
Previously, pathogen DNA in soil was extracted either using expensive kits or through cumbersome, time-consuming steps requiring laboratory instruments, making on-site DNA extraction impossible. For the first time, this study established a low-cost, simple-to-operate LAMP-CRISPR/Cas12a detection method for the rapid detection of V. dahliae in soil that meets all the necessary requirements. Furthermore, this study provides valuable insights into fast detection methods for other soil-borne pathogens.

Materials and Nucleic Acid Extraction

The V. dahliae strains used in the experiment were provided by the Xinjiang Academy of Agricultural Sciences and the Jiangsu Academy of Agricultural Sciences, China (Supplemental Table S2). V. longisporum strains were provided by the Gansu Academy of Agricultural Sciences. V. alfalfae and V. nubilum strains were provided by Northwest A&F University, China. V. nigrescens, V. albo-atrum, V. nonalfalfae, and R. solani strains were provided by the Chinese Academy of Agricultural Sciences, China. The remaining plant pathogenic strains, including T. harzianum, F. graminearum, P. capsici, Xanthomonas citri subsp. malvacearum (XCM), P. sojae, T. asperellum, Fusarium oxysporum f. sp. cubense (FOC), and M. oryzae, are all preserved at Hainan University, China. The genomic DNA of the fungal strains was extracted using the Fungal DNA Kit (OMEGA BIO-TEK, Norcross, GA, USA), while the bacterial strains underwent genomic DNA extraction with the TIANamp Bacteria DNA Kit (TIANGEN, Beijing, China).

All soil samples, which contained no V. dahliae and had no history of wilt disease, were collected from the agricultural base of Hainan University in Hainan Province, China. The microsclerotia of V. dahliae were prepared according to the method reported by Pérez-Artés et al. (2004) [38], and different quantities of microsclerotia were mixed into clean soil to prepare artificially infested soil. The naturally diseased soil used in this study was taken from around diseased cotton plants in Xinjiang, China, at a depth of approximately 20 cm.

Two methods of fungal DNA extraction from soil samples were used in this study. The first method requires laboratory equipment and proceeds as follows: a 0.5 g soil sample was added to a 2 mL centrifuge tube containing 600 µL of lysis solution (100 mmol·L⁻¹ Tris-HCl, 100 mmol·L⁻¹ EDTA·Na₂, 100 mmol·L⁻¹ Na₃PO₄, 200 mmol·L⁻¹ NaCl, 2% PVPP, 0.5% SDS, pH 8.0) and two approximately 5 mm steel beads. The tube was then shaken in a bead beater (60 Hz, 100 s) and centrifuged at 15,000 rpm for 5 s, after which filter paper strips were used to recover the nucleic acids. The strips were prepared from Whatman No.
1 test paper (Whatman, Maidstone, UK): before use, the paper was processed by first immersing half of it into molten paraffin to form a hydrophobic zone. After the paraffin solidified, the partially paraffin-coated filter paper was cut into 44-mm-wide rectangles, with approximately 40 mm coated with paraffin and 4 mm uncoated. Each rectangle was then cut into strips approximately 2 mm wide, forming test strips with a 2 × 4 mm nucleic acid-binding region and a 2 × 40 mm handle. These strips were used to touch the supernatant three times, each time for 1-2 s, to bind the nucleic acids. The strips were then dipped into 2 mL of wash solution (10 mM Tris [pH 8.0], 0.1% Tween-20) three times, and after washing, they were dipped into PCR tubes containing 40 µL of nuclease-free water three times, each time for 1-2 s. After mixing the 40 µL system, 1.5 µL was taken as the template for the LAMP-CRISPR/Cas12a reaction. The second method requires no laboratory equipment and allows on-site soil DNA extraction: a 0.5 g soil sample was added to a 2 mL centrifuge tube containing 800 µL of the above lysis solution and steel beads. The tube was vigorously shaken for approximately 30 s for disruption and then left to stand until 150-200 µL of supernatant appeared (this takes about 10-30 min). The supernatant was then transferred to a new 1.5 mL centrifuge tube and left to stand for about 5 min to precipitate excess impurities, after which the filter paper strips were used as described above.

PCR Primers, LAMP Primers, and Reporter Probes

Analysis of a 916 bp DNA sequence revealed that a 178 bp segment is unique to V. dahliae. Comparing this segment to the DNA sequences of the same and related species in NCBI showed that the 178 bp sequence varies significantly between different Verticillium species but remains highly conserved within V. dahliae. First, based on the 916 bp DNA sequence, a PCR primer pair, PCR-F/R, with a 600 bp target fragment was designed and used to amplify the genomic DNA of 16 representative strains of V. dahliae from Xinjiang and Jiangsu. The LAMP primers were designed based on the 178 bp sequence using Primer Explorer V5 and synthesized by Bgi Tech Solutions (Liu He) Co., Ltd. (Beijing, China). Five sets of primers were designed to amplify the genomic DNA of V. dahliae; one primer set was found to have good amplification efficiency within 45 min, and the LAMP primers are listed in Table S1. For the fluorescent detection system, an ssDNA FQ probe (labeled with FAM and BHQ1) was designed. For the immunoassay strip detection reaction, an ssDNA FB probe (labeled with FAM and biotin) was designed; both probes were synthesized by Bgi Tech Solutions (Liu He) Co., Ltd.
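Anticipating the guide RNA design described in the next subsection, which requires a TTTN PAM, the sketch below illustrates the first step of crRNA candidate selection: scanning a target DNA string for TTTN motifs and listing the downstream 20-nt spacers. The function name and the sequence are placeholders of our own (the sequence shown is not the actual 178 bp V. dahliae fragment), and in practice both strands would be scanned.

```python
# Minimal sketch: enumerate candidate Cas12a (Cpf1) spacers downstream of TTTN PAMs.
# The target sequence below is a PLACEHOLDER, not the actual V. dahliae fragment.
# Only the + strand is scanned here; a real design would scan both strands.

def find_cas12a_spacers(seq: str, spacer_len: int = 20):
    """Return (pam_position, pam, spacer) for every TTTN PAM on the + strand."""
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - 4 - spacer_len + 1):
        pam = seq[i:i + 4]
        if pam.startswith("TTT"):            # TTTN PAM required by LbCas12a
            spacer = seq[i + 4:i + 4 + spacer_len]
            hits.append((i, pam, spacer))
    return hits

if __name__ == "__main__":
    placeholder_target = ("ATGCTTTACGGTACCGTTAGCATGCATTTCG"
                          "GATCCGTTTAGCGTACGATCGTTAGCCGTAA")
    for pos, pam, spacer in find_cas12a_spacers(placeholder_target):
        print(f"PAM {pam} at {pos}: spacer {spacer}")
```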
Guide RNA Design and sgRNA Synthesis

The crRNA of Cas12a requires a specific TTTN PAM (protospacer adjacent motif); based on the selected LAMP sequences as targets, candidate crRNAs were designed using the Benchling online platform, and two high-scoring crRNA sequences were chosen. Bgi Tech Solutions (Liu He) Co., Ltd. (Beijing, China) was commissioned to synthesize the corresponding short oligonucleotides carrying the T7 promoter, repeat sequence, and spacer sequence, which served as templates for the guide RNA. After crRNA was synthesized using the T7 Quick High Yield RNA Synthesis Kit (New England Biolabs, Cat#E2050S) and purified using the RNA Cleanup Kit (New England Biolabs, Cat#T2040S), the purified crRNA was diluted and stored at −80 °C for future use.

LAMP Reaction

The LAMP reaction was assembled from 2.5 µL of 10× ThermoPol Buffer, 1.5 µL of 100 mM MgSO₄, 1.4 µL of 25 mM dNTPs, 1 µL each of 40 µM LAMP-FIP and LAMP-BIP primers, 1 µL each of 5 µM LAMP-F3 and LAMP-B3 primers, 1 µL each of 10 µM LAMP-LF and LAMP-LB primers, 1 µL of 8000 U/mL Bst 2.0 WarmStart DNA Polymerase (New England Biolabs, Ipswich, MA, USA), 1 µL of 5 mM betaine, and 1 µL of template DNA, with sterilized ultrapure water added to a final volume of 25 µL. The reaction was held at 62 °C for 45 min, and the LAMP products were analyzed by 2% agarose gel electrophoresis.

LAMP-CRISPR/Cas12a Fluorescence Visualization Detection System

First, the reaction system combining LAMP with CRISPR/Cas12a cleavage was established. LAMP was used to amplify the genomic DNA of the pathogen in a PCR reaction tube. The LAMP system included 2.5 µL of 10× ThermoPol Buffer, 1.5 µL of 100 mM MgSO₄, 1.4 µL of 25 mM dNTPs, 1 µL each of 4 µM LAMP-FIP and LAMP-BIP primers, 1 µL each of 5 µM LAMP-F3 and LAMP-B3 primers, 1 µL each of 10 µM LAMP-LF and LAMP-LB primers, 1 µL of 8000 U/mL Bst 2.0 WarmStart DNA Polymerase, 1 µL of 5 mM betaine, and 1 µL of template DNA, with sterilized ultrapure water added to a final volume of 25 µL. The Cas12a cleavage system was placed in another reaction tube and included 3 µL of 10× NEBuffer r2.1, 3 µL of 300 nM crRNA, 1 µL of 1 µM EnGen Lba Cas12a (Cpf1) (New England Biolabs), 0.5 µL of RNase inhibitor (TaKaRa, Dalian, China), and 1 µL of the ssDNA FQ probe, supplemented with nuclease-free water to a total volume of 27 µL. After LAMP amplification was completed, 1 µL of LAMP product was transferred to the Cas12a cleavage system for targeted DNA cleavage. Fluorescence intensity was measured using a microplate reader (Infinite M200 PRO), and the fluorescence signal of the ssDNA FQ reporter probe was visualized with a UV flashlight. Positive reactions (containing genomic DNA of V. dahliae) show green fluorescence under the UV flashlight, while negative reactions show no fluorescence.
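Assembling these multi-component mixes for many samples invites pipetting errors, so a small helper that scales the per-reaction volumes from the LAMP recipe above to an N-reaction master mix can be handy. The sketch below is our own illustration: the volumes come from the text, but the 10% overage and the function names are conventions we assume, not part of the published protocol.

```python
# Sketch: scale the per-reaction LAMP mix described above to an N-reaction master mix.
# Volumes (µL per 25 µL reaction) follow the text; the 10% overage is our own convention.

LAMP_MIX_UL = {
    "10x ThermoPol Buffer": 2.5,
    "100 mM MgSO4": 1.5,
    "25 mM dNTPs": 1.4,
    "LAMP-FIP primer": 1.0,
    "LAMP-BIP primer": 1.0,
    "LAMP-F3 primer": 1.0,
    "LAMP-B3 primer": 1.0,
    "LAMP-LF primer": 1.0,
    "LAMP-LB primer": 1.0,
    "Bst 2.0 WarmStart polymerase": 1.0,
    "5 mM betaine": 1.0,
}
TEMPLATE_UL, TOTAL_UL = 1.0, 25.0

def master_mix(n_reactions: int, overage: float = 0.10):
    """Volumes (µL) of each reagent for n_reactions, with a safety overage."""
    n_eff = n_reactions * (1 + overage)
    mix = {name: round(v * n_eff, 2) for name, v in LAMP_MIX_UL.items()}
    water_per_rxn = TOTAL_UL - TEMPLATE_UL - sum(LAMP_MIX_UL.values())
    mix["sterilized ultrapure water"] = round(water_per_rxn * n_eff, 2)
    return mix  # template DNA (1 µL) is added per tube, not to the master mix

if __name__ == "__main__":
    for reagent, ul in master_mix(24).items():
        print(f"{reagent:32s} {ul:7.2f} µL")
```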
However, opening the lids of reaction tubes containing LAMP amplicons can lead to aerosol contamination, owing to the large amount of LAMP product, which may result in false-positive outcomes. To address this issue, trehalose (Solarbio, Beijing, China) was added to the Cas12a system to enhance the thermal stability of the Cas12a protein. The LAMP-CRISPR/Cas12a closed-tube detection system is as follows: the LAMP system is placed at the bottom of the tube and the Cas12a cleavage system in the tube cap. After LAMP amplification is complete, the Cas12a system is brought down from the tube cap by centrifugation or manual shaking to mix the two systems, and the tube is then incubated at 37 °C for 30 min in a constant-temperature water bath.

Figure 3. The operational workflow of the LAMP-CRISPR/Cas12a on-site detection system: Step 1, extraction of total DNA from soil; Step 2, the LAMP-CRISPR/Cas12a reaction, carried out in a smart thermos cup; Step 3, result readout using a lateral flow strip (LFS).

Figure 4. Establishment of the LAMP-CRISPR/Cas12a on-site detection system. (a,b) Specificity of the system, with results read out on a lateral flow strip (LFS). (c) Sensitivity of the system, with results read out on an LFS.
Efficient logging and querying for blockchain-based cross-site genomic dataset access audit

Background: Genomic data have been collected by different institutions and companies and need to be shared for broader use. In a cross-site genomic data sharing system, a secure and transparent access control audit module plays an essential role in ensuring accountability. A centralized access log audit system is vulnerable to a single point of attack and also lacks transparency, since the log could be tampered with by a malicious system administrator or internal adversaries. Several studies have proposed blockchain-based access audits to solve this problem, but without considering the efficiency of the audit queries. The 2018 iDASH competition first track provided us with an opportunity to design an efficient logging and querying system for cross-site genomic dataset access audit. We designed a blockchain-based log system which can serve as a light-weight and widely compatible module for existing blockchain platforms. The submitted solution won third place in the competition. In this paper, we report the technical details of our system.

Methods: We present two methods: a baseline method and an enhanced method. We started with the baseline method and then adjusted our implementation based on the competition evaluation criteria and the characteristics of log systems. To overcome the obstacles of indexing on an immutable blockchain, we designed a hierarchical timestamp structure which supports efficient range queries on the timestamp field.

Results: We implemented our methods in Python3, tested the scalability, and compared the performance using the test data supplied by the competition organizer. We successfully boosted the log retrieval speed for complex AND queries that contain multiple predicates. For the range query, we boosted the speed by at least one order of magnitude. The storage usage was reduced by 25%.

Conclusion: We demonstrate that a blockchain can be used to build a time- and space-efficient system to log and query genomic dataset access audit trails. It therefore provides a promising solution for sharing genomic data with accountability requirements across multiple sites.

Background

With the rapid development of biomedical and computational technologies, large amounts of genomic data have been collected and analyzed in national and international projects such as the Human Genome Project [1], the HapMap project [2], and the Genotype-Tissue Expression (GTEx) project [3], which yielded invaluable research data and extended the boundary of human knowledge. Thanks to advances in computer technology, the cost of genomic testing is dropping exponentially; nowadays, the price ranges from under $100 to more than $2,000, depending on the nature and complexity of the test [4]. One can have one's genes tested easily and cheaply using services from DNA-testing companies such as Ancestry and 23andMe. As a result, genomic data sets are scattered around the world across different institutions and companies. On the other hand, the potential business value of genomic data and privacy concerns [5-7] hinder cross-site sharing of genomic data. Notably, the General Data Protection Regulation (GDPR) restricts the exchange of personal data: under the GDPR, such sensitive data can only be accessed after obtaining the consent of the data subjects (i.e., the owners of the data) and providing an accountability audit.
This requires that any cross-site genomic data sharing system be equipped with a secure and transparent access control module. Blockchain technology has received increasing attention because it provides a new paradigm of value exchange. Although it stems from cryptocurrency, many studies have investigated the adoption of blockchain in application scenarios beyond the financial domain that typically involve multiple parties with conflicts of interest, such as personal data sharing [8-10], supply chains [11-13], identity management [14,15], and medical data management [16-22]. They show that using blockchain technology can reduce friction and increase transparency. A blockchain system has several notable features: decentralization, immutability, and transparency. These are achieved by cryptographic hashes, consensus algorithms, and many other innovations from previously unrelated fields such as cryptography and distributed computation [23]. Due to space limitations, we do not introduce further details of blockchain technology and refer interested readers to surveys on blockchain [24-31]. Several studies have investigated blockchain-based access log audit [32-34] (we introduce them in the next section). They focus on how to achieve the immutability of the log; however, none of them investigated the efficiency of logging and querying for a blockchain system at the application layer. On the other hand, a few recent studies [35-38] from the database community consider a blockchain system as a distributed database and attempt to improve the performance of such systems by exploring new designs for the bottom layers (such as storage or transaction processing). However, without considering the application characteristics, such modifications to the back-end engine of the system may not deliver the desired performance improvement for every application, and may even cause unexpected side effects. The 2018 iDASH competition first track, "Blockchain-based immutable logging and querying for cross-site genomic dataset access audit trail", provides us with an opportunity to explore a light-weight and widely compatible access audit module for existing blockchain platforms. Our submitted solution won third place in the competition. In this paper, we report the system design and technical details of our solution.

The competition task [39]

The goal of the iDASH competition 2018 first track is to develop blockchain-based ledgering solutions to log and query user activities of accessing genomic datasets across multiple sites. Concretely, given a genomic data access log file in which each entry includes seven attributes (Timestamp, Node, ID, Ref-ID, User, Activity, Resource), the task is to design a time/space-efficient data structure and mechanisms to store and retrieve the logs based on Multichain version 1.0.4 [40]. Competition setup and requirements: It is required that each entry in the data access log be saved individually as one transaction (i.e., participants cannot save the entire file in just one transaction), and all log data and intermediate data (such as indexes or caches) must be saved on-chain (no off-chain data storage is allowed). Competition participants can determine how to represent and store each log entry in transactions; it does not need to be a plain-text copy of the log entry.
Also, the query implementation should allow a user to search the log using any field of a log entry (i.e., node, id, user, resource, activity, timestamp, and a "reference id" referring to the id of the original resource request), any "AND" combination of fields (e.g., node AND id AND user AND resource), and any timestamp range (e.g., from 1522000002418 to 1522000011441) through a command-line interface. The user should also be able to sort the returned results in ascending/descending order by any field (e.g., timestamp). There will be 4 nodes in the blockchain network and 4 log files to be stored, and users should be able to query the data from any of the 4 sites. Participants can implement any algorithms to store, retrieve, and present the log data correctly and efficiently. Evaluation criteria: The logging/querying system needs to demonstrate correct behavior (i.e., accurate query results) on a testing dataset different from the one provided to the participants. The speed, storage/memory cost, and scalability of each solution are evaluated. The competition organizer used the binary version of Multichain 1.0.4 on 64-bit Ubuntu 14.04 with the default parameters as the test bed for fairness; no modification of the underlying Multichain source code is allowed. The submitted executable binaries should be non-interactive (i.e., depend only on parameters, with no input required while running) and should include a readme file specifying the parameters. The organizer tested all submissions using 4 virtual machines, each with a 2-core CPU, 8 GB RAM, and 100 GB storage.

Related work

The closest line of work to this competition is blockchain-based access log audit. Suzuki et al. [32] proposed a method using blockchain as an auditable communication channel. This study was motivated by a problem similar to the one studied in this paper: in a client-server system, logging on either the server side or the client side does not provide strict means of auditing, because the host of the logging system could tamper with the log. They implemented a proof-of-concept system on top of Bitcoin by encoding the messages (i.e., API calls from clients and replies from the server) between clients and the server into Bitcoin transactions. Since the transactions are publicly available, they can be retrieved and verified by an auditor as needed. The proposed system is easy to use and convenient for a client-server system. However, answering an audit query using it may be time-consuming, especially for a large-scale system serving millions of clients, as each reply is returned in the form of a Bitcoin transaction; the maximum transaction processing capacity of Bitcoin is estimated at between 3.3 and 7 transactions per second [41]. Castaldo et al. [33] implemented a blockchain-based tamper-proof audit mechanism for OpenNCP (Open National Contact Points) [42], a system for exchanging eHealth data between countries in Europe. The idea is similar to the one proposed in [32], but deals with data exchange instead of answering queries. They also encode the data to be exchanged into transactions, but the data are encrypted using symmetric keys shared in advance between the sender and receiver through a secure channel. The authors suggest using Multichain because it provides low overhead for transaction handling. ProvChain [34] is a blockchain-based data provenance architecture for assuring data operations (i.e., data access and data changes) in cloud storage applications.
This differs from the previous two solutions, as the major challenge is that the provenance data are themselves sensitive but still need to be validated by a third party. The authors propose an additional layer acting as a provenance auditor, which interacts with the blockchain network through blockchain receipts that include the provenance entries for future validation.

The 2018 iDASH competition first track provides us with an opportunity to explore the design of efficient logging and querying methods for a blockchain system. We attempt to design a blockchain-based log system that can serve as a light-weight and widely compatible component for existing blockchain platforms. In particular, our solution is optimized for genomic dataset access auditing under the requirements of the competition task.

Method

We design a blockchain-based log system that is time/space-efficient for storing and retrieving genomic dataset access audit trails. Our method leverages only the blockchain mechanism itself and is not limited to any specific blockchain implementation, such as Bitcoin [43] or Ethereum [44]. We introduce an on-chain indexing data structure which can easily be adapted to any blockchain that uses a key-value database as its local storage. In our development, we use Multichain version 1.0.4 as an interface between the Bitcoin blockchain and our insertion and query methods. Multichain is a fork of the Bitcoin blockchain; it conveniently provides a feature, data streams, that allows us to use the Bitcoin blockchain as an append-only key-value database.

Overview

In Fig. 1, we illustrate the overview of the logging system, which is built on top of the Multichain APIs. The core task is to design space- and time-efficient methods for insertion and queries. As described in the section on the competition task, there are three types of primitive queries: point query, AND query, and range query. There are seven fields in the given genomic dataset: Timestamp, Node, ID, Ref-ID, User, Activity, and Resource, as shown in Table 1. For a point query, the user can query on any field. For an AND query, the user can query on any combination of fields. For a range query, the user can query only on the timestamp field, with a start and an end timestamp. See Tables 1 and 2 for a running example.

Baseline method

We first describe a naive method as a baseline. The baseline method leverages only three Multichain APIs, as shown in Table 3. Insertion: First, we create K streams, where K is the number of fields; Multichain builds K tables in its back-end key-value database. Second, we build K key-value pairs, where the key is the attribute value and the value is the entire record line. Finally, we convert those K pairs into one blockchain transaction and publish it to the blockchain. Figure 2 shows an example conversion from a log record to a blockchain transaction; we will use this example log record in the remaining sections. After the transaction is confirmed by the blockchain, Multichain decodes the transaction and inserts each key-value pair into its corresponding table. Point Query: The implementation of a point query is straightforward; it simply returns a list of records, as shown in Algorithm 1. In this paper, we assume the run-time complexity of every Multichain API call is O(1); the run-time complexity of the point query is then O(1).

Enhanced method

After testing the baseline solution, which is discussed in the results section, we found that the retrieval speed depends heavily on the number of API calls: the fewer API calls we use, the faster the retrieval.
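Before turning to the enhancements, a minimal sketch of the baseline scheme may help make it concrete. The `chain` object below stands in for a Multichain RPC client (e.g., as exposed through a wrapper such as Savior); the method names `publish` and `liststreamkeyitems` mirror the Multichain APIs used in Table 3, but the surrounding helper code and the record serialization format are our own illustration, not the competition submission.

```python
# Sketch of the baseline scheme: one stream per field, the full record duplicated
# as the value under every field's key. `chain` is assumed to be a Multichain RPC
# client exposing publish() and liststreamkeyitems(); helper code is illustrative.

FIELDS = ["Timestamp", "Node", "ID", "Ref-ID", "User", "Activity", "Resource"]

def baseline_insert(chain, record: dict) -> None:
    line = "|".join(str(record[f]) for f in FIELDS)   # serialized record line
    for field in FIELDS:
        # key = attribute value, value = entire record line (duplicated K times)
        chain.publish(stream=field, key=str(record[field]),
                      data=line.encode().hex())

def baseline_point_query(chain, field: str, value: str) -> list:
    # One API call; Multichain returns every item published under this key.
    items = chain.liststreamkeyitems(stream=field, key=value)
    return [bytes.fromhex(it["data"]).decode() for it in items]
```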
More specifically, we found three non-optimal issues:

• The entire record is duplicated K times, where K is the number of fields, which is inefficient in terms of storage overhead.
• Since we need to query all results and intersect them in local memory, an AND query takes significant time.
• A range query over timestamps must issue one point query per timestamp in the range, which requires a prohibitively large number of API calls.

The blockchain-based auditing system is an append-only structure, so a data structure that keeps the minimum amount of information while maintaining efficiency is essential. The percentage of read (query) operations in a real-world auditing system is low [45]; therefore, we trade retrieval speed for storage cost. We redesigned the key-value pairs in the blockchain transaction, modified the query algorithms accordingly, and built a selectivity list based on the data distribution. Most importantly, we designed a hierarchical timestamp structure, which significantly reduces the number of queries (API calls) needed for a range query.

Insertion: To address these problems, we redesigned the key-value pairs. The key part remains the same (the attribute value), but we removed the entire entry from the value part. As a result, we removed all the duplicated values present in the baseline method, as shown in Fig. 3.

Point Query: Since we now have an empty value in the key-value database, we cannot use the key to get the original record directly. Instead, we take advantage of the blockchain transaction ID, which is included in the JSON returned by the liststreamkeyitems API. First, we get a list of TXIDs (transaction IDs) for the given key. Second, we use another Multichain API, getrawtransaction, to get the matching transactions. Finally, we rebuild the original record from the transaction, in which all the attribute data are included. It is worth mentioning that the point query now requires 1 + T API calls to retrieve the records, where T is the size of the TXID list. If modification of Multichain were allowed in this competition, we could combine these three steps into one, reducing the total API calls from 1 + T to 1. In other words, if Multichain nodes could perform the work from line 3 to line 6 in Algorithm 4, users could point query with just one API call. The run-time complexity of our point query is O(T), where T is the size of the TXID list.

AND Query: In order to reduce the retrieval cost, we build a selectivity list for the attributes based on the example test data supplied by the competition organizer. The selectivity list ranks the attributes by their result sizes: the attribute with the smallest expected result size is queried first, and the returned records are then filtered locally against the remaining predicates.

Timestamp Range Query: Since the blockchain is an immutable structure, common indexing techniques such as B-trees and R-trees, which require adjusting/balancing the entire data structure according to the data distribution, will not work. We introduce a hierarchical timestamp structure, an incremental data structure that matches the append-only characteristics of the blockchain system. Our design significantly reduces the number of queries (API calls) needed for a single range query. The hierarchical timestamp structure consists of multiple levels; see Table 4 for an example. Each range at a higher level divides into multiple smaller ranges at the lower level. We denote each range part as LevelNumber:StartingTimestamp. A timestamp is recorded in the corresponding part at every level. In our running example, the timestamp 111 is recorded under L0:100, L1:110, and L2:111 in Table 4.
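A minimal sketch of this structure, using the Table 4 example parameters (three levels with step sizes 100, 10, and 1; the tuned configuration reported later uses a step multiplier of 100), is shown below. `hier_keys` produces the extra stream keys written at insertion time, and `cover` greedily decomposes a [start, end] range into the largest aligned parts, which the enhanced range query then resolves with one point query per part. The function names and the greedy formulation are our own; the paper describes the decomposition as recursively finding the largest ranges, which this sketch implements iteratively.

```python
# Sketch of the hierarchical timestamp structure. Level step sizes follow the
# Table 4 example (100, 10, 1); the last step size must be 1. Names are ours.

STEPS = [100, 10, 1]  # step size of levels L0, L1, L2 (descending, last = 1)

def hier_keys(ts: int, steps=STEPS):
    """Stream keys written for one timestamp, e.g. 111 -> L0:100, L1:110, L2:111."""
    return [f"L{lvl}:{ts - ts % s}" for lvl, s in enumerate(steps)]

def cover(start: int, end: int, steps=STEPS):
    """Greedily cover [start, end] with the largest aligned parts of each level."""
    parts, t = [], start
    while t <= end:
        for lvl, s in enumerate(steps):
            # take this level's part if t is aligned to it and the part fits
            if t % s == 0 and t + s - 1 <= end:
                parts.append(f"L{lvl}:{t}")
                t += s
                break
    return parts

# Example: covering [95, 321] needs 11 point queries instead of 227.
# print(cover(95, 321))
```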
To build this structure, we slightly modify the insertion method by adding L streams, where L is the number of levels, and we add L key-value pairs to the blockchain transaction as well; see Fig. 4 for an example. In our enhanced range query method, we recursively find the largest aligned ranges in the hierarchical timestamp structure and use multiple point queries to retrieve the result. This reduces the number of queries needed for a range query from R = T_e − T_s (one point query per timestamp in the range) to the number of parts in the hierarchical cover of the range; the run-time complexity of the enhanced range query is proportional to that number of parts, roughly at most 2(m − 1) parts at each lower level plus the parts at the top level, where m is the step multiplier between adjacent levels.

Further optimizations

Database normalization can be used for both the baseline and the enhanced solution, according to the given datasets.

Implementation environment

We used Python3 as our main programming language to develop our solution, Savior [46] to interact with the Multichain API, and Docker [47] to simulate 4 blockchain nodes. Additionally, we created bash scripts to automatically set up the blockchain nodes and the Multichain environment, and we wrote a benchmark program to compare our baseline and enhanced methods. Our code is publicly available [48]. We used the sample testing data supplied by the competition organizer to benchmark our implementation. The sample testing data consist of 4 files, one per node; each file has 10^5 log records with 7 fields (Timestamp, Node, ID, Ref-ID, User, Activity, Resource). To illustrate, we provide a few sample records in Table 1. To find the optimal number of levels and the step multiplier between two adjacent levels for the hierarchical timestamp structure (Fig. 4), we tested all reasonable parameter combinations by brute force. For the given sample data, the optimal number of levels is 3 and the optimal step multiplier between two adjacent levels is 100. Future work may include finding the optimal number of levels in a more efficient way.

Benchmark

In our benchmark experiments, we show the scalability of our two methods alongside LevelDB [49] as a reference. Many blockchain systems [43,44,50] use LevelDB as a back-end database to store the raw transaction data; it is worth mentioning that those systems only index the raw transactions, not the actual content inside the transactions. Database systems and blockchains do not share the same design goal: the former is usually administered by a centralized entity, while the latter is intended to work in a trustless environment. Nevertheless, this comparison offers useful insights into a blockchain-based log system, which trades speed for data integrity. We simulate the behavior of the enhanced insertion, the enhanced point query, and the enhanced AND query in LevelDB. For the range query, we use the native LevelDB method so that we can properly examine our hierarchical timestamp structure. In all tests, we run 10 rounds for each method while varying the number of records, and we calculate the average and the standard deviation of the results. We notice that the standard deviation is extremely small, which shows as barely visible error bars in all figures except Fig. 5 (point query); this is due to the identical environment and setup of our simulated blockchain nodes. Figure 5 shows the query time with respect to the number of records for the point query, range query, and AND query. For the point query test, the response time is determined by the result size: as the number of records increases, the result size increases and the response time increases. The response time of the enhanced method is worse than that of the baseline method because of the additional API calls introduced in the enhanced point query.
For the range query test, the performance is constant, since the result size for a given time range is constant. It is worth mentioning that our enhanced range query method performs very close to the native LevelDB range query method. For the AND query, which consists of point queries, the response time increases with the number of records. It is worth mentioning that the selectivity list in our enhanced AND query method offsets the drawback of the enhanced point query method when the number of keys is larger than 2. Figure 6 shows the completion time of the insertion methods with respect to the number of records. The insertion time depends on the transaction size, and the insertion times of the two methods are approximately the same: the enhanced method needs more key-value pairs to support the hierarchical timestamp indexing structure, but the empty values in its key-value pairs offset this increase in transaction size. Figure 7 shows the total blockchain size in bytes with respect to the number of records; the blockchain size information is collected by calling a Multichain API. Since blockchains and LevelDB measure their sizes in different ways, we exclude LevelDB from this test. The figure shows that the enhanced method uses less storage than the baseline method; the removal of duplicates from the blockchain transactions in the enhanced method works as designed.

Detailed comparison

In this section, we show the detailed performance differences for the 3 query types in the baseline method, the enhanced method, and LevelDB. We use a fixed 1000 records in the remaining tests. Point Query: Figure 8 shows the query response time for different attributes. The enhanced method performs worse than the baseline method because of the additional API calls in the enhanced method. The ranking of the results also matches the ranking in the selectivity list, which reflects the returned record size: the returned record size of Activity is the largest among the attributes. In other words, Activity has the lowest selectivity and needs more API calls to get the result than the other attributes, so it shows the worst performance difference. Range Query: Figure 9 shows the query response time with respect to the time range. The enhanced method is at least one order of magnitude better than the baseline method, which proves that our hierarchical timestamp structure can batch a large number of queries into a small (almost constant) number of queries. Hence, the enhanced method achieves almost constant-time performance, like the native LevelDB range query method. AND Query: Figure 10 shows the query time with respect to the number of keys. We test all combinations of keys; for example, for the 2-key test, we test all 21 combinations (7 choose 2) and average the results. It is much easier to find a highly selective key when the number of keys increases, which is why the enhanced method shows a downward slope. When there are only 2 keys, the enhanced method has a high chance of ending up with a low-selectivity key, and when the AND query starts from a low-selectivity key, it requires a long response time.

Discussion

Our design is heavily governed by the competition requirements, the evaluation criteria [39], and the capabilities of Multichain 1.0.4. In this paper, we intend to use Multichain only as an interface, so our design can be applied to any arbitrary blockchain system.
Multichain 1.0.4 does not allow an item to have multiple keys, and the competition does not allow participants to modify Multichain, so we have to construct the blockchain transaction manually. There are two major directions for future work: 1) A new interface which encodes/decodes the log entry to/from the blockchain transaction more efficiently. For example, with a Bitcoin transaction script, the entire log entry could be written only once in the blockchain transaction, and a local interface client could translate the script into database records; a Bitcoin-specific interface for this log system could significantly reduce the transaction size. 2) A new blockchain-oriented database system, such as Forkbase [37], which aims at a new key-value architecture that reduces the development effort for applications like ours and provides efficient analytical query performance. It is possible to replace the key-value engines in existing blockchain platforms with such a system for better query performance. In this paper, we focus on designing efficient logging and querying schemes for immutable blockchain systems, and we assume the blockchain network has been well established under a specific consensus algorithm with acceptable transaction throughput. In the following, we discuss how these assumptions may affect our solution. The consensus algorithm may affect the performance of the insertion functions, because a newly generated access log (as a transaction) needs to be accepted by all nodes in the network (achieving a consensus on the next block) in order to be stored in the ledger. The consensus algorithm determines the transaction throughput, which in Multichain is mainly controlled by a predefined parameter called mining-diversity (the default configuration is 0.3). If the transaction throughput is low, insertion becomes inefficient, since it may be suspended until the previous batch of logs is committed. The transaction throughput also affects the audit queries, because a query is performed on the locally synchronized ledger. Under low transaction throughput, a newly generated log may take a long time to be included in the ledger and synchronized to a node, so a query on that node may not be able to provide an accurate real-time answer. Further, the access log may need to be kept private, since it records all of the queries issued by a user. This is a challenge for existing blockchain platforms, since the ledger is public to every node in the network to increase transparency and security. A recent version of Hyperledger Fabric [50] includes a new function for this problem: the idea is to divide the ledger into different channels and to selectively share each channel among a subset of users. There are also other efforts addressing this problem by adopting secure multiparty computation [9], zero-knowledge proofs [10], or trusted hardware [51]. Although this problem is beyond the scope of the competition, our solution could be extended using the above techniques. Conclusions In this paper, we presented two solutions for blockchain-based logging and querying of a genomic dataset audit trail. We built a baseline solution and then adjusted our implementation based on the evaluation criteria of the competition [39] and the general real-world characteristics of log systems [52]. A blockchain-based log system is an append-only structure, so its storage grows monotonically. In the real world, the proportion of write operations (insertions) in the workload is much higher than the proportion of read operations (queries) [52].
For these two reasons, we decided to prioritize storage space over retrieval speed and insertion speed. We can reduce the storage cost by 25% and increase the range query speed by at least one order of magnitude. We claim that our hierarchical timestamp structure design is independent of the blockchain implementation: it can be adapted to any blockchain (e.g., Bitcoin, Ethereum, Hyperledger) with the help of an intermediary such as Multichain.
6,213.2
2019-07-17T00:00:00.000
[ "Computer Science" ]
Modifying Effect of a New Boron-Barium Ferroalloy on the Wear Resistance of Low-Chromium Cast Iron: This paper presents the results of a study of the effect of modifying low-chromium hypoeutectic cast iron with a new boron-barium ferroalloy on its properties (wear resistance and impact resistance) in comparison with traditional boron- and barium-containing additives. The uniqueness and novelty of the work lie in the study of the nature of the changes in the structure and wear-resistant properties of low-chromium cast iron as a result of its modifying treatment with a new boron-barium ferroalloy. Low-chromium cast iron was melted in a laboratory electric resistance furnace, and four batches of prototype samples were cast. Samples of the first batch, for subsequent comparison, were made without modification. When casting the remaining three batches, the cast iron was modified with three different additives: ferroboron FeB12, ferrosilicobarium FeSi60Ba20, and a new complex boron-barium modifier. In order to compare the effectiveness of the applied modifiers, a metallographic analysis of the structure was performed, hardness was measured on the surface of the samples, and the samples were subjected to abrasion and cyclic shock-dynamic impact tests. In all cases, modification of the cast iron produced an increase in hardness, a noticeable refinement of the microstructure, and a redistribution of the structural components towards a larger proportion of pearlite and finely dispersed ledeburite. A comparative analysis of the dry friction and impact tests showed a higher surface resistance of the cast samples made of modified cast iron compared to unmodified low-chromium cast iron of the same composition. A comparative study of the parameters of the wear tracks and craters on the damaged surfaces established that the best combination of wear-resistant qualities of low-chromium cast iron is obtained when it is treated with the complex boron-barium modifier, which is also evidenced by its more favorable microstructure. Introduction The sustainable development of the raw-material sector of the world economy, along with the growing need to improve the technical and economic indicators of production, creates the prerequisites for the search for new, economical, and efficient solutions in technological production processes. One such problem in modern mechanical engineering, which has not lost its relevance and requires attention, is the improvement of the performance properties of low-alloy wear-resistant chromium cast iron as one of the most accessible and cheap materials for the manufacture of cast parts operating under friction and wear conditions. The world economy is based on mining and processing natural mineral raw materials into final products. One of the most labor-intensive and costly processes is the crushing and fine grinding of the mined materials for the fullest possible recovery of the useful elements. The fine grinding of mineral raw materials is carried out in special ore mills using steel and cast iron grinding media (rods, balls, cylpebs, etc.). The resistance of these grinding elements to impact and abrasive wear determines the efficiency of the technologies used, as the costs of crushing and grinding are, on average, 60-70% of the cost of the resulting product [1].
In addition to grinding media, costly consumable elements of the crushing and grinding equipment of processing plants that are subject to intense impact and abrasive wear include lining elements of mills, classifiers, crushers, and screens, working parts of slurry and sand pumps, and many more. The main and most common materials for manufacturing the above products are alloy steels and white cast irons. Steel grinding bodies are mainly produced by severe plastic deformation of the metal (rolling and forging), whereas other steel products with more complex configurations are produced by casting or welding. Parts of white cast iron are made exclusively by foundry methods: by casting into sand-clay molds and chill molds, by the lost foam casting method, etc. The severe operating conditions of parts from this group impose rather high and to some extent contradictory requirements on the materials for their manufacture: the metal must have high bulk hardness in order to resist friction and wear, and at the same time be sufficiently ductile and strong to withstand impact loads. Today, the world leaders in the market of high-quality grinding media are foreign companies: the South African Scaw Metals company for steel balls and the Belgian Magotteaux company for cast iron balls. These manufacturers produce their products on expensive high-tech lines. The strong martensitic base and the high hardness of the carbides are, as a rule, achieved by alloying with chromium (usually about 14-18% by weight) or with chromium and nickel (Ni-Hard alloys), followed by mandatory heat treatment [2-4]. The use of such materials and technologies in the production of grinding media at domestic and many foreign enterprises is still complicated by the high cost; therefore, there remains the problem of developing alternative, less expensive materials and technologies for producing grinding products whose quality fully meets the requirements of the domestic market. This article presents experimental data on the changes in the mechanical properties of cast samples of white cast iron containing 1% chromium after modifying treatment with a new complex boron-barium ferroalloy, in comparison with samples of cast iron of the same chemical composition modified with ferroboron and ferrosilicobarium separately. A significant increase in the hardness and impact strength of low-chromium white cast iron was achieved by the authors of [5,6], but the strong result in those cases was provided not only by the modifying treatment of the cast iron but also by subsequent heat treatment. It is known from [7] that a noticeable improvement in the microstructure and an increase in the hardness of low-alloy white cast iron up to 61 HRC can be achieved by introducing about 0.5% Cu into the melt. In studies [8,9], the authors used boron to modify complex-alloyed wear-resistant white cast irons of various compositions, which made it possible to achieve a significant improvement in the morphology of the primary carbides by changing their chemical composition. However, the above methods also involved additional alloying of the cast iron with a complex of alloying elements (Mn, Ni, Mo, Ti, Al, and Nb), which had a significant impact on cost. Works [9-12] were devoted to studies of the effect of complex modification with rare-earth-material (REM)-containing modifiers of various compositions on the microstructure and properties of chromium cast iron.
Moreover, the complex modifiers used in these studies often had a rather complex composition: in addition to REM, they contained elements such as Ti, Mg, V, Bi, N, K, Zn, Na, and Al. There have also been positive results of scientific research on the modification of white cast iron with boron-containing complex modifiers [13,14], proving the effectiveness of the use of boron to improve the structure and working properties of cast iron. At the same time, the authors noted an improvement in the morphology and distribution of the carbides in the structure, from reticular to a more compact form, and, as a result, an increase in the hardness and impact strength of the metal. The results of wear resistance tests under dry sliding conditions of chromium cast irons modified with boron microdoses [15] also confirmed the strong modifying effect of boron in increasing the wear-resistant qualities of cast parts. However, researchers noted the high reactivity of boron: part of the boron was consumed immediately after introduction into the liquid melt for deoxidation and denitrogenation of the metal, while the remaining amount, called "active" boron, had a direct modifying effect and microalloyed the matrix [16]. Therefore, the nature of the effect of boron on the structure and properties of cast iron was strongly influenced not only by the amount of the additive introduced, but also by the initial content of elements such as oxygen and nitrogen in the cast iron. Thus, from the analysis of modern modifiers used to improve the wear-resistant qualities of white cast iron, it follows that the high efficiency of most of these additives is provided by the complex effect of a group of active elements, a combination of their alloying and modifying effects on the metal, and, in some cases, the use of mandatory heat treatment. It is known that boron in white cast iron greatly increases its hardenability, increases microhardness and overall hardness, promotes the formation of dispersed hardening refractory particles in the structure that increase wear resistance, and reduces the casting temperature of the alloys by bringing the chemical composition of the alloy closer to the eutectic [17]. However, despite all the listed positive qualities of boron as a modifier, the increased initial content of harmful impurities such as oxygen, nitrogen, and sulfur in ordinary-quality cast iron, smelted at the vast majority of foundries, significantly limits the effectiveness of its use in pure form. At the same time, it is a well-known fact that barium, along with calcium and magnesium, is one of the most effective deoxidizers, desulfurizers, and modifiers of cast iron and steel [16,18,19]. Barium in the composition of modifying additives leads to the refinement of non-metallic inclusions in the structure of the processed alloy, the homogenization of the liquid metal, a decrease in the liquidus temperature, and an increase in technological plasticity [20]. One of the important properties of barium in the composition of complex modifiers is its ability to reduce the reactivity of the other active elements of the additive [16] and to noticeably increase the duration of their action, enhancing and prolonging the overall modifying effect [21]. From the analysis carried out, it follows that, to date, there are no published research results on the practical use of ferroalloy additives containing both boron and barium.
Studies of the complex modifying effect of boron and barium on the wear-resistant properties of such an affordable and cheap structural material as low-chromium white cast iron are therefore of undoubted interest and practical value. In this study, the task is to compare the mechanical properties of cast samples of white cast iron containing 1% chromium after modifying treatment with a new complex boron-barium ferroalloy with samples of cast iron of the same chemical composition without modifying treatment, as well as with samples modified with ferroboron and ferrosilicobarium separately. Materials and Methods In a Tamman laboratory furnace, white low-chromium cast iron of the following chemical composition (wt.%) was melted by remelting: 3.18% C, 0.66% Si, 0.63% Mn, 1.05% Cr, 0.03% S, 0.32% P, the rest Fe. After complete melting of the charge, the temperature of the metal in the crucible was brought up to 1500 °C, and the modifier was introduced by immersion on a steel wire rod. The temperature of the metal in the furnace was monitored using a stationary tungsten-rhenium thermocouple VAR-5 (VR-5)/VR-20. Three series of melts were conducted, in which ferroboron FeB12, ferrosilicobarium FeSi60Ba20, and the new complex boron-barium ferroalloy were used separately as modifiers. Their compositions and consumption ranges are given in Table 1. To study the effect of the introduced modifiers on the mechanical properties of low-chromium cast iron containing 1% Cr, the following samples were prepared: − low-chromium cast iron modified with carbothermal ferroboron, with an additive dose of 0.08% by weight of the liquid metal (sample 1); − low-chromium cast iron modified with ferrosilicobarium, with an additive dose of 0.05% by weight of the liquid metal (sample 2); − low-chromium cast iron modified with the boron-barium ferroalloy, with an additive dose of 0.14% by weight of the liquid metal (sample 3). The optimal dosages of these additives for modifying low-chromium cast iron were established in previous studies [26]. A sample of unmodified low-chromium cast iron was used as a reference (sample 0). The doses of the modifiers and the estimated residual content of the main modifying elements in the cast iron are presented in Table 2. Next, after a 30-s holding, ø20 mm × 100 mm samples were cast by the lost foam casting method. To ensure vacuum in the mold, a UK 25-1.6 compressor unit was used. The castings were knocked out after a two-minute holding in the mold at a temperature of approximately 600 °C. From the middle part of the castings obtained, according to the scheme shown in Figure 1, samples of ø20 mm × 10 mm were cut out to measure hardness on a macro Vickers hardness tester Wilson VH1150 (Buehler, Lake Bluff, IL, USA), to conduct metallographic studies on a Zeiss AxioVert 200MAT light microscope (Carl Zeiss, Göttingen, Germany), and to determine the mechanical properties (tests for impact-dynamic action and abrasion). The surface roughness of the samples prepared for hardness measurements was approximately Rz160. The hardness of the samples was measured on the surface along the cutting plane at four points, at regular intervals, at a distance of 3 mm from the edge.
The preparation of the metallographic samples was carried out on the equipment and according to the methodological guidelines of the Metalog Guide of the Struers A/S company (Rødovre, Denmark), following methodology E [27]. To reveal the microstructure, the microsections were etched with a 3% alcohol solution of HNO3. The elemental analysis of the samples was carried out using energy-dispersive analysis (EDA) and COMPO (backscattered electron) methods on a JEOL JSM-7600F scanning electron microscope (JEOL Ltd., Akishima, Tokyo, Japan) with an OXFORD X-Max 80 detector (Oxford Instruments PLC, Abingdon, UK) in SEI (secondary electron) mode, at an accelerating voltage of 15 kV. The obtained images were analyzed with the Aztec Version 3.1 software (Oxford Instruments PLC, Abingdon, UK). The samples were tested for abrasion on a high-temperature friction machine (high-temperature tribometer, CSM Instruments SA, Peseux, Switzerland) in one-way rotation mode (Figure 2) under the following conditions: the temperature was 25 °C; the humidity was 70%; the test medium was air; the counter-body was a ball; the counter-body material was Al2O3; the counter-body diameter was 6 mm; the linear velocity was 10 s−1; the load was 5 N; and the distance was 300 m.
At the same time, a ball of aluminum oxide was chosen as the counter-body because this substance has high hardness and strength, is widely distributed in nature, and makes up a significant share of the processed ores (basalts, granites, clays, feldspar, corundum, etc.). In the Earth's crust, aluminum is the most prevalent metal and the third most abundant among all known elements. Taking into account the composition and hardness of the low-chromium cast iron being tested, a distance of 300 m was considered sufficient for a preliminary assessment of the wear-resistant properties of the surface of the samples. The remaining parameters, such as temperature, sliding speed, and load on the sample, were selected to approximate the actual working conditions of grinding bodies in a drum mill during dry grinding. The test for cyclic impact-dynamic action was carried out using an impact tester (CemeCon AG, Würselen, Germany). The samples under study were subjected to a series of impacts at a constant frequency of 50 Hz using a WC-Co hard alloy ball with a diameter of 5 mm at loads of 500 and 700 N. The number of impacts was 10^5. Abrasion and impact tests were also carried out on the cut surface of the samples. To determine the parameters of the deformation and wear tracks, a Veeco WYKO NT1100 optical profilometer (Veeco Instruments Inc., New York, NY, USA) was used. For comparison, all the above tests were repeated on samples of unmodified low-chromium cast iron of similar composition under the same conditions, but without modifying treatment. Results A sample of unmodified low-chromium cast iron (sample 0) was used as a reference. The microstructure of the unmodified and modified low-chromium cast iron samples is shown in Figure 3. As can be seen from Figure 3a, the metal base of the low-chromium unmodified cast iron is pearlite + ledeburite + cementite: the area occupied by pearlite is about 75%, ledeburite about 5%, and cementite about 20%. Analysis of the metallographic images of the samples showed refinement and a more uniform distribution of the structural components in the modified cast irons. As a result of modification, the type of carbide distribution changed from dendritic branched to compact and more isolated. The pearlite grains in the boron-barium ferroalloy-treated cast iron also tend to a compact spherical shape (Figure 3d). As can be seen from Figure 3b, there was a redistribution of the areas of the structural components of the cast iron: the area occupied by pearlite was about 80%, ledeburite 10%, and cementite about 10%. There was a noticeable refinement of the pearlite and an increase in the amount of ledeburite. In the structure of the cast iron modified with ferrosilicobarium (Figure 3c), the area of the ledeburite eutectic increased: the area occupied by pearlite decreased to 60%, ledeburite increased to 30%, and cementite remained unchanged (about 10%). The ratio of the areas occupied by the structural components when using the boron-barium additive is the same as when modifying with FeSi60Ba20 (pearlite about 60%, ledeburite 30%, cementite about 10%); however, there is a noticeable change in the morphology of the structural components from dendritic to a more compact form (Figure 3d). Figure 4 shows the quantitative ratio of the structural components. Quantitative analysis was performed using Thixomet Pro software (version 0031, Thixomet, Saint Petersburg, Russian Federation).
As can be seen from Figures 3 and 4, depending on the nature of the modifier, the ratio of the structural components and the character of the structure change. After modification, the proportion of cementite decreased in all samples, which, in all likelihood, may explain some increase in the impact resistance of the modified samples. The ratio of structural components in samples 2 and 3 is almost the same, but the morphology of the structure is different: in sample 3, the structure is more dispersed, the cementite lamellae are thinner, and the pearlite zones are more spheroidal and smaller. We assume that these differences in the structure of the samples can have a significant impact on the wear-resistant properties of the alloy, especially on the resistance to external impact. For each of the samples, four surface hardness measurements were carried out at the points shown in Figure 1. The results of measuring the hardness of the low-chromium cast iron samples before and after modification on the Wilson VH1150 hardness tester were converted into Rockwell units for simplicity and are presented in Table 3. To determine wear resistance, dry friction tests of the metal samples were carried out on the high-temperature tribometer. The wear patterns and profiles of the tested surfaces are shown in Figures 5-7 below. To analyze the changes in the mechanical properties of the samples of low-chromium cast iron modified with boron- and barium-containing additives, we tested the samples for abrasion and cyclic impact-dynamic action. Figures 6 and 7 show the nature of the wear and present the parameters of the wear tracks of the unmodified and modified low-chromium white cast iron samples. Figure 7 shows that the surface wear of samples 1-3 is less than that of sample 0. The smallest groove depth is obtained on the surface of sample 2, which can be explained by its increased hardness. However, the profile of the worn surface has a rather pronounced jagged relief, which can indicate the tearing out of hard particles, carbides, and silicides, which have a coarse structure, from the matrix. Sample 0 is subjected to the greatest wear, which is generally explained by its coarse structure and the uneven distribution of carbides in the metal base. When the surface is destroyed in this case (Figure 7, sample 0), after wear of a certain layer of the matrix, brittle fracture of the surface occurs under the action of friction, with chipping and spalling out of hard carbide particles. Under intensive impact-abrasive loading of a sample made of low-chromium hypoeutectic cast iron, the surface first cracks under the influence of the dynamic stress from impact combined with the micro-cutting action of the abrasive, and then the material is removed from the destroyed surface by surface friction forces [29].
We assume that the relatively low abrasive wear resistance of sample 0 is also explained by the uneven distribution of carbides over the volume due to accelerated cooling of the casting surface and the later, slower crystallization of its central region. The structure of such cast iron is dominated by carbides of the Me3C type, which have an orthorhombic crystal lattice [9]. Therefore, under these conditions, the metal has uneven hardness over the section: a relatively hard but brittle surface layer and a softer, loose core, as shown in previous studies [28]. The decrease in metal hardness and density in the central parts of the casting is also facilitated by the presence of graphite inclusions with a diameter of up to 15 µm. Thus, the positive effect of boron- and barium-containing modifiers on the shape, size, and distribution of the structural components of low-chromium white cast iron becomes quite obvious, which also makes it possible to predict, with a high degree of probability, the beneficial effect of these additives on the performance characteristics of the cast iron.
Sample 3 has the smallest width of the wear track, and its groove depth is smaller than that of samples 0 and 1. The worn surface relief of this sample is the smoothest, with the smallest protrusions and depressions, which is also noticeable in Figures 5-7. This indicates the high efficiency of modifying low-chromium cast iron with the boron-barium ferroalloy for obtaining a favorable, more uniform structure that can effectively resist friction and wear. The coefficients of sliding friction for all the samples under study have fairly close values (about 0.8-0.9), which is typical for as-cast surfaces of materials similar to cast iron, without mechanical processing. The high coefficient of sliding friction, as well as its gradual increase with the distance travelled (Figure 8), indicates a rather intensive character of surface wear: the particles destroyed and separated from the sample surface, having high hardness, a coarse fraction, and a sharp shape, exert a noticeable abrasive effect on the surface. In partially graphitized half wear-resistant cast irons, finely dispersed graphite inclusions can dissipate external and internal stresses and fill the voids formed during carbide delamination [30].
The results of the experiments showed that sample 3 has the lowest wear-surface roughness, Ra ≈ 1.9, whereas for unmodified cast iron this indicator is about 2.7, and for the other modified cast iron samples it is 2.4 (sample 1) and 2.1 (sample 2). At the same time, the values of the greatest profile deepening Rv, in increasing order, are: sample 2, 6.39 µm; sample 3, 6.69 µm; sample 1, 7.22 µm; sample 0, 8.15 µm (Figure 9). The smallest width of the groove profile at the base (0.37 mm) and at half depth (0.26 mm) also belongs to sample 3, modified with the boron-barium modifier, while the worst indicators, for the unmodified sample, are 0.5 mm and 0.36 mm, respectively. Although the results obtained do not allow us to identify an obvious relationship between the coefficient of friction, the roughness of the worn surface, and the degree of wear, we consider it quite appropriate to use data from visual inspection of the fracture sites and the geometric parameters (dimensions and shape) of the groove relief on the worn surfaces for a preliminary assessment of the effect of the additives on the wear-resistant properties of the cast iron. A diagram with the values of the coefficient of sliding friction, the surface roughness, and the groove sizes on the surface of the samples is shown in Figure 9. It is known that unmodified low-chromium cast iron has rather low impact resistance because its main structural components, i.e., pearlite, ledeburite eutectic, and carbides, have a coarse structure and are unevenly distributed over the volume. The resistance of cast iron to impact loads is also significantly reduced by the large-lamellar shape of the cementite grains in the pearlite and the elongated shape of the ledeburite eutectic grains, which serve as stress concentrators under high impact-dynamic loads [31]. Under dynamic impact, this leads to the formation of microcracks at the grain boundaries and to subsequent fatigue failure of the metal when the critical state is reached. The uneven distribution of large carbides in the volume of the metal, among which there are elongated needle-shaped grains, also significantly reduces the impact resistance of the alloy as a result of crack initiation and further chipping. As the rather low impact resistance of unmodified low-chromium cast iron has been shown previously [31], the tests for cyclic impact-dynamic action were carried out only on samples 1-3, in order to compare their performance with each other. Figures 10 and 11 show the nature of the dents on the surface of the samples made of modified low-chromium cast iron and their profiles after cyclic impact-dynamic action on the impact tester at counter-body loads of 500 and 700 N.
(Figure 11: three-dimensional images of the dents on the surface of the samples made of low-chromium cast iron at the test load of 700 N; (a) sample 3, (b) sample 2, (c) sample 1.) Figure 12 shows that, by the nature of the surface deformation, sample 1 has the highest impact resistance, and the lowest index belongs to sample 2; the dent profile obtained on the surface of sample 3 occupies the middle position. The deformation profile on the surface of cast iron sample 2 has the smallest depth; however, traces of cracks, chips, and metal delamination are visible on the surface of this sample. This indicates the formation of a coarse, inhomogeneous metal structure near the sample surface (Figure 13). The results of the tests for cyclic shock-dynamic impact at a load of 500 N showed that the lowest roughness (Ra ≈ 1.21 µm) and crater depth (Rv ≈ 3.87 µm) belong to sample 2. The worst results are those of sample 1 (Ra ≈ 2.66 µm and Rv ≈ 7.13 µm), and sample 3 occupies the middle position by these indicators (Ra ≈ 2.14 µm and Rv ≈ 6.09 µm). In terms of the diameter of the craters on the samples, the best result was achieved when modifying the cast iron with the boron-barium additive (d ≈ 0.45 mm), whereas the other samples showed very similar results: sample 1, 0.53 mm; sample 2, 0.56 mm. In tests with a load of 700 N, craters with a greater depth but a smaller diameter at the base were formed on the surface of all three samples; the values of both dimensions are quite close for all three samples, at Rv ≈ 8 µm and d ≈ 0.5 mm. Upon visual inspection of the surface of the samples after impact, numerous cracks and chips were found on the surface of sample 2 (Figure 13), which may indicate reduced plastic properties of the metal. Inspection of the surface of the remaining samples did not reveal any noticeable signs of destruction (Figure 14).
The linear dimensions of the craters (depth and diameter at the base) formed on the surface of the modified cast iron samples during the impact tests are shown in Figure 15. Figure 16 shows that when the iron is modified with ferroboron, noticeable refinement of the pearlite colonies takes place in the structure of the cast iron, and their shape tends to be compact and spherical, which should favorably affect its impact resistance. A positive role in improving the impact resistance of samples 1 and 3 is played by the refinement of the structure (Figure 3) and by the spheroidizing effect of boron on the structural components (Figure 17). In Figure 17, in the lower part of the multilayer EDS (energy dispersive spectroscopy) map, dark spots of graphite inclusions of compact shape are visible in the structure of sample 1. The high concentration of carbon in this area, recorded on the element-distribution map and in the total spectrum of the map, confirms the assumptions about the origin and nature of these inclusions. The secondary phases, consisting of graphite, secondary precipitates, and residual austenite, are able to prevent delamination of the matrix and the carbides and to minimize their damage under external impact-abrasive action [30]. The compact shape of the graphite, provided by the inhibitory effect of boron on its growth, weakens the working cross-section of the matrix to a lesser extent and does not have the strong notching effect that promotes the development of high stress concentrations around graphite spheroids [30,32]. Summing up, it should be noted that sample 2, made of low-chromium cast iron modified with ferrosilicobarium (0.05% by weight of the liquid metal), has a dense fine-grained structure due to an increase in the degree of supercooling and the initiation of many additional crystallization centers, which ensures the highest hardness index (59 HRC). However, the nature of the destruction of the sample surface under cyclic impact-dynamic action and abrasion indicates inherent brittleness (Figures 6c and 10b), probably caused by the formation of dimensionally heterogeneous pearlite colonies in the structure, some of which formed as coarse conglomerates (Figure 3c).
The surface hardness of sample 1, made of cast iron modified with carbothermal ferroboron (0.08% by weight of the liquid metal), increased to 56 HRC (7 units higher than that of the unmodified cast iron), which can largely be attributed to the pronounced carbide-stabilizing effect of boron. Impact-dynamic testing of sample 1 shows the best results in terms of toughness (Figure 12), which indicates an increased impact resistance of the metal due to the formation of a structure with compact, granular components. According to the results of the dry abrasion test, the wear resistance index of this cast iron is also noticeably higher than that of the unmodified cast iron. Modification of low-chromium hypoeutectic cast iron with carbothermal ferroboron thus made it possible to improve the structure by increasing the degree of supercooling of the cast iron during crystallization and refining the structural components, as well as by preventing the nucleation and growth of graphite inclusions.
The obtained profilograms show that a sample of this cast iron offers the best resistance to impact loads. When cast iron of the same composition was treated with ferrosilicobarium FeSi60Ba20, the modifying effect was manifested in the greatest increase in metal hardness, but with a slight decrease in its strength and ductility. The treatment of cast iron of the experimental composition with the complex boron-barium additive led to a noticeable strengthening of the sample. When tested by dry friction, the groove track on the worn surface is relatively shallow and narrow, and the peak-to-trough magnitude of the ridges is lower than for the other samples (Figures 5-7). After cyclic impact, no traces of brittle fracture (cracks, chips, or potholes) were found on the surface of the sample (Figures 10, 11 and 13). Introducing the boron-barium modifier into low-chromium cast iron (sample 3), owing to the complex modifying effect of boron and barium, makes it possible to achieve an optimal increase in both the hardness and abrasion resistance of the cast iron and its strength characteristics. This ensures a stable cast iron hardness of 57 HRC (17% higher than that of unmodified cast iron). It should be noted that, due to this optimal combination of properties, sample 3, made of low-chromium cast iron treated with the boron-barium modifier, quite effectively resisted both abrasive loads (Figure 7) and impact-dynamic action (Figure 12) during the tests. The active components of the boron-barium additive relieve stresses in the cast iron matrix, which consists of pearlite with improved ductility and impact strength. All these factors contribute to improved resistance to impact fatigue cracking and impact wear. Conclusions The use of the new complex boron-barium modifier for the out-of-furnace treatment of low-chromium cast iron, due to the combined action of both active elements, boron and barium, makes it possible to obtain an optimal ratio of hardness and strength of the metal, which significantly affects its performance, without the additional use of expensive alloying components. It has been experimentally determined that, for the modification of hypoeutectic low-chromium white cast irons used for casting wear-resistant parts, it appears most appropriate to use a complex boron-barium modifier containing both active elements at the same time, boron (8.88%) and barium (3.92%), in an amount of 0.14% of the mass of the liquid metal. Boron and barium in this ratio contribute to the formation of a compact graphite shape in the structure, desulfurize the alloy, and refine the structural components. It is shown that the introduction of the boron-barium modifier into low-chromium cast iron increases its hardness to 57 HRC, which is 17% higher than that of unmodified cast iron. The dimensions of the wear track on the sample surface when modifying the cast iron with the boron-barium modifier are noticeably reduced: the depth by 18%, the profile width at the base by 26%, and the width at half the profile depth by 28%. This is the best result in the group of modifiers studied. Measurement of the profile dimensions and examination of the imprint character after the shock-dynamic test showed that the alloy modified with the complex boron-barium additive has higher plasticity.
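As a quick cross-check of the hardness figures quoted in these conclusions (our own arithmetic, not part of the paper): sample 1 was reported as 56 HRC and "7 units higher" than the unmodified iron, implying a baseline of about 49 HRC, against which the 57 HRC of the boron-barium sample is indeed roughly a 17% gain.

```python
# Consistency check of the quoted hardness gain (our arithmetic).
base = 56 - 7                         # implied unmodified hardness, ~49 HRC
gain = (57 - base) / base * 100       # boron-barium sample vs. baseline
print(f"hardness gain: {gain:.1f}%")  # ~16.3%, consistent with the quoted 17%
```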
Thus, it was experimentally established that, among the additives used in this study for modifying low-chromium white cast iron, the optimal combination of hardness, plasticity, and resistance to dry abrasion was achieved using the new boron-barium modifier, which proves its effectiveness in improving the quality of wear-resistant castings. Author Contributions: D.A., A.I. and S.K. conceived and designed the concept of the research and the methodology. Y.C. and V.K. performed the material investigations and contributed to editing the paper. S.A. performed the metallographic studies, further data analysis, and interpretation. D.A., A.I. and S.K. supervised and substantively revised all the work and the article text. All authors participated in writing the paper. All authors have read and agreed to the published version of the manuscript. Funding: This research is funded by the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan (Grant No. AP08855477 «Developing and implementing the technology of producing the «Nihard» class cast irons with increased operational properties for mining and processing equipment parts»). Institutional Review Board Statement: Not applicable.
11,197.2
2022-07-06T00:00:00.000
[ "Materials Science", "Engineering" ]
Design of Terminal-independent Service Mobility in Wired and Wireless IP Network Terminal-independent service mobility is the capability to keep using a service while switching between terminals, without service disconnection. It allows users to continue an active session across different terminals. Recently, the Internet has kept evolving toward a ubiquitous environment. Users may use many terminals for a particular IP (Internet Protocol) network service and choose the best terminal among their available terminals depending on the situation. In this paper, we propose a TCS (terminal control server) concept for terminal-independent service mobility. The TCS enables users to choose the terminal that best fits the ongoing service among their available terminals. For implementation, the TCS manages various information, including user terminals and service subscriptions. This information is mainly obtained from existing systems such as AAA (Authentication, Authorization, Accounting), including the authentication procedure and user profiles. Introduction Mobility is one of the main concerns in IP networks. Internet standards organizations have put significant effort into implementing IP mobility, for example MIPv4 (Mobile IP) (1,2), MIPv6 (Mobile IPv6) (3), PMIP (Proxy Mobile IP) (4-6) and NEMO (Network Mobility) (7). Although many different mobility approaches have been proposed, most of them provide solutions based only on terminal-dependent technologies. When using an Internet service, especially a multimedia service, users may want to use a more suitable terminal for better service quality (8-12). In general, the devices using an IP service are terminals that can use an Internet service, for example a mobile PC (Personal Computer), an IPTV (Internet Protocol television) set, and so on. These terminals provide users with IP services using their own functions suitable for the corresponding services. There is a tendency to merge or combine the functions of different terminals to produce multifunctional terminals. For example, multifunctional terminals include a mobile PC that enables a user to enjoy various IP services such as VoD (Video on Demand) and IPTV, and a mobile phone capable of Internet browsing. However, even multifunctional terminals that provide various functions have their own primary functions and additional secondary functions. For example, a terminal suitable for a voice-based service is preferably compact and light, like a mobile phone, so that a user can easily carry it, whereas a terminal suitable for a video service requires a large screen and high-performance speakers in consideration of visual and sound effects. The satisfaction of the user is proportional to how well a function of the terminal is specialized for the service, and thus the user tends to want a terminal specialized for a particular service, with better performance than the secondary function of a multifunctional terminal. For example, if a user is watching an IPTV service on a mobile phone while on the move, the user might want to keep using the IPTV service by changing terminals from the mobile phone to a television with a bigger screen and clearer image quality when such a television is available.
As mentioned above, when a user wants to keep using the same service through a different terminal, conventionally the current service is stopped and the user must then access the same service from the new terminal. With such a method, however, a different service session has to be generated for the newly started service, so service continuity is not guaranteed. In this paper, we design a TCS (terminal control server) which allows users to keep using a currently used IP service with a different terminal that the user owns, without disconnection.

Terminal Control Server
The TCS is composed of several sub-modules, each with its own functions. Figure 1 shows the functional architecture of the TCS. The TCS includes an AAA (authentication authorization accounting) interface unit, a terminal interface unit, a service/user property management unit, a terminal property management unit, a service/user status management unit, a terminal status management unit, a service proxy unit, and a terminal-shifted service control unit.

The AAA interface unit receives unique information with fixed properties about the terminal authorized by the AAA and about the user of the terminal, and the two property (service/user and terminal) management units store and manage the unique information received from the AAA interface unit. This unique information with fixed properties can be classified into terminal unique information and service and user unique information.

The terminal unique information includes general information about the terminal, such as a terminal identifier, a terminal name, a terminal serial number, the model of the terminal, and the form of the terminal, and information about the terminal's functions, such as the built-in protocol list, NIC maximum access speed, supportable bandwidth, CPU performance, operating system version, compatible codecs, and remote control capability. These terminal-related properties are managed by the terminal property management unit, which enables the terminal-shifted service control unit to recognize which services can be provided to a given terminal.

The service and user unique information includes user personal information (e.g., identification, contact numbers, subscribed services, bills, and the like), general information about the services (i.e., service feature information), and information about the user's terminals (a list of owned terminals). Such service and user related properties are stored and managed by the service/user property management unit, which infers connections between pieces of information, thereby obtaining the user's taste in services, pattern of use, and service preferences.

Thus, the terminal-shifted service control unit can learn through the service/user property management unit which services the user has subscribed to, the kinds and features of the services provided to the user, which valid terminals the user owns, and the user's service preferences.

The terminal interface unit receives status information with variable properties about an authorized terminal and the currently used service from that terminal, and the two status management units store and manage the status information received from the terminal interface unit (a sketch of this information model follows).
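To make the division between the two classes of information concrete, here is a minimal Python sketch of the TCS information model. All class and field names are hypothetical illustrations chosen for the example, not identifiers from the paper or any implementation.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TerminalProperties:
    """Fixed ("unique") information, received via the AAA interface unit."""
    terminal_id: str
    model: str
    protocols: List[str]            # built-in protocol list
    max_access_speed_mbps: float    # NIC maximum access speed
    codecs: List[str]               # compatible codecs
    remote_controllable: bool       # remote control capability

@dataclass
class TerminalStatus:
    """Variable status information, received via the terminal interface unit."""
    terminal_id: str
    online: bool
    ip_address: str
    access_network: str
    available_bandwidth_mbps: float
    cpu_load: float                 # varies according to use of the service

class TerminalControlServer:
    """Stores and manages both classes of information (cf. Fig. 1)."""
    def __init__(self) -> None:
        self.properties: Dict[str, TerminalProperties] = {}  # property mgmt unit
        self.status: Dict[str, TerminalStatus] = {}          # status mgmt unit

    def register_properties(self, p: TerminalProperties) -> None:
        self.properties[p.terminal_id] = p

    def update_status(self, s: TerminalStatus) -> None:
        self.status[s.terminal_id] = s
```

Keeping the fixed and variable records in separate stores mirrors the paper's separation between the property management units (fed by the AAA) and the status management units (fed by the terminals themselves).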
The service/user status management unit manages status information, such as the user's location information and service session information, which varies according to the use of the service. It also performs reset or synchronization on the currently used service, thereby managing the status of the current service. (Fig. 1: Functional architecture of TCS.) Through the service/user status management unit, the terminal-shifted service control unit can recognize which service can be provided to the terminal based on the current location of the user and the status of the currently used service, and whether a shift between terminals is available for that service.

The terminal status management unit manages the status information of each terminal that the user has registered. The status information of the various registered terminals includes variable items such as the access network, access port information, the terminal address, and terminal resource availability, which vary according to use of the service. This variable information is periodically obtained from the corresponding terminal and updated.

The terminal-shifted service control unit primarily controls the terminal-shifted service. When a terminal that is using a service requests the TCS to shift the service to a different terminal, the terminal-shifted service control unit refers to the (unique and status) information of the user and the service, provides the requesting terminal with a list of terminals available for the service, and generates service session information which it sends to the terminal selected for the shifted service.

The service proxy unit requests the CP (contents provider) to send the service that the requesting terminal is using, receives service data in response, modifies the service data in accordance with the specification of the terminal to be provided with the service, and then sends the modified service data to that terminal. The service proxy unit acts as a buffer between the terminal that requests the shift of the service and the terminal to which the service is shifted.

Terminal-shifted Service
For terminal-independent service mobility, the TCS manages pieces of information classified into information with fixed properties and information with variable properties, according to their variability during use of the IP services. The information with fixed properties may be general information about the user and information about the subscribed service; generally, such information is already possessed by a service provider and utilized for the AAA or the like. Meanwhile, the information with variable properties may be information regarding the terminal or service session information, which changes according to the use of the service. Unique information with fixed properties can be obtained through the AAA, which is a subscriber terminal authentication system, and the information with variable properties can be obtained from each terminal. Figure 2 shows the concept of how a service is shifted from one terminal to another while a user is using it.
Referring to Figure 2, the terminal-shifted service may be classified into two types according to whether the TCS modifies the service data: one uses mainly the terminal-shifted service control unit, and the other uses the service proxy unit of the TCS.

In the first type, the TCS obtains the unique and status information of each user-owned terminal, registered by the user through an IP network, and retains this information. When the user requests to shift the IP service from terminal 3 to terminal 2, the TCS controls terminal 2 to issue a service request to the CP using the service status information obtained from terminal 3. The CP then provides terminal 2 with a service to which the corresponding service session is applied, and stops the service to terminal 3.

In the second type, the terminal-shifted service is provided using the service proxy unit. The TCS requests the CP to send service data when there is a terminal shift request from a user, after the user has registered the service and the information of the terminal in use through an IP network. The TCS then receives the requested service data, processes it in accordance with the specification of the terminal that is to be provided with the service, i.e. terminal 2, and sends the modified data to that terminal.

Figure 3 shows terminal operation with the TCS. When a terminal is turned on, it sends authentication information to the AAA system for the service provider's authentication process; the AAA system authenticates the terminal and the IP service based on this information and notifies the terminal of the completion of authentication. Once the authentication is verified, the user terminal is authorized to use the IP service. Such authentication procedures are defined by the unique authentication processes of each service provider. After the authentication process, the terminal, informed of authentication completion by the AAA, periodically sends information about its current status, including its address, network information, and service usage information, to the TCS. The TCS stores and manages the status information received from individual terminals and can thereby learn the current status of each terminal (a sketch of such a periodic status report appears below).

Figure 4 shows the service shift procedure using the TCS. Referring to Figure 4, an ongoing service is shifted from a previous terminal to a new terminal. First, the authentication process predetermined by the service provider is performed, and when authentication is complete according to the predefined procedures, the AAA sends the TCS the obtainable unique information, such as the properties of the terminal, the properties of the terminal's user, and information about the terminals the user holds.
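As a rough illustration of the periodic status reporting described for Figure 3, the following Python sketch posts a terminal's variable status to a TCS endpoint at fixed intervals. The URL, payload fields, and values are invented for the example; the paper does not specify a wire format.

```python
import json
import time
import urllib.request

TCS_URL = "http://tcs.example.net/status"  # hypothetical TCS endpoint

def report_status(terminal_id: str, period_s: int = 60) -> None:
    """Periodically push the terminal's variable status to the TCS after
    authentication, as described for Figure 3."""
    while True:
        payload = {
            "terminal_id": terminal_id,
            "ip_address": "203.0.113.7",                        # placeholder
            "access_network": "wlan0",                          # placeholder
            "service_usage": {"service": "iptv", "position_s": 512},
        }
        req = urllib.request.Request(
            TCS_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)   # the TCS stores this in its status unit
        time.sleep(period_s)
```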
The authenticated terminal can access the service (including network services and value-added IP services), and the terminal sends status information about the current service and the terminal itself to the TCS. These procedures can be regarded as preparation for use of the service by each terminal. When this preparation is completed, the service/user property management unit and the terminal property management unit of the TCS store and manage the unique information of the service and user and the unique information of the terminal, respectively. Likewise, the service/user status management unit and the terminal status management unit store the status information of the service and user and the terminal status information, respectively. The terminal then issues a service request to the CP, and the CP provides the requested service to the terminal.

Thereafter, if the user needs to continue the service on a new terminal while using it on the previous terminal, the user can request the terminal-shifted service from the TCS. A particular button or switch may preferably be used for this procedure, initiating a specific function that interrupts the operation of the terminal that is using the service.

On receiving the request for the terminal-shifted service, the TCS searches each property management unit, makes a list of terminals that have been registered by the user and are available for the current IP service (the list includes only available terminals, excluding offline terminals such as terminal 1 in Figure 2), and sends the list to the previous terminal on the user side (a sketch of this candidate-selection step appears below). In response, the user chooses a terminal from the list and sends the selection to the TCS.

The TCS then sends the selected terminal a remote start message in accordance with the characteristics of the selected terminal and remotely starts it (i.e. the new terminal). Alternatively, the TCS sends the user a starting guide message so that the user can start the desired terminal manually. However, if the new terminal has already been turned on, the TCS is aware of this and the step is skipped. The preparation for use of the service is then performed on the new terminal to which the service is to be shifted. When the new terminal is ready to use the service, the TCS sends a preparation completion message to the previous terminal.

The previous terminal that receives the message sends the status information of the currently used service and its own terminal status information to the TCS. The service status information to be sent may include the kind of current service and the method of accessing it, cookie information including service log-on information and temporary information generated in the course of using the service, and service usage information including the time when the service was used or the progress of the service. In addition, the terminal status information may include the IP address obtained by the terminal, information about the currently used access network, and resource usage information including bandwidth and CPU occupancy rate. Hereafter, the rest of the procedure can be separated into two types of process according to whether the service proxy unit of the TCS is used.
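The candidate-list step of the Figure 4 flow can be illustrated with a small Python function. The dictionary keys and the codec-based capability test are assumptions made for the sketch, not details specified in the paper.

```python
def handle_shift_request(properties, status, user_terminals,
                         required_codec, prev_id):
    """Return the terminals registered by the user that could take over
    the ongoing service: the previous terminal is excluded, offline
    terminals are listed only if they can be started remotely, and the
    terminal must support the service (here approximated by codec)."""
    candidates = []
    for tid in user_terminals:
        if tid == prev_id:
            continue
        props, stat = properties.get(tid), status.get(tid)
        if props is None or stat is None:
            continue  # not registered or never reported status
        if required_codec not in props["codecs"]:
            continue  # cannot render the ongoing service
        if stat["online"] or props["remote_controllable"]:
            candidates.append(tid)
    return candidates

# Example: terminal 1 is offline and not remotely startable, so only
# terminal 2 is offered as a shift target for the service on terminal 3.
properties = {
    "t1": {"codecs": ["h264"], "remote_controllable": False},
    "t2": {"codecs": ["h264", "aac"], "remote_controllable": True},
    "t3": {"codecs": ["h264"], "remote_controllable": False},
}
status = {
    "t1": {"online": False},
    "t2": {"online": False},
    "t3": {"online": True},
}
print(handle_shift_request(properties, status, ["t1", "t2", "t3"],
                           "h264", prev_id="t3"))  # -> ['t2']
```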
In the first type of process, the TCS receives the status information mentioned above, generates service session information regarding the current service for the new terminal, and sends the generated service session information along with a service start request to the new terminal. In response, the new terminal issues a service request to the CP based on the service session information received from the TCS, and the CP provides the corresponding service to the new terminal.

Thus, following the procedures described above, when, for example, a user who is doing Internet shopping using a desktop personal computer (previous terminal) at home wants to go out, the user can continue to browse the Internet outside the home by changing the terminal to a laptop computer (new terminal).

In the second type of process, the service proxy unit of the TCS requests the CP to send the service that the previous terminal is currently using. When the CP sends the service data to the TCS in response, the service proxy unit processes the service data in accordance with the status of the new terminal. The service data may be processed to adjust its bandwidth or codec, or to apply offset information. The processed service data is then sent to the new terminal.

Thereafter, the service provided to the previous terminal can be stopped according to the service policies of the CP, and the stopping time may vary with those policies. After all the processes above, the user can use the service through the new terminal. For example, a user who is using a video streaming service on a PDA outside the home can change to an IPTV at home and continue the video streaming service without disconnection.

Conclusions
Due to the continuous and explosive growth of the wireless Internet, mobility has become a fundamental attribute of the current communication environment. However, mobility management technology still depends on terminal mobility. Moreover, it is predicted that the IoT era will arrive in the not too distant future, bringing many changes to the current communication environment. In the end, it will evolve into an environment that allows individual users to receive desired services easily, anytime and anywhere, through a variety of terminals, rather than receiving various services through a single terminal.

In this paper, we proposed the TCS concept and designed a functional architecture of the TCS to provide terminal-independent service mobility, which enables users to use the terminal-shifted service among their terminals on the Internet. Accordingly, a user can conveniently change terminals and use a seamless service without disconnection. This capability encourages the user to choose the terminal most suitable for the ongoing service from among the terminals the user owns and has registered, and thus the user can obtain greater satisfaction from the service.

Also, most of the fixed property information (i.e. service, user, and terminal related information) among the items managed by the TCS can be obtained through existing systems such as the AAA. Terminal-independent service mobility can therefore be implemented simply by adding a TCS system to a network and adjusting the software of the existing terminals and the AAA authentication system.
4,186
2018-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Investigative properties of CeO2 doped with niobium: A combined characterization and DFT studies The catalytic capacity of ceria mainly stems from a facile switch of the Ce oxidation state from +4 to +(4 − x). While various experimental and computational studies pinpoint the reduction chemistry of the Ce atom through the creation of oxygen vacancies, the analogous process when the ceria surface is decorated with cations remains poorly understood, and where such results are available, synergy between experiment and first-principles calculation is scarce. Niobium materials are evolving and their use in catalysis is being widely investigated owing to their high surface acidity and their thermal and chemical stability. This study reports the structural and electronic properties of various configurations of mixed Ce–Nb oxides and elaborates on the factors that underpin potential catalytic improvements. Evaluations of the samples through X-ray diffraction (XRD), Fourier transform infrared (FTIR), N2 adsorption–desorption, scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), and thermogravimetric (TGA) analyses are examined and discussed. First-principles density functional theory (DFT) calculations provide structural features of the Ce–Nb solutions at low Nb concentration via computed atomic charge distributions. Contraction of the lattice parameter after Nb doping was confirmed by both the XRD and DFT results. SEM analysis reveals particle growth at a loading of 50 wt%. FTIR results establish the Ce–Nb–O bond at 1,100 cm−1, and the TGA analysis confirms the thermal stability of Nb-doped ceria. The tetrahedral O atoms demonstrate an increase in electronegativity, which in turn facilitates the catalytic propensity of the material because these O atoms will exhibit a higher affinity for adsorbed reactants. Cerium oxide (CeO2) displays a noticeable band-gap narrowing after Nb doping, confirming a possible improvement in catalytic behavior. The 4d states of niobium pentoxide (Nb2O5) are found to fill the 4f states of CeO2 around the Fermi energy level, promoting electron excitation in the CeO2. The electronic, structural, and thermal characteristics reported herein indicate promising catalytic applications of niobium-promoted ceria.

Introduction
Doped or pure metal oxides have been widely deployed to improve the catalytic activity and selectivity of both oxidation and reduction reactions through substitution of the cation of the base metal oxide with the cation of a second metal oxide [1]. Niobium pentoxide (Nb2O5) exhibits amorphous or crystalline structures, possesses a wide band gap, good chemical stability, and efficient electron injection, and serves efficiently as an acid catalyst for the production of a wide array of chemicals and fuels [2,3]. Defected Nb2O5, synthesized either through incorporation of ad-atoms or of oxygen, exhibits promising catalytic capacity in niche industrial operations [4,5]. The promotional effect of Nb2O5 as a second metal or metal oxide in operations such as catalysis is gaining attention due to the relative ease of its reaction with other metal oxides to form mixed metal oxide phases with a complex structure [6]. This complex structure could result from the distortion of niobium oxide deposited on the parent metal, or originate from the actual disruption of the chemical bonds at the surface of the parent oxide [1,7].
In either case, electron transfer occurs, and the active centers are the dopants or the oxygen atoms close to the dopants, leading to a material behavior that departs from that of the pure metal oxides [1]. The synthesis of a truly homogeneous mixed oxide with a profound improvement in catalytic activity is a challenging endeavor, and the presence of other materials such as impurities in the precursors might also contribute to observed changes in performance [1]. On the other hand, cerium oxide (CeO2) enjoys various applications due to its inherent electrical, chemical, and physical properties [8]. Stand-alone CeO2 has been effective in numerous catalytic applications, most prominently the semi-hydrogenation of alkynes to alkenes [9] and water and gas splitting reactions [10]. Due to its profound redox properties, efficient oxygen storage ability, and low cost, CeO2 has been deployed in mixed oxide catalysts for enhanced performance [11]. Among the various metal oxides, CeO2 offers high interaction with Nb2O5 [7]. As a reducible oxide, CeO2 can switch from the Ce4+ oxidation state to a reduced Ce(4−x)+ form by interaction with the Nb5+ of the niobium oxide [12]. This phenomenon induces a notable effect on the chemistry of the mixed oxide formed. The selective catalytic reduction (SCR) performance of CeO2 displays significant improvement when doped with niobium oxide owing to the strong acidity and redox ability of the latter [12]. A 100% selectivity to N2 was achieved for NOx reduction in the NH3-SCR reaction when cerium was doped with niobium [13][14][15]. To provide atomic-scale insight into the observed catalytic improvement of mixed Ce-Nb oxides, it is essential to comprehend the governing structural and electronic attributes of these configurations. For instance, doping metal oxides that display dissimilar cation-oxygen bond lengths affects their catalytic behavior; a robust evaluation of such a scenario, however, necessitates materials modeling via density functional theory (DFT) calculations [1]. Experimentally, the various available spectroscopy techniques can be used to pinpoint structural changes from the pure metal oxides after doping. The literature provides a detailed account of the pure oxide forms of CeO2 and Nb2O5 [16]. However, to the best of our knowledge, studies that report properties of the mixed Ce-Nb oxides from both experimental and DFT perspectives are rather scarce. Thus, this article aims to evaluate the energetic, atomic, and electronic features of clean, defect-free, and doped CeO2 (111) and Nb2O5 (111) structures using DFT, and to confirm the formation of the mixed oxides using material characterization techniques. The main motivation behind this work is to illustrate attributes that may improve the catalytic activity of Nb-Ce-O materials.

Materials preparation
Cerium-niobium mixed oxide samples were prepared by the incipient wet-impregnation method. The cerium(IV) oxide (Sigma Aldrich; 99.99%) was first dried in an oven for 2 h at 200°C to drive out moisture. Nb2O5 (Sigma Aldrich; 99.99%) served as the precursor, with the Nb2O5 loading varied among 1.5, 2.5, 3.0, and 50 wt%. An aqueous solution of the precursor was added to a measured mass of CeO2. The resulting mixtures were heated at 70°C for 30 min on a magnetic stirrer at a stirring rate of 150 rpm. The samples were then dried overnight at 100°C. Calcination was carried out under a flow of air for 4 h at 550°C with a heating rate of 10°C/min.
Powder X-ray diffraction (XRD) patterns were recorded on an X-ray diffractometer (PANalytical X'Pert3 Powder, Philips, Holland) equipped with CuKα radiation (λ = 1.540598 Å) and operated at 45 kV and 40 mA. The scanning was performed within a 2θ range of 10-80° with a step size of 0.02°/min. A Quantachrome instrument (NOVAtouch NT 2LX-1, USA) performed the N2 adsorption and desorption experiments. All the catalyst samples were outgassed at 300°C for 3 h, and the measurements utilized liquid nitrogen (N2) at a temperature of −196°C. The Brunauer-Emmett-Teller (BET) surface areas were computed by employing partial pressures (P/P0) in the range 0.02-0.35, which are reliable based on the obtained C constant values [17]. The Barrett-Joyner-Halenda (BJH) method was used to determine the average pore size and the pore size distribution, with a P/P0 range of the desorption branches of 0.80-0.35. The Fourier transform infrared (FTIR) analysis was performed with an FTIR spectrometer (Jasco Corporation, Japan). Before analysis, the catalyst samples were mixed with potassium bromide (KBr; Sigma Aldrich) that had been dried in an oven at 105°C in order to eliminate possible water interference. The spectra were recorded in the range 4,000-400 cm−1. A multifunctional general-purpose scanning electron microscope (SEM; JEOL JSM-6010PLUS/LA) integrated with an energy dispersive spectroscope (EDS) was used to perform surface and elemental analyses. The instrument is equipped with an auto-coater which enables the sample to be coated with gold particles before analysis. To investigate the thermal stability and the composition of the pure and prepared samples, thermogravimetric analysis (TGA) was carried out with a TGA Q50 V20.10 Build 36 analyzer. The temperature was changed from 0 to 650°C at a heating rate of 10°C/min under a flow of nitrogen.

Computational details
The CASTEP code performs all the structural optimizations and energy estimations [18] within the DFT framework to examine the properties of CeO2 and Nb2O5. The generalized gradient approximation with the Perdew-Burke-Ernzerhof exchange-correlation functional is employed to obtain precise structural parameters. On-the-fly pseudopotentials describe the interaction between the valence electrons and the ion cores. The energy convergence tolerance is set at 0.001 eV/atom. The maximum force, maximum stress, and maximum displacement are set at 0.03 eV/Å, 0.05 GPa, and 0.001 Å, respectively. To describe the on-site Coulomb interactions for the Nb 4d and Ce 4f states, effective Hubbard U parameters of 3.0 and 4.5 eV are employed, respectively, a choice of U values that follows previous theoretical investigations [19,20]. The plane-wave cutoff energy amounts to 320 eV, and a k-point sampling of 2 × 2 × 1 was generated by the Monkhorst-Pack scheme. The doping is performed by substituting a Ce atom of the CeO2 with Nb, and the electronic changes are examined. An appropriate choice of dopant concentration is essential to preserve the catalytic activity of the materials: a low dopant concentration is preferred over a high one because the recombination rate of electron/hole pairs is decreased and reaction rates such as photodegradation are improved [21]. Thus, computations are performed by replacing one Ce atom with one Nb atom.

XRD patterns
The diffraction patterns observed for the referenced pure CeO2 and all the prepared catalysts are shown in Figure 1.
Conventionally, a doped oxide displays a structure similar to that of the host oxide [1]. The patterns observed are typical of the pure fluorite cubic CeO2 structure (JCPDS 43-1002) [22]. All except one (the 50 wt% loading) of the XRD results show an absence of the peaks associated with Nb-containing species, which might be due to either low loading or high dispersion. In our experiment, the Nb loading exceeded the optimum solubility weight of Nb on CeO2 (1.4 wt%); accordingly, the absence of Nb peaks is ascribed to high dispersion [23]. The dispersed Nb2O5 phase on the CeO2 is in the NbOx form, and its presence as metallic Nb cannot be excluded. This signifies that the crystallinity of the CeO2 is not distorted by Nb incorporation [1,22,24,25]. However, at 50.0 wt% Nb loading, the effect of loading is observable in the formation of two new peaks from Nb2O5 around 22° and 50°. The lattice parameters were computed from the XRD peaks in order to investigate the doping effect. The interplanar spacing was evaluated with Bragg's law, equation (1), the lattice parameter was calculated with equation (2), and the crystallite sizes of the samples were computed with the Scherrer equation, equation (3), by utilizing the full width at half maximum (FWHM):

λ = 2d sin θ (1)
a = d(h² + k² + l²)^(1/2) (2)
D = kλ/(β cos θ) (3)

where d signifies the interplanar spacing, λ is the wavelength, a is the lattice parameter, h, k, and l denote the Miller indices, β is the FWHM, k is the Scherrer constant, D represents the crystallite size, and θ stands for the Bragg or diffraction angle. The equations were applied to the peak of highest intensity, the CeO2 (111) plane (a numerical sketch of these calculations appears below). The results obtained are shown in Tables 1-3. The pure CeO2 gave the highest lattice parameter, 5.410 Å, in good agreement with previous experimental values of 5.410 Å [26,27] and computed values of 5.490 Å [28]. The doped samples assume a lower lattice parameter. This decrease is associated with the contraction of the CeO2 lattice and the possible substitution of Ce4+ by Nb5+ [29,30], owing to the lower ionic radius of niobium (0.64 Å) compared with cerium (0.97 Å), which induces the contraction of the crystal lattice [31]. By contrast, the incorporation of neodymium (Nd) into CeO2 expands the crystal lattice owing to the higher ionic radius of Nd3+ (1.109 Å) compared to Ce4+ (0.970 Å) [10]. The shifts in lattice parameter are vital in confirming the formation of a doped oxide [1]. This assertion is further corroborated by the crystallite sizes listed in Table 3. Although significant variations in the crystallite sizes of the samples were not prevalent (maximum difference of about ±1.8 nm), this difference further supports the formation of doped oxides, in agreement with the finding of Amarsingh [31] that the substitution of pentavalent ions such as Nb in CeO2 does not initiate a significant reduction in the crystallite size. Thus, pure CeO2 affords the highest size of 26.154 nm. Additionally, the minute reduction in the crystallite size suggests that the incorporation of Nb into the CeO2 crystal inhibits the grain growth of the CeO2, as later shown by the SEM results [25]. However, this assertion remains valid only at low loading: increasing the Nb loading was found to affect the CeO2 phase, with the formation of new peaks at the highest loading.
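As a minimal numerical sketch of equations (1)-(3), the Python snippet below evaluates the interplanar spacing, cubic lattice parameter, and Scherrer crystallite size from an illustrative CeO2 (111) peak; the peak position and FWHM are invented stand-ins, not the measured data of Tables 1-3.

```python
import math

lam = 1.540598      # Cu K-alpha wavelength, angstroms
two_theta = 28.55   # CeO2 (111) peak position, degrees (illustrative)
beta_deg = 0.32     # FWHM of the peak, degrees (illustrative)
h, k_, l = 1, 1, 1  # Miller indices of the (111) plane
K = 0.9             # Scherrer constant

theta = math.radians(two_theta / 2)
d = lam / (2 * math.sin(theta))             # Bragg's law, eq. (1)
a = d * math.sqrt(h**2 + k_**2 + l**2)      # cubic lattice parameter, eq. (2)
beta = math.radians(beta_deg)
D = K * lam / (beta * math.cos(theta))      # Scherrer equation, eq. (3)

# For these illustrative inputs: d ~ 3.12 A, a ~ 5.41 A, D ~ 26 nm,
# consistent in scale with the values reported for pure CeO2.
print(f"d = {d:.3f} A, a = {a:.3f} A, D = {D / 10:.1f} nm")
```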
The slight increase in the intensity at the (311) plane, corresponding to the peak at 2θ = 56.37°, confirms that the amorphous Nb2O5 is incorporated into the CeO2 crystals and that the CeO2 content is decreased, compensating for the minute decrease in crystallinity [13,32].

FTIR analysis
The FTIR results indicate the possible presence of the Nb phases on the CeO2, as shown in Figure 2 (FTIR spectra of the pure and doped samples). All the Nb-doped CeO2 samples show the absence of the carboxylic-group C=O stretching at 1,700 cm−1 and the C-O asymmetrical stretching at 1,380 cm−1 that are typical of Nb-containing species, indicating that the samples contain only Nb2O5 [17]. The spectra of the Nb2O5 exhibit the surface Nb=O stretches in the region of 1,050-948 cm−1 [33]. The Nb-O peak at 929.52 cm−1 denotes the stretching vibrations of Nb-O in NbO6 units, and the Nb-O peak at 880 cm−1 is due to angular vibrations [33]. In addition to the OH groups on pure CeO2, the characteristic stretching vibrational peak associated with the Ce-O bond is observed around 590 cm−1 [34]. The characteristic peaks associated with the lattice vibrations of metal-oxygen bonds are observed for the Nb-doped samples with increasing loading. The peak observed around 1,123 cm−1 is typical of the Ce-Nb-O spectral signature, which affirms the formation of the doped mixed oxide sample [35]. With additional loading, the Nb=O peak disappeared (at 50 wt% Nb-CeO2), and the observed broadened Nb-O peak shows the incorporation of the niobium ions into the CeO2 lattice, resulting in little distortion [23,31,36]. In comparison to the pure CeO2 spectrum, the Ce-O stretching mode vibration at around 590 cm−1 in the other samples shifted to a lower wavenumber, indicating the weakening of this bond in favor of the formation of the Ce-Nb-O linkage [31,37]. The intensity of the Nb-O peak identified at 929.52 cm−1 in the pure Nb2O5 sample is lessened as the Nb2O5 loading is increased from 1.5 wt% to 3.0 wt%, showing that further addition of Nb might facilitate the reduction of NbO to metallic Nb [13]. Thus, the interaction of Nb with CeO2 is strengthened, while the interaction between Nb-O and CeO2 is weakened. Also, higher Nb loading enhances both the Brønsted acidity and the strong Lewis acidity; these acidic sites serve as the active centers for surface-assisted reactions and are associated with the Nb-O and Nb-O-Nb bonds present in NbOx species. Increasing the Nb loading beyond the dispersion capacity will limit the formation of the Brønsted and Lewis acid sites [13]. However, as observed at 50 wt% loading, the broad and intense NbO peak reappears, suggesting that a very high loading diminishes the catalytic activity of the cerium-niobium mixed oxide: exceeding the monolayer coverage leads to the formation of multilayer inactive NbOx.

N2 adsorption-desorption
The isotherm plots and the structural properties of the prepared samples obtained with the N2 adsorption-desorption analysis are depicted in Figure 3 and Table 4, respectively. All the prepared samples show typical type IV isotherms associated with capillary condensation in mesopores. The pure samples show the H1 type hysteresis loop, and the addition of Nb to CeO2 preserves the H1 type, consistent with an earlier report involving Nb-CeO2-doped catalysts [38]; a sketch of how BET surface areas are extracted from such isotherms appears below.
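The following Python sketch illustrates the linearized BET analysis behind the surface areas of Table 4, over the stated P/P0 window of 0.02-0.35; the isotherm points are invented for the example, not measured data.

```python
import numpy as np

# Illustrative isotherm points in the BET-valid window P/P0 = 0.02-0.35;
# v is the adsorbed N2 volume in cm^3(STP)/g (invented values).
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
v = np.array([9.8, 11.0, 11.9, 12.8, 13.7, 14.7])

y = 1.0 / (v * (1.0 / p_rel - 1.0))           # linearized BET transform
slope, intercept = np.polyfit(p_rel, y, 1)    # (C-1)/(vm*C) and 1/(vm*C)

vm = 1.0 / (slope + intercept)                # monolayer capacity, cm^3(STP)/g
C = slope / intercept + 1.0                   # BET C constant

N_A = 6.022e23          # molecules per mole
sigma = 0.162e-18       # cross-section of one adsorbed N2 molecule, m^2
V_molar = 22414.0       # molar volume at STP, cm^3/mol
S_bet = vm * N_A * sigma / V_molar            # specific surface area, m^2/g
print(f"vm = {vm:.2f} cm^3/g, C = {C:.0f}, S_BET = {S_bet:.1f} m^2/g")
```

With these invented points the fit returns roughly 45 m^2/g, in the same range as the values reported in Table 4, and a C constant in the range that makes the BET window reliable.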
The pore diameters obtained for all the samples indicate a mesoporous structure. The pure cerium oxide gives a pore diameter of 3.4 nm, in agreement with the values of 3.24-3.89 nm reported earlier for ceria prepared via a precipitation method [39]. Likewise, the pore volume obtained for all the samples is approximately constant despite increasing Nb loading, suggesting the samples entail a narrow particle size distribution [40]. However, observable differences arise in the evaluated BET surface area. The Nb2O5 gives the lowest surface area of 38.451 m2/g, while a value of 50.437 m2/g is obtained for CeO2. Loading the CeO2 with Nb decreases the surface area from 43.815 m2/g at 1.5 wt% Nb-CeO2 to 40.833 m2/g at 50 wt% Nb-CeO2. The minimal reduction in BET surface area observed among the doped materials might have evolved from a blocking effect on the sample pores due to the incorporation of Nb into the samples' inter-particle volume, which is supported by the slight decrease in pore volume [13]. Similar results have been reported for nanostructured CeO2 doped with platinum [40].

SEM and EDS analyses
The elemental composition, morphology, and shape of the prepared samples are analyzed with the SEM. The SEM images and EDS mapping of the cerium oxide and the niobium-doped cerium oxide are displayed in Figures 4 and 5. The pure cerium particles appear fine and well dispersed, with uniform morphology and without any specific shape [24,41]. This fine structure suggests that CeO2 is able to withstand the operational temperature employed during the calcination process [39]. The morphology of the pure Nb2O5 discloses fine particles together with agglomerated and sponge-shaped particles. The doped samples present a structure similar to that of the pure CeO2 sample at the low to moderate loadings of 1.5-3.0 wt% Nb. Figure 4 demonstrates the EDS mapping of the prepared samples. CeO2 and Nb2O5 reveal homogeneous, well dispersed atoms of the constituent elements. The additions of 1.5 and 2.5 wt% Nb do not show any presence of Nb atoms in the mapping analysis (Figure 4c; only the 2.5 wt% loading is shown). However, as confirmed by the EDS profiles later, Nb atoms are present at these loading values; their non-detection can be attributed to the very high dispersion and incorporation of the Nb atoms into the CeO2 [42]. Upon increasing the loading to 3 wt%, Nb atoms are detected by the EDS mapping (though not too conspicuously), and the atoms are well dispersed over the CeO2 surface. This further confirms that the non-detection of the Nb atoms at lower loading cannot solely be attributed to a complete absence of Nb atoms. The Nb atoms become clearly visible at 50 wt%, with a high concentration at the corner of the sample, which might result from Nb agglomeration due to the high loading. Additionally, the high dispersion observed at low Nb loading, together with the reduction in the calculated crystallite sizes, indicates the strong interaction between the CeO2 and Nb2O5 oxides of the doped samples [43]. The EDS profiles confirm the presence of only cerium, niobium, and oxygen in all samples. For the pure samples, the identified oxides are CeO2 and Nb2O5, confirming the purity of the starting materials. Weak peaks associated with Nb atoms are observed on the doped CeO2 samples.
Additionally, the quantitative EDS analysis reveals that an increase in the loading weight gives a corresponding increase in the amount of Nb deposited on the CeO2; the % atomic composition predicts the empirical formula of each doped sample.

TGA
Figure 6 shows the TGA measurements obtained for the pure CeO2 and Nb2O5 samples. In order to determine the thermal effects on the doped oxides, only the samples with 3.0 and 50 wt% are considered. Pure CeO2 shows a mass loss of about 1.4%, and equilibrium is reached at about 350°C. This loss is ascribed to the H2O present on the sample surface [10] and is in good agreement with the mass loss of about 1.3% at 350°C reported earlier [10]. Similarly, the doped samples show a mass loss of about 1.1%, suggesting that the crystallinity of CeO2 is preserved after the doping [44]. The impregnation of Nb in CeO2 is found to affect the thermal stability of the doped samples, reducing the mass loss as the temperature is raised [45,46]. This is corroborated by the enhanced thermal stability obtained for the pure Nb2O5 samples. Two stages of mass loss are observed for the Nb2O5: stage 1 (about 1.5% loss), over the interval 50−450°C, encompasses the elimination of the adsorbed H2O, and stage 2 (6.2% loss), between 450 and 600°C, signifies the loss of structural H2O [44,47,48]. As observed, after about 450°C all the samples assumed a steady value; thus the calcination temperature was kept below 600°C.

CeO2 structure and charge distribution analysis
The effect of Nb content on CeO2 has been evaluated with the DFT calculations. The electronic interaction and distribution are examined on the CeO2 (111) structure. CeO2 exhibits a fluorite crystallographic structure with the Ce atoms located at the face-centered cubic (fcc) positions, while the O atoms prefer the tetrahedral sites. Addition of the Nb atoms preserves the fluorite structure, a result previously confirmed by our XRD analysis. Figure 7 shows the optimized geometries of the bulk and surface structures of both the perfect CeO2 (111) and the niobium-doped structures; the optimized bulk and surface geometries of Nb2O5 (111) are likewise shown. The lattice parameter of the perfect CeO2 stands at 5.464 Å and slightly shrinks to 5.284 Å after Nb doping. This reduction trend agrees with the XRD prediction, where the lattice parameter decreases with increasing Nb loading. Although our calculation gives a more sizeable reduction in the lattice parameter than the XRD values, both results confirm that Nb doping reduces the lattice parameter [1]. Table 5 lists the bond lengths of the considered structures. Generally, the Ce-O bond distance is used to validate the possible expansion or contraction of the crystal lattice of doped materials, and the ionic radius of the dopant is essential in determining the behavior of the bond length. Earlier reports have provided contrasting observations: CeO2 doped with Yb, Er, and Y presents similar Ce-O and dopant-oxygen distances, while for Gd, Sm, and La dopants the observed dopant-oxygen distances were higher than the Ce-O distance. It would be expected that dopants with a higher ionic radius than Ce should induce a higher dopant-oxygen distance, and dopants with a smaller ionic radius a smaller bond distance. This is not always true, because similar distances were observed for Ce-O and Yb-O, Er-O, and Y-O, despite Yb, Er, and Y possessing higher ionic radii [49].
Perfect CeO2 shows a Ce-O bond length of 2.370 Å, in close agreement with the 2.352 Å and 2.340 Å reported previously [10,27]. The Ce-O distance in the perfect CeO2 is larger by 0.073 Å in the bulk as compared to the surface. In comparison with the doped structures, the Ce-O distance shortens by 0.011 Å for the bulk, while an increase of 0.066 Å is observed for the surface (in reference to the experimentally measured value for bulk CeO2). The Nb-O bond lengths are likewise listed in Table 5. After doping, the tetrahedral O atoms carry a larger negative charge, which is expected to improve the catalytic tendency of the material since the O atoms will assume more affinity for adsorbed reactants [51]. Simultaneously, after doping, the Ce atoms are less positively charged (i.e., more electrons are present), suggesting that the Nb atom induces further reduction of the two Ce atoms bounding the tetrahedral O atoms, rendering those Ce atoms more reactive. The edge and corner Ce atoms surrounding the bulk CeO2 are likewise reduced relative to the perfect bulk structure. This is a desirable result for improved catalyst performance, supporting the possible switching of the Ce oxidation state between Ce4+ and Ce(4−x)+ [10]. Analysis of the atomic charges on the surface structures presents a trend similar to that observed for the bulk: the charge on the Ce atoms of the perfect surface exceeds that of the bulk by 0.07 e, while that on the O atoms is higher by 0.03 e. Doping the surface yields a lower negative charge for all the O atoms on the surface; this will expedite their removal as oxygen molecules during reduction reactions [9]. The TDOS and the PDOS are calculated to explicate the electronic states of the prepared samples and to provide information germane to the contribution of the orbitals around the Fermi energy level (Ef = 0 eV, represented by the dotted lines) [52]. Figure 9 shows the DOS of both the bulk and surface un-doped CeO2. The band width is examined between −5 and −20 eV for all structures. The pure bulk CeO2 reveals a concentration of electrons in a narrow band within the 0-5 eV region, and a relative electron distribution at higher energy. Our calculated band gap of Nb2O5 amounts to 3.037 eV, in very good accord with the corresponding experimental measurement of 3.09 eV [21]. The electron concentration in the doped sample, Nb-CeO2, shifts to a higher energy level and demonstrates a band gap of 1.086 eV (that is, a reduction of 0.614 eV), in close agreement with the analogous experimental value showing a reduction of 0.59 eV in the band gap for the Pb-CeO2 system [53]. This reveals that the separation between the highest occupied and lowest unoccupied states is shortened, accounting for improved electron excitation into the conduction band (a toy sketch of reading a gap off a calculated DOS appears below). The observed band-gap reduction shows that Nb will significantly promote the optical and catalytic properties of the mixed oxide [21]. Figure 10 portrays the PDOS of the prepared samples. The O 2s and Ce 3p states lie at the lowest energy band (not shown), and the O 2p and Ce 4f states interact around the Fermi energy level with a hybridization of the Ce 5d and Ce 4f states [54]. The bandwidth of the O 2p band is 4.2 eV and the Ce 4f states exhibit a spacing of 1.5 eV, which agrees with the computational values of 4.5 and 1.4 eV for O and Ce, respectively [28].
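As a toy illustration of how such a band gap can be read off a calculated TDOS, the Python sketch below locates the zero-DOS window around the Fermi level; the arrays are synthetic stand-ins, not CASTEP output, and the gap width is chosen merely to echo the ~1.09 eV value discussed above.

```python
import numpy as np

# Synthetic TDOS: states everywhere except a gap from -0.2 to +0.886 eV
energies = np.linspace(-5.0, 5.0, 2001)                       # eV, E_F = 0
dos = np.where((energies > -0.2) & (energies < 0.886), 0.0, 1.0)

occupied = energies[(energies <= 0) & (dos > 1e-3)]           # states below E_F
empty = energies[(energies > 0) & (dos > 1e-3)]               # states above E_F
vbm, cbm = occupied.max(), empty.min()                        # band edges
print(f"band gap ~ {cbm - vbm:.2f} eV")                       # ~1.09 eV here
```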
Our computed value for the O 2p-Ce 5d band gap shows a band separation of 5.7 eV, in good agreement with earlier reports. The Nb 4d states are found to be active at the Fermi level, with some contribution of Nb 2p states around the conduction band, leading to strong interaction between the O 2p, Nb 4p, and Nb 4d states. There is an overlap between the Nb 4p states and 2p states in the region of 2-6 eV. The Nb-doped CeO2 shows that, in addition to the occupied Ce 4f states around 2 eV, more 4f electrons that are absent in the pure CeO2 now concentrate around 4-6 eV (segment marked with an arrow). The presence of excess electrons can be initiated either through the creation of oxygen vacancies or through the addition of a dopant; such excess electrons occupy the Ce 4f states, localizing on individual Ce atoms [28]. This implies that the f-state occupation observed here originates from the addition of Nb atoms. In addition, the Nb 4d electrons that are conspicuous in the pure Nb2O5 sample are diminished after doping, indicating possible electron transfer to the Ce 4f states and thus the reduction of Ce4+ to Ce3+. The availability of Ce3+ is known to promote catalytic reactions, and the excess electrons gained by the 4f states would give rise to n-type conductivity and charge carriers in the band [55]. A similar analysis involving peak shortening and disappearance was used to confirm electron transfer between atoms of similarly doped systems [56].

Conclusion
Nb2O5 is found to improve the properties of CeO2 in terms of the narrowed band gap and the electronic states of the Ce and O atoms. Experimental XRD results confirm the formation of new peaks associated with Nb at high loading, and the EDS analysis detects the presence of Nb. There is a narrow distribution of the crystallite sizes of the prepared samples, with a reduction in BET surface area as the Nb loading increases. TGA analysis indicates that the calcination temperature should be limited to below 600°C. DFT calculations support the experimental observation of the decrease in the lattice parameter of CeO2 when an Nb atom is incorporated. The Hirshfeld charges reveal the reduction of the Ce atoms after Nb doping, with the maximum reduction in the Ce atoms nearest the Nb atom. The electronegativity values of the tetrahedral O atoms are increased by 0.01 e after Nb doping; this will promote the catalytic tendency of the material since the O atoms will exhibit a higher affinity for adsorbed reactants. Analysis of the TDOS successfully reproduces the experimentally measured values for pure CeO2 and Nb2O5. Nb doping is found to improve the semiconducting nature of CeO2 through the reduction of its band gap by ∼0.60 eV. The PDOS identifies the filling of the Ce 4f states by electrons from the Nb 4d states, which is expected to improve the catalytic capacity of CeO2.

Funding information: This study has been supported by a start-up grant from the College of Engineering at the United Arab Emirates University, UAEU (grant number: 31N421). Computations were carried out at the high-performance cluster (HPC) of the UAEU.

Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
7,203
2021-12-28T00:00:00.000
[ "Materials Science" ]
FAIR and open multilingual clinical trials in Wikidata and Wikipedia This project seeks to conduct language translation of metadata labels for research publications, attribution data, and clinical trials information to make data about medical research queryable in underserved languages through Wikidata and the Linked Open Web. The project has the benefit of distributing content through Wikipedia and Wikidata, which already have an annual userbase of a billion users and established actionable standards for practicing diversity, inclusion, openness, FAIRness, and transparency in program development. The impact will be localized access to basic research information in various Global South languages, integrating with existing community efforts toward the same goal. Although Wikidata development in this direction seems inevitable, the cultural and social exchange required to establish global multilingual research partnerships could begin now with support, rather than later as a second-phase effort to include the developing world. Wikipedia and Wikidata are established forums with an existing active userbase for multilingual research collaboration, but the research practices there are still immature. By applying metadata expertise through this project, we will elevate the current amateur development with more stable Linked Open Data compatibility with English-language databases. Using the wiki distribution and discussion platform to develop the global conversation about data sharing will set good precedents for the trend of global research collaboration.

About
The School of Data Science at the University of Virginia received funding through an open call from the Wellcome Trust for this research and development proposal titled "FAIR and open multilingual clinical trials in Wikidata and Wikipedia". This document combines the concept note, the full application, and supporting details from that application. Our intent in publishing this is to make the proposal available with free and open copyright licensing for other researchers to reuse and remix as they wish. The project objective is to import clinical trials metadata from ClinicalTrials.gov into the Wikidata platform, to curate the data using Wikidata workflows, and to present the overall process as a model for using the Wikipedia ecosystem to share and remix data. If the project is successful to the extent of our wishes, then our hope is that all sorts of researchers and the public will access and use clinical trials data for both traditional and new purposes. Although we believe that researchers who currently use clinical trials data will benefit from its curation through the Wikipedia platform development process, we also seek to promote access to this medical information to new audiences, both within conventional analysis of clinical trials and in new and unexpected contexts. The new audiences we anticipate are those whom we already know to browse Wikipedia's medical content, including researchers who prefer to access information outside of the English language and non-researchers including students, journalists, and policy makers who previously would not have considered seeking this data were it not accessible through the familiar Wikipedia.
By making the data much more accessible and also available for Wikipedia's style of crowdsourcing, we hope that others will develop and reuse this data, including by linking trials to papers, people, and organizations; visualizing the trials with charts and maps; general curation of trials by keyword tagging or concept disambiguation; and language translation of technical terms in structured data collections such as Wikidata. We are sharing this text in alignment with Wikipedia community values of openness and in a contemporary social context where sharing proposals is uncommon but which we wish were more routine. At the time of publishing this proposal we have developed the project but have not yet completed it. This document only presents the proposal; we will publish our methods, results, and the overall model in a later paper. The term of the project has been extended due to COVID-19, and we changed some of the project focus from the original proposal in response to the pandemic. The text here does not account for our response to COVID.

The project's aims are as follows:
1. By default, adopt the established Wikipedia and Wikidata publishing and engagement practices for openness, FAIRness, documentation, receiving feedback in permanent public forums, and collaboration
2. Contribute to the documentation of the position of Wikipedia and Wikidata in the Linked Open Data ecosystem, particularly emphasizing university participation in the import and export of research metadata in the Wikimedia platform, and collect impact metrics for doing so
3. Within Wikidata, contribute to the WikiCite project, which seeks to enrich data around citations and metadata, including PubMed research papers and subsets of CrossRef, ORCID, and ClinicalTrials.gov; explore and document possibilities to ingest non-United States clinical trial databases
4. Identify the set of terms and concepts necessary to perform and visualize queries of medical research data, for example "clinical research sites in a given country with the highest trial completion rates in infectious disease research" (see the query sketch after this list)
5. Translate those terms to languages including Hindi, Bengali, and Swahili, to the level of quality established as a norm by existing local community participants in Wikipedia and Wikidata
6. Use Wikipedia and Wikidata's native metrics reporting processes to measure the impact on users and the engagement of peer reviewers
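The kind of query the aims describe can be sketched against the public Wikidata SPARQL endpoint, as in the Python snippet below. P3098 is assumed here to be the Wikidata property for the ClinicalTrials.gov identifier; verify the property ID against current Wikidata documentation before relying on it.

```python
import json
import urllib.parse
import urllib.request

# List a few clinical trial items in Wikidata, with labels resolved in
# the pilot languages (falling back to English where none exists yet).
query = """
SELECT ?trial ?trialLabel ?nct WHERE {
  ?trial wdt:P3098 ?nct .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "hi,bn,sw,en". }
}
LIMIT 10
"""

url = ("https://query.wikidata.org/sparql?format=json&query="
       + urllib.parse.quote(query))
req = urllib.request.Request(url, headers={"User-Agent": "trials-demo/0.1"})
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

for row in data["results"]["bindings"]:
    print(row["nct"]["value"], row["trialLabel"]["value"])
```

A query of this shape is the building block for the richer examples in aim 4: joining trials to sites, countries, and completion status simply adds further property constraints to the WHERE clause.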
Who is the project coordinator?
Name: Lane Rasberry
Organization: University of Virginia
Department: School of Data Science
Division: Center for Ethics and Justice
Email: <EMAIL_ADDRESS>
ORCID: 0000-0002-9485-6146

How will you evaluate the success of your activities?
This project will use Wikipedia's own established processes for monitoring and evaluating university projects that develop and publish general reference information in Wikipedia and Wikidata. We will evaluate this project in these established ways:
1. Content metrics - report the standard publishing metrics as measured by Wikimedia's own native metrics suite for publishers
2. Diversity and inclusion - partner with established Wikimedia community organizations; confirm their oversight and approval
3. Impact metrics - report audience readership as measured by Wikipedia's own native metrics suite for users (a sketch of pulling these readership numbers appears below)
4. Quality review - university student researchers will evaluate and publish an evaluation of the source research metadata and the translation process
5. Bias evaluation - we will publish our subjective opinions on the bias we identify and its causes. One obvious source of bias is the availability of open data, as much research indexed in PubMed and ClinicalTrials.gov is not compliant with recommended metadata standards. This project favors institutions which apply FAIR principles, and we will identify these practices.

Wikipedia as a publishing, technology, and community platform continually introduces processes for content development and evaluation. In 2012, with the establishment of the Wiki Education Foundation, there was a major cultural shift to make Wikipedia compatible with university education and research.
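As one concrete way to report readership, the sketch below pulls daily pageview counts from the public Wikimedia REST API, one component of the native metrics suite referenced above; the article title and date range are illustrative.

```python
import json
import urllib.request

# Daily user (non-bot) pageviews for one English Wikipedia article,
# January 2021, via the Wikimedia pageviews REST API.
url = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
       "en.wikipedia/all-access/user/Clinical_trial/daily/"
       "2021010100/2021013100")
req = urllib.request.Request(url, headers={"User-Agent": "metrics-demo/0.1"})
with urllib.request.urlopen(req) as resp:
    items = json.load(resp)["items"]

total = sum(day["views"] for day in items)
print(f"Clinical_trial: {total} views in January 2021")
```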
Today, that precedent has developed into a suite of open evaluation tools for measuring audience size, levels of engagement, use of fact-checking processes, and a culture applying metrics to perform critical review of Wikipedia's quality. These measurements and processes establish a precedent for this project to follow in doing publishing and content development, operationalizing ethics in digital governance, and publicly demonstrating community conversation in seeking feedback on this project's activities in the context of global Wikimedia content development. The project coordinator should describe their related research. Who are key collaborators for this project? School of Data Science, University of Virginia Daniel Mietchen, data scientist Lane Rasberry, Wikimedian in Residence The School of Data Science at the University of Virginia is importing structured data into Wikidata, evaluating the quality of information in Wikipedia and Wikidata, and documenting best practices for university partnerships with the Wikimedia platform. This team identifies the research content for which there is a need in research discovery to be FAIR, available for query, and translated to promote global collaboration. Daniel Mietchen, data scientist, School of Data Science at the University of Virginia, is the Principal Investigator of the Scholia / WikiCite project to develop the Wikipedia / Wikimedia platform based interface for discovering and visualizing scholarly publications in a free and open system analogous to the popular but closed product Google Scholar. Dr. Mietchen's primary concern is academic publications, whereas this proposal to Wellcome would integrate clinical trial data into this network and also localize the interface for non-English pilot languages relevant to the developing world. Another way to say this is that Dr. Mietchen operates a Wikipedia-based tool similar to PubMed and Google Scholar, and this project would collaborate with him to integrate ClinicalTrials.gov data into it and permit non-English language use. UVA Global, University of Virginia: this is a language department at the university where faculty and classes will translate and publish structured data into Wikidata, where it will be FAIR and open in the semantic web. Wikimedia community organizations: these organizations provide community feedback on publication in Wikipedia and Wikidata and also on the translation process. These partnerships ensure participation among stakeholders and regional communities of users. What outcomes will the completed project have? This project will integrate ClinicalTrials.gov into Wikidata, making it once and for all generally machine readable and free and open for export to any other platform. Furthermore, we will translate the search interface to three languages, Hindi, Bengali, and Swahili, bringing this data to those languages for the first time and as a precedent in global diversity. If we are successful to the limits of our expectations, then all clinical trials data forever after will be free and open in the Semantic Web. Furthermore, we will set the precedent in this project that linguistic diversity must be central to open data projects of global interest. Finally, we will publish a case study of this project in advocacy of accessible open data. Describe the vision for your proposal, describe how it will promote open research, and explain how you will evaluate impact.
Vision Our vision for this project is to increase public understanding and global discourse of medical research by making catalog data on clinical trials much easier to access, query, and visualize in aggregate in English and three pilot underserved languages. Our aims are to enable the following: 1. through publication in Wikidata, professionals in clinical research will have radically increased access to routine data about clinical trials, including from ClinicalTrials.gov and PubMed 2. beyond conventional clinical research data, and for the benefit of the general public and humanities research, through Wikidata we will pilot access to previously inaccessible social ClinicalTrials.gov data including integration with geolocation data, grant and funding awards, corporate financing, and demographic data such as nationality, gender, ethnicity, or socioeconomic status among research participants 3. after sharing the data, we will document accessibility options for all kinds of people, including citizen researchers, to use it. While the primary initial userbase will be people who already use ClinicalTrials.gov, we seek to make this data accessible and interesting to undergraduate students of all disciplines and to pilot data accessibility in non-English languages including Hindi, Bengali, and Swahili. Open practices Openness and FAIR data integration are strengths of this proposal which we take for granted as superior, and instead our focus regarding open data practices will be on good reporting of what content we share and documenting our publication process as a model for others to emulate. Our publication venue is Wikidata, which has been the most popular, FAIR, and open cross-domain data repository in the world since at least 2015. This project starts with semi-structured open data in ClinicalTrials.gov which we will map to Wikidata, thereby making it highly structured, FAIR, and accessible in the Semantic Web and in multiple languages. Perhaps more significant than our making this data FAIR is our intent to document our process as a case study to demonstrate how the data was inaccessible and not FAIR before. Currently, many researchers see ClinicalTrials.gov as FAIR and open because they compare it to conventional data management. This project will demonstrate how much more open this data can be and what networked integration can accomplish. Monitoring Monitoring is a strength of this proposal which we take for granted as superior, and instead our focus regarding monitoring will be on good reporting of what monitoring we accomplish and documenting our monitoring process as a model for others to emulate. This project will publish its output into Wikidata, the structured data general reference repository which is part of the Wikimedia platform. Since its inception in 2001, Wikipedia and the Wikimedia platform have developed a culture and community where anyone can edit and a mix of humans, human-operated semi-automated tools, and automated bots monitor the billions of edits to millions of publications which hundreds of thousands of people make every day in hundreds of languages. Our view is that in comparison to any other general interest data curation project, Wikidata provides the most openness and transparency to scrutiny and natively provides the most information about how it processes its data collections.
The best way to describe the monitoring plan for this project is to say that we will use the native Wikimedia platform monitoring suite of tools and products to collect and report metrics including count of edits; count of reported changes or conflicts; count of errors identified in the source dataset; count of comments; count of active reviewers and volunteer participant editors; and audience communication impact. This project has the strength of having a designed monitoring system in place which we will not change. Instead, we will make a model report collecting the metrics which are relevant to this project, and we will document how we collect those metrics from the Wikidata platform and how we interpret them, and we will create documentation for anyone else to post data for research production into Wikidata and monitor their own projects after our model. Success indicators include the following: 1. integration of records from 80% of ClinicalTrials.gov trials into Wikidata, with each trial having an average of 10 structured data statements of fact 2. translation of a limited vocabulary for queries and the web interface to make this data accessible in English, Hindi, Bengali, and Swahili 3. published comments -endorsement or other feedback -from a diverse community of 100 Wikimedia editors 4. publication of documentation for anyone to model this project and in advocacy of FAIR and open data What is your plan for managing project deliverables? The short explanation of our output management plan is that we will publish everything into the Wikimedia platform, and deposit copies into our university institutional repository, and additionally publish a research paper in an indexed academic journal. All data from this project will have a Creative Commons Zero (CC0) dedication, and all other media will have a Creative Commons Attribution 4.0 International (CC BY 4.0) license. We intend for every part of this project to be open, FAIR, and accessible to a diverse audience of users. There are two kinds of research outputs for this proposal: structured data and prose documentation. Our primary venue for publishing structured data is Wikidata, because there it becomes available for production or export in the Semantic Web and keeps metadata labels in multiple languages for its provenance and open licensing. We will additionally publish a copy of our data as its own dataset and media product, including in Zenodo and our university's institutional repository. We will set up a Wikimedia research project page, as is customary for any project in the Wikimedia platform, and either host or link out to all media projects from that page in the established way for this platform. This research page will be a multimedia interface for accessing the data, using the data, and browsing prose documentation. Prose documentation will include instructions for using this data in Wikidata or exporting it for use in any other context. We will also publish information about the project to encourage broad social discourse in diverse academic fields about public access to clinical trials research. Audiences which we imagine include researchers, non-research professionals, and citizens with interest in clinical research, health care, public policy, corporate finance, research funding, public health, and social disparities. 
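To make the monitoring plan above more concrete, here is a small, hedged sketch of pulling readership numbers from the public Wikimedia Pageviews REST API, one of the native metrics tools the plan refers to. The endpoint shape is the documented wikimedia.org REST v1 interface; the article title and date range are illustrative placeholders.

```python
# Hedged sketch: monthly pageviews for one article via the public Wikimedia
# Pageviews REST API. The article title and dates are placeholders.
import requests

def monthly_views(article, project="en.wikipedia.org",
                  start="20210101", end="20211231"):
    url = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
           f"{project}/all-access/user/{article}/monthly/{start}/{end}")
    resp = requests.get(url, headers={"User-Agent": "metrics-sketch/0.1"})
    resp.raise_for_status()
    return {item["timestamp"]: item["views"] for item in resp.json()["items"]}

print(monthly_views("Clinical_trial"))
```

The same suite of native tools covers edit counts and editor activity, which is why the plan treats the monitoring system as already designed rather than something to build.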
While our most detailed prose explanations will be in English, as the structured data for this project is multilingual, we will also publish our portal interface in English, Hindi, Bengali, and Swahili. Provide an explanation of the budget This project has two salaries -one for a researcher to oversee integration of the content into Wikidata, and another for a data scientist to provide assistance with refining the ClinicalTrials.gov data to the Wikidata model. Student researchers will conduct evaluation of data quality, characterization of this project's dataset within the Open Semantic Web, and critique of the proposal for the ethical considerations which it raises. There are two student research projects planned -one for a group of data science students whose analysis will include machine learning and probably entity disambiguation and matching, and another for a summer project in which the students will collaborate with the Center for Ethics and Justice at the university to document the data's accessibility and utility. The translations will happen fairly early in the project, as they concern the labels and query terms for accessing the data, and usually not the data itself. This project begins with a substantial structured data corpus in the target languages and will build from that rather than originating any new system. Hosting institution: School of Data Science, University of Virginia
4,592
2021-03-25T00:00:00.000
[ "Medicine", "Computer Science" ]
Malware Analysis Using Visualized Image Matrices This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are applicable to packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples, and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracies of 0.9896 and 0.9732, respectively. Introduction Malware authors have been generating new malware and malware variants through various means, such as reusing modules or using automated malware generation tools. As some modules for malicious behavior are reused in malware variants, malware variants of the same family may have similar binary patterns, and these patterns can be used to detect malware and to classify malware families. Moreover, most antivirus programs focus on malware signatures, that is, string patterns, to detect malware [1]. However, various detection avoidance techniques such as obfuscation or packing techniques are applied to malware variants to avoid detection by signature-based antivirus programs and to make analysis difficult for security analysts [2,3]. With the help of malware generation techniques, the amount of malware is increasing every year. Although security analysts and researchers have been studying various analysis techniques to deal with malware variants, these variants cannot all be analyzed because malware employing avoidance techniques is increasing exponentially. Therefore, new malware analysis techniques are required to reduce the burden on security analysts. Recently, several malware visualization techniques have been proposed to help security analysts to analyze malware. In this paper, we propose a novel method to analyze malware visually to classify malware families. The proposed method converts the opcode sequences extracted from the malware into images called image matrices and calculates the similarities between each image. In addition, we apply the proposed method to the execution traces extracted through dynamic analysis, so that malware employing detection avoidance techniques such as obfuscation and packing can be analyzed. To reduce the computational overheads, we extract the opcode sequences only from the blocks that are related to staple behaviors, such as functions and application programming interface (API) calls, by using a major block selection technique [2]. Representative images of individual malware families are generated and are used to classify unknown samples rapidly.
Using these image matrices, we obtain the similarities between the images after the RGB-colored pixel information of the images is vectorized and the pixel similarities are calculated. This paper is organized as follows. In Section 2, malware analysis-related studies are described. In Section 3, malware analysis methods using visualized opcode sequences and the methods to calculate similarity are proposed, and the experimental results are presented in Section 4. Finally, in Section 5, conclusions and future directions are provided. Related Work Control flow graphs (CFGs) are generated by dividing the instructions extracted through disassembling into blocks and by connecting the directed edges between the blocks. Some malware analysis methods using these CFGs as signatures have been proposed. Cesare and Xiang [5] proposed a method that defines CFGs as signatures in string form that consist of a list of graph edges for the ordered nodes and that measures the similarities among signatures by using the Dice coefficient algorithm [17]. Bonfante et al. [6] proposed a method that converts the CFGs into tree-based finite state machines through syntactic analysis and semantic analysis and then uses them as signatures. Briones and Gomez [7] proposed an automated classification system based on CFGs. The CFGs are summarized as three tuples including the number of basic blocks, the number of edges, and the number of subcalls, and then two functions can be compared. However, if the complex information is summarized into a small size, high false alarms may occur. There is much research aimed at detecting malware based on information such as system-calls, functions, and API calls, which is used for malware execution in operating systems. Shang et al. [8] proposed a method that generates function-call graphs, which represent the caller and callee relationships between functions, as signatures of malware samples, and they then compute the similarities by using those function-call graph signatures. Kinable and Kostakis [9] classified malware using the call graph clustering technique. Their proposed method generated the call graphs against the functions included in the malware samples, and they performed the clustering based on the structural similarity scores of the call graphs calculated through the graph edit distance algorithm. Statistical information regarding the instructions extracted through disassembling can be used in the static analysis of malware. Rad and Masrom [11] proposed a method based on the instruction frequencies in order to classify metamorphic malware. Since instruction frequencies are mostly not changed, even though obfuscation techniques are applied to the malware, the instruction frequencies can become the features of malware. Therefore, their proposed method calculated a distance by using the instruction frequencies extracted from each malware sample, and they then classified metamorphic malware by using the distance value. Bilar [12] showed that there were different instruction frequencies in different malware. Particularly, they showed that rare instructions in malware could become better predictors to classify malware than other instructions could. Han et al. [13] proposed a method using instruction frequencies. The proposed method generated instruction sequences that were sorted according to the instruction frequencies, and they showed that the distances between instruction sequences from the same malware family had low distance values. Santos et al.
[14] proposed a malware classification method using n-gram instruction frequencies in which n-gram instructions included n instructions. In the proposed method, they generated the vectors for each n-instruction sequence and used some of the vectors as signatures. In addition, dynamic analysis methods including tainting, behavior-based methods, and API call monitoring have been proposed. Egele et al. [18] proposed a method using tainting techniques, which tracks the behaviors related to the flow of information that are processed by any browser helper object (BHO). If the BHO leaks sensitive information to the outside, the BHO is classified as malware. Fredrikson et al. [19] proposed a method that automatically extracts the characteristics of behaviors by using graph mining techniques. Their proposed method made clusters by identifying core CFGs for each similar malicious behavior in a malware family, and these were then generalized as a significant behavior. Furthermore, methods based on dynamic monitoring techniques using an emulator have been proposed. Vinod et al. [20] traced malware API calls via dynamic monitoring within an emulator and measured their frequencies to extract critical APIs. Miao et al. [21] developed a tool called "API Capture" that extracts the major characteristics automatically, such as system-call arguments, return values, and error conditions, by monitoring malware behavior in an emulator. Even though there are many static and dynamic analysis methods available, new techniques that can complement existing techniques are still needed to improve malware analysis performance and the convenience of analysis by security analysts. Recently, several visualization methods have been proposed to help security analysts to observe the features and behaviors of malware [22]. To visualize malware behavior, Trinius et al. [23] proposed a method that visualized the percentages of API calls as well as malware behavior into two images called a "treemap" and a "thread graph," respectively. Saxe et al. [24] developed a system that generated two types of images. One image showed the system-call sequences extracted from malware system-call behavior logs, and the other image showed similarities and differences between selected samples. Conti et al. [25] proposed a visualization system that shows images for the byte information of malware samples, such as byte values, byte presence, and duplicated sequences of bytes contained within a sample. Anderson et al. [26] proposed a method to show the similarities between malware samples in an image named a "heatmap." Nataraj et al. [27] converted the byte information into gray-scale images and classified the malware using image processing. After generating images using byte values, they applied an abstract representation technique for the scene image, that is, GIST [28,29], to compute texture features. Moreover, they proved that the binary texture analysis techniques using image processing could classify malware more quickly than existing malware classification methods could [30]. However, since the texture analysis method has large computational overheads, the proposed method has problems in processing a large amount of malware [31].
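To make the instruction-frequency line of work surveyed above more concrete, the following sketch is our own illustration of the general shape of the approaches in [11-14], not any cited author's exact method: represent each sample by n-gram opcode frequencies and compare samples by a simple distance.

```python
# Illustrative sketch of instruction-frequency features (after [11-14]):
# n-gram opcode frequency vectors compared with a Euclidean distance.
from collections import Counter
import math

def freq_vector(opcodes, n=1):
    """Relative frequencies of the n-gram opcode sequences in one sample."""
    grams = [" ".join(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1)]
    if not grams:
        return {}
    total = len(grams)
    return {g: c / total for g, c in Counter(grams).items()}

def distance(f1, f2):
    """Euclidean distance over the union of observed n-grams."""
    keys = set(f1) | set(f2)
    return math.sqrt(sum((f1.get(k, 0) - f2.get(k, 0)) ** 2 for k in keys))

a = freq_vector(["push", "mov", "call", "mov", "ret"], n=2)
b = freq_vector(["push", "mov", "call", "ret"], n=2)
print(distance(a, b))  # low distances suggest related samples
```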
In this paper, we propose a novel analysis method using image matrices to represent malware visually so that the features of the malware can be easily detected and the similarities between different malware samples can be calculated faster than with other visualization methods. Our Proposed Method 3.1. Overview. Our proposed visualized malware analysis method consists of three steps, as shown in Figure 1. In Step 1, opcode sequences are extracted from malware binary samples or dynamic execution traces. Then, image matrices in which the opcode sequences are recorded as RGB-colored pixels are generated in Step 2. In Step 3, the similarities between the image matrices are calculated. In the following sections, each step is explained in detail. Figure 2 shows the process to extract opcode sequences from malware binary samples for Step 1 through static analysis or dynamic analysis. Basic Block Extraction. To extract opcode sequences from malware binary samples, the binary sample files are first disassembled and divided into basic blocks, using disassembling tools, such as IDA Pro [32] or OllyDbg [33]. However, if obfuscation or packing techniques are applied in malware samples, static analysis using a disassembler is not feasible [34,35]. Therefore, some malware samples (in which obfuscation or packing techniques are applied) need to be executed in a dynamic analysis environment [36]. In dynamic analysis, as shown in Figure 3, some repeated instruction sequences are included in the dynamic execution traces because a program may have some loops or repeated calls, and these repeated sequences can increase the size of not only the execution traces, but also the processing overheads. Kang et al. [37] proposed a repetition filtering method for dynamic execution traces. Our filtered basic blocks are extracted from the dynamic execution traces after the repetition filtering method is applied. Finally, if basic blocks are extracted from malware samples or dynamic execution traces, then major blocks are selected from the basic blocks by our proposed technique, which is explained in the next section. Major Block Selection. The malware analysis method proposed in this paper does not target all of the basic blocks from the binary disassembling results or dynamic execution traces. If all the basic blocks are used for analysis, then some blocks for binary file execution in an operating system are included in the basic blocks. Moreover, many meaningless blocks may be included in the basic blocks extracted from malware samples. As a result, the number of basic blocks that have to be analyzed by the security analysts is increased and distinguishing malware features becomes difficult. In addition, the number of comparisons between the basic blocks from two malware samples is also increased dramatically. On the contrary, if the number of unnecessary blocks can be reduced as much as possible in the malware analysis, the analysis time cost for not only the individual malware sample, but also a large number of malware samples can be reduced. Therefore, we selected some blocks relating to suspicious behaviors and functions from among the entire set of basic blocks. As shown in Figure 4, the blocks selected as major blocks are those that include the CALL instruction, which is used to invoke APIs, library functions, and other user-defined functions. This is because not only user-defined functions, but also various system calls are used to implement the behaviors and functions of most programs. 
If blocks that include these function invocation instructions are used in malware analysis, malware features can be extracted [2]. Through the major block selection technique, the image matrix generating time is reduced by recording only those selected blocks in the image matrix. Opcode Sequence Extraction. To extract malware features, as shown in Figure 5, the opcode sequences in the individual major blocks are used as malware information. From each opcode, only the first three characters are used to generate information for the block. The reasons for using a three-character opcode are as follows. From the entire set of opcodes used in the Intel x86 assembly language, 41.4% of them have three characters, and the appearance frequencies of these opcodes within the binary files are higher than for other opcodes. On the other hand, 28.8% of opcodes have four characters, 17.8% have five characters, and 5.2% have over six characters, respectively; thus, their appearance frequencies are relatively low. In addition, since the meanings of the individual opcodes are maintained even though they are reduced to three characters, the different opcodes can be distinguished. For example, four-character opcodes such as PUSH are reduced to three characters, PUS, and two-character opcodes such as OR are expanded to a three-character opcode by adding a blank character. Then, these three-character opcodes are concatenated together, and the character string is used to represent the block as an opcode sequence, which is used to generate image pixels in an image matrix in the next step. Figure 6 shows the procedure for Step 2 that converts the opcode sequences into pixels in an image matrix. A hash function is used to decide the (x, y) coordinates and RGB colors of the pixels. Generation of the Image Matrix. To visualize a binary file as an image matrix, both the length and the width of an image matrix are initialized to 2^n, where n is selected by the users. To reduce the probability of collisions of the hash function, n should be large enough. In our experiments, we selected n = 8 to minimize collisions. The coordinate-defining module and the RGB color-defining module are used to generate image matrices. First, the coordinate-defining module defines the (x, y) coordinates of pixels on the image matrix for each code block. Second, the RGB color-defining module defines the color values of pixels on the image matrix. RGB colors are defined by calculating values of 8 bits each for the red, green, and blue colors. SimHash [38] is applied to the opcode sequences extracted in Step 1 in order to define both the coordinates and the color values of the pixels. SimHash is a locality-sensitive hash function used in similar-sentence detection systems, designed so that if the input values are similar, then the output values will also be similar. That is, since SimHash tokenizes the input strings and generates hash values for each token, if a few tokens are different in two input strings, then the generated hash values are not completely different, but are similar. Therefore, if the character strings of the opcode sequences are similar, then the outputs will be similar, and they will map onto similar coordinates in an image matrix. Once the coordinates and RGB colors of the individual pixels have been defined, RGB-colored images are recorded on the individual coordinates of image matrices. To provide human analysts with a more convenient visual analysis, pixels around the defined coordinates are recorded simultaneously.
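A minimal sketch of the two mechanisms just described, three-character opcode normalization and SimHash-based pixel placement, is given below. The small SimHash implementation and the bit layout (n + n bits for the coordinates, 24 bits for the RGB color) are our assumptions for illustration; the paper fixes n = 8 but does not spell out the exact layout.

```python
# Hedged sketch of Step 2: normalize opcodes to three characters and map each
# block's opcode sequence to an (x, y) coordinate and an RGB color via a tiny
# SimHash. The 2n+24 bit layout below is an assumption, not the paper's spec.
import hashlib

def normalize(opcode: str) -> str:
    """Truncate to three characters (PUSH -> 'PUS'), pad short ones (OR -> 'OR ')."""
    return opcode[:3].ljust(3)

def simhash(tokens, bits=40):
    """Classic SimHash: similar token multisets yield similar fingerprints."""
    counts = [0] * bits
    for tok in tokens:
        h = int.from_bytes(hashlib.md5(tok.encode()).digest(), "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    fp = 0
    for i, c in enumerate(counts):
        if c >= 0:
            fp |= 1 << i
    return fp

def block_to_pixel(opcodes, n=8):
    """Return ((x, y), (r, g, b)) for one major block's opcode sequence."""
    fp = simhash([normalize(op) for op in opcodes])
    x = fp & (2**n - 1)                # low n bits -> x in [0, 2^n)
    y = (fp >> n) & (2**n - 1)         # next n bits -> y
    r = (fp >> (2 * n)) & 0xFF         # remaining 24 bits -> RGB
    g = (fp >> (2 * n + 8)) & 0xFF
    b = (fp >> (2 * n + 16)) & 0xFF
    return (x, y), (r, g, b)

print(block_to_pixel(["PUSH", "MOV", "CALL", "OR"]))
```

Because SimHash is locality-sensitive, blocks whose opcode sequences differ in only a few tokens land on nearby coordinates with similar colors, which is the property the visualization relies on.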
As shown in Figure 7, nine pixels from (x − 1, y − 1) to (x + 1, y + 1) around an (x, y) coordinate for a block are recorded. If the images overlap each other because the coordinates defined for multiple opcode sequences are adjacent, as shown in Figure 8, the sums of the RGB colors become the new pixel colors. If the result of a color summing exceeds 255 (0xFF), the result will be set to 255. For example, if RGB1 is (255, 0, 0) and RGB2 is (0, 176, 50), the new color will become (255, 176, 50). The number of pixels recorded on an image matrix varies according to the major blocks, and the number of overlapping pixels will increase as the number of images increases. If there are too many overlapping images, then the size of the image matrix should be increased. Representative Image Matrix Extraction. Since many malware variants exist in each malware family, as the number of malware samples increases, the total amount and time of the similarity calculation increase, too. Therefore, we extracted a representative image matrix of each malware family to reduce the costs of malware similarity calculations. That is, when a new malware sample is found, the amount of time to calculate the similarity is reduced by comparing the image matrix of the new malware with the image matrices that represent individual malware families instead of comparing it with all of the image matrices of the existing malware samples. As shown in Figure 9, to extract a representative image matrix for each malware family, image matrices are generated for samples in malware families. Then, the representative image matrix is extracted by recording only the common pixels that have the same coordinates and RGB colors from the image matrices of individual malware samples in the same family. Figure 9 shows an example of the generation of representative images of malware families. Similarity Calculation Using Image Matrices. The advantage of the similarity calculation using the image matrices is a faster performance than with exact matching using the string type of opcode sequences, even though there are some extra false positives due to hash collisions. When using the string, the time complexity is O(n²) due to the process of finding pairs of exactly matched strings. However, if the image matrices are used to calculate similarities, since the coordinates and colors of the opcode sequences are defined through SimHash, the process of finding the pairs is skipped. Therefore, the time complexity of the similarity calculations using the image matrices is O(n), because only the color information of the pixels recorded on the same coordinates in both image matrices is used to calculate the similarities between the image matrices. Pixel similarity calculations are carried out first for pixels in each image matrix. The most important consideration in a similarity calculation in this case is that only those RGB color pixels recorded in the individual image matrices should be used. Image matrices have RGB-colored pixels on square images with black backgrounds. If black pixels are also used in similarity calculations, the similarities between samples from different malware families can be calculated as very high. Therefore, when the similarities of the image matrices are calculated, the following cases are considered for pixels on the same coordinates in the two image matrices, as shown in Figure 10. In this case, the vector angular-based distance measurement algorithm is used to calculate the similarities between color pixels.
This algorithm calculates similarity values by expressing the color pixels constituting each image as 3D vectors, as shown in (1), and then using both the angle information and the magnitude information of those vectors [39]. The cases are as follows:
(a) Case 1: if the pixels at the given coordinate in both image matrices are black, the pixel similarity calculation will not be carried out and the next pixel will be selected.
(b) Case 2: if one pixel in a selected area is black and the corresponding pixel in the other image is colored, the pixel similarity will be defined as 0.
(c) Case 3: if both pixels are not black but colored, the color pixel similarity will be calculated using the vector angular-based distance measurement algorithm of (1).
The similarity value of the image matrix pair, considering the individual cases, is calculated using the results from the pixel similarity calculations, as shown in (2). That is, the sum of the pixel similarity values calculated in Case 3 is divided by the number of pixels falling under Cases 2 and 3 to calculate the average:

$$\mathrm{Sim}(A, B) = \frac{\text{sum of pixel similarity values in Case 3}}{\text{number of pixels in Case 2 and Case 3}}. \quad (2)$$

Experimental Data and Environment. Using the visual analysis tools implemented in this paper, and the malware samples shown in Table 1, image matrices were generated, and similarity calculations were performed. First, set A consists of 290 malware samples from 16 families in which detection avoidance techniques, such as obfuscation and packing, are not applied. These malware samples are used to extract the basic blocks through static analysis using a disassembler. Second, set B consists of 560 malware samples from 14 families in which packed and non-packed malware samples coexist. We used these malware samples to generate dynamic execution traces through the PIN tool in a dynamic analysis environment, and the filtered basic blocks are extracted from the dynamic execution traces through the repetition filtering technique, as explained in Section 3.2.1. For the experiments, we constructed an experimental environment consisting of the analysis server, malware server, and monitoring machine, as shown in Figure 11. We set up VMware vSphere ESXi 5.1 in the analysis server, which has an Intel Xeon E5-1607 processor and 24 GB of main memory, and we installed two Windows operating systems (OSs) as guest OSs. In the first Windows OS, the dynamic execution traces were extracted through the PIN. In the other Windows OS, the image matrices were generated and similarities were calculated through our visual analysis tool. Malware samples that are provided to the analysis server and the dynamic execution traces extracted from the analysis server are stored in the malware server. The monitoring machine controls the analysis server through the PowerCLI tool, which is a remote command line interface. Experiments with Static Analysis. For the experiments in this section, we disassembled the malware samples within set A and extracted major blocks from the basic blocks. We then generated the image matrices using the opcode sequences of those major blocks and analyzed the similarities among them. Image Matrix Generation. In this paper, we set the sizes of the generated image matrices to 256 × 256 pixels for the experiments. As shown in Table 2, the reason for using this image matrix size can be briefly summarized as the middle ground between file size, similarity calculation time, and classification accuracy.
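Returning to the three-case analysis and Eq. (2), the following sketch implements the per-image similarity. Since the body of Eq. (1) does not survive in this text, the per-pixel measure below, which combines the angle between the two RGB vectors with their magnitude difference, is our assumption in the spirit of the vector angular-based measure [39] rather than the paper's exact formula.

```python
# Sketch of Step 3's case analysis around Eq. (2). pixel_sim is an assumed
# angle-and-magnitude similarity, not the exact Eq. (1) from the paper.
import math

BLACK = (0, 0, 0)

def pixel_sim(p, q):
    """Angle/magnitude similarity of two non-black RGB vectors, in [0, 1]."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    cos = max(-1.0, min(1.0, dot / (norm_p * norm_q)))
    angle = 1 - (2 / math.pi) * math.acos(cos)
    mag = 1 - math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q))) / math.sqrt(3 * 255**2)
    return angle * mag

def image_sim(A, B):
    """Eq. (2): average Case-3 pixel similarity over Case-2 and Case-3 pixels.
    Images are modeled as equal-length flat lists of RGB tuples."""
    total, counted = 0.0, 0
    for p, q in zip(A, B):
        if p == BLACK and q == BLACK:   # Case 1: skip entirely
            continue
        if p == BLACK or q == BLACK:    # Case 2: similarity 0, but counted
            counted += 1
            continue
        total += pixel_sim(p, q)        # Case 3: colored in both images
        counted += 1
    return total / counted if counted else 0.0
```

Skipping the all-black Case 1 pixels is what keeps two mostly empty matrices from scoring as spuriously similar, which is the concern raised in the text above.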
The accuracy was calculated using (3):

$$\text{Accuracy} = \frac{\text{number of correctly classified malware samples}}{\text{total number of malware samples}}. \quad (3)$$

Figure 12 shows examples of the image matrices generated from the malware samples of individual families within set A. Only three image matrices for each malware family and one representative image matrix, extracted by recording only those pixels commonly existing in all image matrices, were included. Since the number of opcode sequences used as malware information varied, the number of pixels recorded on the image matrices differed. In the case of malware, many of the same or similar RGB-colored pixels are found among the image matrices of malware samples classified as the same family. However, even if pixels are recorded on the same coordinates of different image matrices, the pixel similarities have different values if the RGB color information of the relevant pixels is different. Our results show that image matrices of variants included in the same malware family can be shown to be similar and that clear differences exist among malware samples from different families. Figure 13 shows the image matrix differences before and after the application of the major block selection technique. The image progression indicates that the number of pixels recorded in the image matrices decreases because of the selection of major blocks from among the basic blocks. The similarity changes after the application of major block selection are described in the next subsection. Major Block Selection. Similarity calculations of the image matrices after the application of major block selection are shown in Figures 14 and 15. When the major block selection technique was applied, the similarity changes ranged from a minimum of 0.002 (the Tab family) to a maximum of 0.147 (the Lemmy family) among the malware samples in the same families. The results of the similarity calculations for different families showed that the changes ranged upward from a minimum of 0.001 (the Eva family). Figure 16 shows the results of the similarity calculations of an unknown sample both with all of the image matrices of malware samples and with the representative image matrices of individual families. When all of the image matrices were used, the Tab family was found to have an average similarity value of 0.781, while all the other families had values smaller than 0.05. When representative image matrices of individual families were used, the average similarity of the Tab family had a value of 0.348, while the other families had values of less than 0.03. Therefore, the unknown sample is expected to be a variant of the Tab family. In fact, the diagnostic name of the unknown sample used for this experiment was Trojan-Dropper.Win32.Tab.gd. Table 3 shows the list of the malware samples selected as unknown samples for this experiment, and it includes the results of the similarity calculations using representative image matrices. All of these malware samples except Agobot.02.a and Sdbot.04.a were detected as variants of their respective families when the proposed methods were applied together, that is, the major block selection and representative image extraction techniques. Whereas the similarities between malware samples from the same families had values between 0.19 and 0.36, the similarities between malware samples from different families were less than 0.05.
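As a sketch of how the representative image matrices and the unknown-sample classification described above fit together, the code below intersects per-family pixel dictionaries and picks the best-matching family. The dict-of-pixels representation and function names are our illustrative assumptions, and the 0.19 threshold simply echoes the lower bound of the same-family similarities just reported.

```python
# Sketch of representative image extraction and family classification.
# Images are modeled as dicts mapping (x, y) -> (r, g, b); any similarity
# function compatible with this representation can be passed to classify().

def representative(images):
    """Keep only pixels sharing both coordinates and color across a family."""
    rep = dict(images[0])
    for img in images[1:]:
        rep = {xy: c for xy, c in rep.items() if img.get(xy) == c}
    return rep

def classify(unknown, family_reps, image_sim, threshold=0.19):
    """Compare an unknown sample against one representative per family,
    instead of against every known sample; return best family or None."""
    best, best_sim = None, 0.0
    for name, rep in family_reps.items():
        s = image_sim(unknown, rep)
        if s > best_sim:
            best, best_sim = name, s
    return best if best_sim >= threshold else None
```

Comparing against one representative per family instead of every stored sample is what reduces the number of similarity calculations from the number of known samples to the number of families.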
The classification accuracy, which was obtained by using the image matrices that were generated through the static analysis, was 0.9896. That is, only three malware samples in set A were misclassified into other malware families. Our result was a little better than the average classification accuracy of 0.9757 using the binary texture analysis in [30]. Therefore, we conclude that our methods are feasible for malware classification because similarities within the same families will be relatively high compared to the similarities between malware samples from different families. Execution Trace-Based Experiments. For the execution trace-based experiments, the malware samples within set B in Table 1 were executed in dynamic analysis environments using the PIN tool. Dynamic execution traces were then generated, and the repetition-filtered basic blocks were extracted from those execution traces. After filtering, the major blocks relating to suspicious behaviors and functions were selected. Our proposed techniques were applied to these execution traces to generate image matrices and to analyze similarities. Figure 18 shows the decrease in size of the execution traces resulting from the application of the repetition filtering and major block selection techniques. When the sizes of the execution traces were first reduced through the repetition filtering technique and the major block selection method was then applied, the sizes of the execution traces were reduced by 76.5% on average (69.3% minimum, 83.6% maximum) compared to the original execution traces. Figure 19 shows changes in the generated image matrices resulting from the application of the repetition filtering method and the major block selection. Decreases in the number of recorded pixels in the image matrices can be recognized when the three image matrices are compared. Figures 20 and 21 show changes in the similarity values with the application of the repetition filtering technique and the major block selection. Although changes in the values are not large, some malware families are distinguishable if the threshold of similarity values is set properly. In these experiments, the average similarity values of malware samples from the same families were approximately 0.65, and those from different families were approximately 0.36. Compared to the results of the static analysis described previously, the results from the execution trace-based experiments show a relatively small gap between same-family and different-family similarities. The reason for these results is that similar system dynamic link libraries (DLLs) were invoked when the malware samples of each family were executed in the dynamic analysis environment to extract the dynamic execution traces. As a result, similar opcode sequences due to the DLL calls and the execution of DLLs from the dynamic execution traces were recorded in the image matrices of individual families, so the similarity values increased. Nevertheless, the classification accuracy obtained through the similarity calculations using the image matrices that were generated based on the execution traces was 0.9732, because only 15 malware samples in set B were misclassified, and this result was similar to the accuracy in [30]. Conclusions and Future Work In this paper, we proposed a novel method to analyze malware samples visually by generating image matrices. To generate the image matrices, opcode sequences were extracted through static analysis and dynamic analysis.
In addition, we calculated the similarities between the malware variants using vectorized values of the RGB-colored pixels in the image matrices. The similarity calculation method using the image matrices has a faster performance than exact matching using the string type of opcode sequences or basic blocks. Our proposed method was implemented as a visual analysis tool. The experimental results showed that malware variants included in the same family were similar when converted into image matrices, and the similarities between variants of the same family were shown to be higher than those between samples from different families. With our proposed method, security analysts can analyze malware samples visually and can distinguish similar malware samples for further analysis. Our future studies include faster malware detection and classification using parallelization techniques and real-time processing based on GPGPU.
6,973.4
2014-07-16T00:00:00.000
[ "Computer Science" ]
What kinds of groups are group agents? For a group to be an agent, it must be individuated from its environment and other systems. It must, in other words, be an individual. Despite the central importance of individuality for understanding group agency, the concept has been significantly overlooked. I propose to fill this gap in our understanding of group individuality by arguing that agents are autonomous as it is commonly understood in the enactive literature. According to this autonomous individuation account, an autonomous system is one wherein the constituent processes of the system actively produce and sustain that self-same system, which will run down or fail if any of these constituent processes cease. This definition of autonomy provides us with a precise and operational account of the individuality of group agents. I will then compare this account to those of Carol Rovane and Raimo Tuomela to argue that it offers the best explanation of what kinds of groups are group agents. Introduction What kinds of groups are group agents? Despite the recent upsurge of interest in the nature of group agency (List & Pettit, 2011; Pauer-Studer, 2014; Rovane, 2019; Tollefsen, 2002, 2015; Tuomela, 2013), the issue of determining what kind of group is the right kind remains a contested matter. To address this question, I will focus on adapt to these new circumstances. Under normal conditions, however, the members are each alone and almost entirely constrained. It is only as a rebellion to this constraint that striking even makes sense at all -if the attitudes, beliefs, or goals of the members and that of the group were aligned, then there could never be a situation in which a majority of the group's members go on strike. Even those in positions of power within these structures do not have a great deal of personal autonomy when they are acting on behalf of the group. It is illustrative of this point that when Steve Jobs died, nothing fundamentally changed for Apple. This is because the corporation was not bound by the man; he, instead, only served a particular function within the group structure. These are the sorts of groups that I consider the object of analysis here. These I will call proper or genuine groups, as opposed to mere collectives, which involve those 'groups' of people with shared intentions. Intuitively, groups that count as proper groups are things like corporations, political parties, NGOs, and universities. Being able to differentiate between proper groups and mere collectives will better allow us to understand and address unjust collective actions and coordination problems, and will improve our theorising about social and political problems more generally. It matters, for instance, whether a particular injustice was perpetrated by a mere collective of singular agents or by a genuine group agent. In the former case, responsibility lies solely with the people involved. Preventing the same injustice from occurring again should therefore involve improving (moral) education for singular people, addressing factors that affect particular people's lives, and other individualised responses. In the latter case, however, the group agent determines the best available actions, and so our resolution more likely lies in restructuring the agent or influencing its environmental incentives.
Furthermore, by simply having a better picture of our social and political landscapes in terms of proper groups and mere collectives, we will be able to think more accurately about influences on our own ways of thinking and living together. To provide an account of proper group agents and answer the titular question, I will argue for what I call here the autonomous individuation account of individuality found in the enactive literature on agency (Di Paolo & Thompson, 2014; Di Paolo et al., 2017). In one sense, then, this paper is a partial defence and expansion of the enactive theory of agency to groups. Hence, many of the views expounded and defended here, as well as the methodology of the argument more generally, may be distinctive of that approach. That said, I believe the extra work involved in expounding the atypical use of certain concepts present in the enactive literature is worthwhile since the theory provides us with the most robust account of group agency presently available. I begin the argument by explaining 'agency' as it is used here and defending the basic enactive definition of agency as involving a system acting in order to achieve some goal (Barandiaran et al., 2009, p. 369). This view, I argue, is the common understanding of agency in the social ontology literature, being either implied or stated as the initial perspective from which much reasoning on group agency begins. Therefore, it follows that individuality is a necessary element of defining group agency as it is commonly understood. Next, I will discuss the concept of individuality itself, explicating the necessary features of a definition of individuality by drawing on the work of Barandiaran et al. (2009), Jonas (1966), and Meincke (2019). I then argue for the autonomous individuation account by showing how autonomy is central to individuality in agents and then providing the enactive definition of autonomy. I will then compare the autonomous individuation account with the other candidates for definitions of individuality in the group agency literature, focusing on the work of Rovane (2019) and Tuomela (2013). I demonstrate that their definitions often fall short in many respects compared to the autonomous individuation account. I will conclude by considering a few examples of different kinds of groups in order to show that the autonomous individuation account of individuality is a robust and operationalizable account, and to demonstrate how it differs from the other available accounts. Agency: an overview Agency, as it is understood here, refers to 'at least, a system doing something by itself according to certain goals or norms within a specific environment' (Barandiaran et al., 2009, p. 369). This basic definition comes from an investigation of the discussions on agency in cognitive science and adaptive behaviour modelling and involves three essential parts (Barandiaran et al., 2009, pp. 368-9). First, there must be a system that is separate from its environment. This is the individuality criterion, with which we are primarily concerned here. The second and third criteria concern the individual's ability to act and the goals or norms according to which that system acts. These are called the interactional asymmetry and normativity criteria respectively (Barandiaran et al., 2009, pp. 369-72). I take this to be the basic definition of the kind of agency that I am interested in here. 'Agency' is used in different ways across (and even within) disciplines, such as in chemistry, sociology, or meta-ethics.
When I claim that groups can be agents, however, I mean that they can be agents in the same way that humans, dogs, and other organisms can be agents. What distinguishes different agents of this sort will be in the particular ways that their agency is established or manifested. A human agent, for instance, can consciously determine their own goals and can reflect on the best ways to influence the world to achieve those goals. A bacterium, on the other hand, is much more restricted in the norms that it could possibly pursue and is unlikely to be capable of any kind of conscious reflection on its reasons for taking one action over another. The most obvious difference between group and singular agents concerns the internal relations between their parts. Group agents are physically discontinuous systems whereas singular agents are physically continuous. Nevertheless, whether the system is continuous or discontinuous, there must be something that makes it a 'system'. This basic idea of agency is notably similar to other ideas expressed in the group agency literature, which is indicative of the fact that the core concept of 'agency' being theorised is the same for both the enactivists and for social ontologists concerned with group agency. List and Pettit's (2011) definition, for instance, differs predominantly due to a couple of additions. They argue that an agent is a system with representational states and motivational states capable of processing these states and acting on their environment in order to pursue their motivations (List & Pettit, 2011, p. 20). The addition of representational states and the capacity for processing one's states ultimately suggests that agency necessarily involves a particular cognitive framework, but the core idea that there is still some distinct system or individual doing something for a goal remains. Tuomela (2013) is similarly explicit: 'The account [Tuomela's] regards organized groups that are capable of action as functional group agents' (p. 13); 'the notion of group agent (or that of a group capable of action)' (p. 46). This capacity for action depends on the singular agents who make up the group acting together for the same authoritative group reasons (Tuomela, 2013, p. 23). In this way, the group constitutes a system that is evidently distinct from its environment acting according to its own, internally determined group reasons. Finally, Tollefsen (2002), in arguing that group agents are intentional agents, does so on the basis that 'our explanations of the actions of organizations in terms of their beliefs, intentions, and desires are successful' (p. 397). So, again, an agent must be a system (an organisation) whose beliefs, intentions, and desires we are trying to explain with reference to the acts they have performed. Though she does not appear to have a preferred definition of how a system is constituted, Tollefsen does note that, to be a group agent, groups must in some way form a coherent whole: 'The performance of joint actions on the basis of group ends, shared intentions, joint commitments, or we-intentions might very well be the way in which corporate agents form and sustain their agency over time' (Tollefsen, 2015, p. 47). There must, she suggests, be some persistent entity that is in some way unified. The idea that agency involves at least a system doing something in its environment to achieve a goal is evidently uncontroversial.
Some accounts of group agency add additional criteria, as with List and Pettit, and some focus more heavily on particular aspects, as Tollefsen focuses primarily on the goals or norms of groups. Nevertheless, individuality, interactional asymmetry, and normativity are common to all. It is, I contend, the first of these that is most often overlooked, despite being a necessary condition of agency. Let us, then, give it the attention it deserves. The conditions of individuality Allowing us to identify group agents is an important part of any successful definition of group individuality, so here I will explicate the conditions of one. Following Jonas, Barandiaran et al. (2009) point out that an agential system must be capable of distinguishing itself as an individual and, in doing so, defining its environment for itself (p. 370). This is a given so long as we take agency to be an objective fact of certain systems, which is the non-metaphorical position taken by many other philosophers concerned with group agency (List, 2021, p. 4; List & Pettit, 2011, pp. 2-6; Pauer-Studer, 2014; Rovane, 2019, p. 4870; Tollefsen, 2002, p. 396; Tuomela, 2013, p. 47). We cannot, then, impose individuality on agential systems. By virtue of being the locus of activity, the agent is necessarily a self-distinguishing system. This is why autonomy is central to understanding agency, as I will argue further in the next section. That agents define their own identities as individual systems follows from the fact that agents are agents regardless of outside observers judging them so. But not all systems are genuine systems without external observers (Barandiaran et al., 2009, p. 369). This means that, when considering what constitutes a proper group, we cannot just assume that anything we describe or perceive as a 'system' qualifies. For instance, what belongs to a workspace as a system depends entirely on the functionality of the various parts in that space for the observer (Barandiaran et al., 2009, p. 369). This is a case where there is something that might be considered a 'system' according to certain understandings of the term, but this is only because of its use-value to external observers rather than it being a self-individuating system. We will need a more specific understanding of what constitutes the kind of systems we are interested in. Furthermore, List & Pettit (2011) claim that group agents must be able to persist through changes in membership and that 'any multi-member agent must be identifiable over time by the way its beliefs and desires evolve' (p. 32). The thrust of these claims is correct, though they require some amendment. The first of their points, that group agents must be able to persist through changes in membership, is reminiscent of Jonas's (1966) argument that organisms cannot be identical to their material parts. Jonas (1966) argues that if we were to take a purely material picture of the world, 'all the features of a self-related autonomous entity would, in the end, appear as purely phenomenal, that is, fictitious' (p. 78). Agential systems, for Jonas as for List and Pettit, are dependent for their existence on the availability of material parts while at the same time maintaining a separate functional identity that is not the same as the identity of its material parts. Jonas (1966) calls this relationship one of 'needful freedom' (p. 80): the agent both needs and is free from its matter.
Just as a person's identity does not change while they breathe, eat, and sweat, so too does a proper group maintain its identity through changes in membership. It is for this reason that individuality refers to the individuation of agents only, which is a special case of individuation. As Wayne Christensen and Mark Bickhard (2002) have aptly pointed out, there are a number of properties that can serve as observer-independent criteria for identifying a given system (p. 8). Physical cohesion is one such property. It allows certain systems to be individuated from their environment insofar as they are causally bonded in particular ways. If you kick a small rock, the entire rock will move while the ground below it will remain in place (Christensen & Bickhard, 2002, p. 8). In this case, however, the entire identity of the rock is given by its physical cohesion. If these particular physical bonds are broken, the rock no longer exists. Agents, on the other hand, actively maintain their structure by taking in new material to replace what has been or will be lost. This is true even in singular agents. Though singular agents are physically continuous beings, their parts are not permanently cohesive the way that a rock's parts are. For both singular and group agents, then, individuality cannot be defined in strictly physical terms. Hence, it makes sense that List and Pettit point to the idea that an agent must be flexible over time.

The latter part of their claim is that group agents must be identifiable over time by the way their beliefs and desires evolve. I do not agree with their specific claim, but it does point to a more general point that holds true for any agent. Meincke (2019) argues that things that persist through time must be conceived of as 'stabilised processes' and 'what matters from a metaphysical point of view is that any process of whatever kind persists as long as stabilisation can be maintained' (pp. 24-25). List and Pettit gesture toward the general rule that Meincke is concerned with - the persistence of identity over time. While the rock persists for as long as its molecular bonds hold, agents persist for as long as they can continue to be active systems. So, group agents may not be identifiable by the way they evolve, since we do not know without a picture of the system just what is evolving; but it is necessarily the case that persistence, as Meincke points out, requires change. Hence, it is a general rule that group agents do need to 'evolve' over time in order to persist. This might occur very quickly or exceptionally slowly, but in the face of a changing environment, the agent needs to adapt or the external conditions for its survival will no longer be met. Blockbuster as compared to Netflix serves as an apt and familiar example here. Beliefs and desires are not core to this picture; instead, what is necessary is an evolution in behaviour. This may or may not result from evolving beliefs and desires, but we need not take any position on this particular claim.

Taking these points together, the other goal of a definition of individuality is to define the agent's conditions of stabilisation over and above a particular relationship between its material components. At the same time, however, it is important that we are able to distinguish between the agent itself and those parts of its environment that it relies on for its stabilisation. Again, organisms require food and water, but the sources of these things in the environment of the agent are not themselves parts of the agent.
Businesses similarly require customers, but the customers are not constitutive of the business itself. In both cases, we must be able to distinguish between those parts of the world that belong to and constitute the agent and those parts that are external to it that are nevertheless necessary for its persistence.

The autonomous individuation account

The enactive definition of autonomy provides, as Di Paolo and Evan Thompson (2014) put it, the criteria for the self-individuation of bodies (p. 69), where a body is not 'constituted exclusively by its biochemical or physiological processes' (p. 72). Autonomy here still refers at the broadest level to self-governance (see also Barandiaran & Egbert, 2013, p. 8; Barandiaran, 2017, p. 410; Christensen & Bickhard, 2002, p. 3). Because agents necessarily demarcate their own boundaries and define their own environments, as I argued above, autonomy is vital to understanding agency. The claim being made here is that it is precisely this self-governance that generates the agent's individuality. Being an autonomous system just is what sets the agent apart from its material components and the rest of the physical world. How exactly this occurs is the point of the somewhat technical definition given below.

For a system to be operationally closed, its constitutive processes must collectively and actively produce and sustain those self-same processes (Di Paolo et al., 2017, p. 112). So, if process A sustains process B, which sustains process C, which sustains process A, then the system ABC is operationally closed. Think of the ways a plant's roots, stems, and leaves all sustain themselves and each other. Here, we can see that the parts of the system are related to the system as a whole via their production and sustenance of that self-same system, which is stabilised by this active producing and sustaining. Of course, being self-sustaining does not imply that an operationally closed system is cut off from its environment. There are a few ways in which other processes might influence the system without thereby being a part of the system. Furthermore, it is through the system's interactions with these external processes that it defines its environment, which is crucial both for a metaphysical understanding of group agents and for pragmatic reasons concerning how we might influence groups by changing their environments. First of all, some processes enable the system in question while not themselves being enabled by the system (Di Paolo & Thompson, 2014, p. 71; Di Paolo et al., 2017, p. 114). In plants, the sun acts as an enabling condition for their photosynthesis, while not being sustained by plant life itself (Di Paolo & Thompson, 2014, p. 71). Likewise, for groups we could consider the conditions of capitalism broadly (private property, private control of the means of production, the legal structures that protect and ratify these, and so on) as enabling conditions for certain corporations. Furthermore, there are processes that act as boundaries and constraints for operationally closed systems (Di Paolo et al., 2017, p. 114). These can be the same processes that enable the system in the first place. There may, however, also be boundaries that are not created by the enabling conditions for that system, as minimum wage laws and the threat of union action are against modern corporations. Note, as Di Paolo et al. point out, that these enabling and binding conditions are implied by the organisation of the system in question (Di Paolo et al., 2017, p. 114).
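To make the structural core of this criterion concrete, here is a minimal sketch of my own (not anything drawn from the enactivist literature): it models 'x sustains y' relations as directed edges and treats a candidate set of processes as operationally closed, in the simplified sense above, when every process in the set both sustains and is sustained by processes within that same set. The function name and example sets are hypothetical.

```python
# A minimal illustrative sketch, not a formal enactivist model: represent
# "x sustains y" as directed edges and check the simplified condition that
# each process in the candidate set both sustains, and is sustained by,
# processes within that same set.

def operationally_closed(processes, sustains):
    """processes: set of process names; sustains: set of (x, y) pairs, 'x sustains y'."""
    internal = {(x, y) for (x, y) in sustains if x in processes and y in processes}
    for p in processes:
        sustains_another = any(x == p for (x, _) in internal)
        sustained_by_another = any(y == p for (_, y) in internal)
        if not (sustains_another and sustained_by_another):
            return False
    return True

# The A-B-C loop from the text comes out closed:
print(operationally_closed({"A", "B", "C"}, {("A", "B"), ("B", "C"), ("C", "A")}))  # True
# The sun enables the plant without being sustained by it, so a candidate
# set that included the sun would not be operationally closed:
print(operationally_closed({"plant", "sun"}, {("sun", "plant")}))  # False
```

Note that this toy check captures only the closure structure; it says nothing about precariousness, which is what rules out trivial cases such as the crystal discussed below.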
The sun is a necessary enabling condition for the plant, as determined by the organisation of processes in the plant. We, as external observers, may or may not notice this enabling condition, but it exists regardless. In this sense, operational closure generates individuality for the agent and distinguishes it from its environment while giving us as external observers a useful tool for uncovering the structures of the agential system in a non-arbitrary manner.

Next, a word on precariousness. By precariousness is meant: 'in the absence of the enabling relations established by the operationally closed network, a process belonging to the network will stop or run down' (Di Paolo & Thompson, 2014, p. 72). So, all of the processes that are a part of a system are precarious because they are actively enabled by the other processes in that system. This concept is necessary to avoid the inclusion of trivial cases in the set of self-individuating systems. A crystal, for example, satisfies the conditions of operational closure since 'chemical interactions lead to the spontaneous growth of a clearly identifiable entity, which thereafter is maintained over time' (Di Paolo et al., 2017, p. 116). The issue is that this crystal does not have to do anything after being formed to maintain its existence. It simply persists, without its processes needing to actively maintain each other. That said, the idea of precariousness is not as vital to the account presented here as operational closure. In principle it poses no problem for a definition of agency if a crystal counts as self-individuating, since the idea that an agent must be an active entity will be taken care of by the concepts of interactional asymmetry and normativity. All agents need to be self-individuating systems, but not all self-individuating systems need to be agents, since there are other criteria that must be satisfied. Whether the crystal is a self-individuated system or not is neutral to the project of defining group agents. Nevertheless, it is reasonable to think that all agents really are precarious in the sense argued for above. Organisms must actively sustain themselves. Corporations and political parties must similarly be engaged in constant activity or else they will fail. The autonomous individuation account suggests that agents are individuals because they autonomously make themselves such via their active determination and maintenance of their own structures. Their precariousness, furthermore, produces the normative demands with which we, as biological systems, are intimately familiar. What they are and what they do, then, are self-governed.

Rovane and Tuomela

Here, I will compare the autonomous individuation account with two of the more substantial engagements with the individuality criterion, namely Rovane's (2019) and Tuomela's (2013). I will argue that each of their accounts faces issues that the autonomous individuation account manages to avoid. Rovane (2019) sums up her account of agency thus: 'wherever there is a commitment to meeting the normative requirements that define individual rationality there is an individual agent' (p. 4874). The normative requirements in question are consistency, closure, and transitivity (Rovane, 2019, p. 4873). To be consistent, an agent must resolve conflicting beliefs. Closure is achieved by accepting the implications of one's beliefs. Transitivity is the ordering of preferences.
A group agent exists just in case there is a collection of people who together meet the normative requirements that define individual rationality (Rovane, 2019, p. 4870). According to Rovane's definition of agency, the individuality criterion is satisfied by the presence of the relevant commitment. This commitment defines the boundaries and sustained existence of the group, since only those parts of the world that are teleologically oriented toward this commitment to the norms of rationality can possibly count as a part of the group, and the group persists so long as the commitment remains. So, to determine whether a particular group is a proper group, we ask whether that group has consistent beliefs that it at least attempts to follow through on and whether it prioritises potential tasks in terms of greater or lesser importance. Here, we will likely look for organisational structures that allow for the relevant processes to occur. This might take the form of a voting procedure, or there might be individuals authorised to make decisions on behalf of the group due to their (purported) expertise.

Importantly, Rovane's (2019) definition of agency is grounded in human agency, both in the sense that groups always seem to be made up of humans (p. 4870) and that groups are agents because they are relevantly like humans (p. 4877). The first assumption is not a significant issue, since in that paper Rovane is concerned with the question of whether or not group agency is a social phenomenon, and so she may argue that she is concerned with groups made up of humans in this particular context. Still, it is worth noting that non-human animals like bees and ants most likely form group agents just like humans do, and excluding such groups without justification limits the account. The second assumption, however, is likely to lead to an inflated conception of agency where agency is taken to involve some number of properties that do not belong, especially when defining non-human agents such as group agents. In Rovane's (2019) case specifically, these are mental properties, including intent (p. 4871), beliefs, attitudes, and commitment (p. 4873). Agents are not necessarily mental beings. The enactive theory of agency defines agents without needing to rely on the concepts Rovane employs, which implies that using mental concepts to define agency in general requires some justification. The normativity criterion might be the primary avenue through which mental features make their way into defining agency. Some might take the possession of norms and goals, for instance, to imply that the agent must have the capacity to reflect on these norms or to choose their goals. This, however, is mistaken. The goal of survival among biological systems is evident and is one that does not require a mind to adopt. Plants and amoebae, neither of which is often taken to have minds capable of beliefs or attitudes, actively pursue their own survival. The lack of indifference to the conditions a system finds itself in is a definitive characteristic of the living system (Canguilhem, 1991, p. 126). Hence, as anyone who has had a plant by a window will know, plants will orient themselves and their leaves toward the sun. This is just to say that mindedness and agency are, at the very least, conceptually separable and, therefore, it cannot simply be assumed that an agent has a mind or mental capacities. If it must, then this is something that needs to be argued for, not merely assumed.
For what it's worth, I find the idea, mentioned by David Spurrett (2020) on Twitter (of all places), that '[c]ognition is the control of agency' far more likely. This would help to explain the apparent differences in complexity among agents. I, as a human, can control my actions, develop my own goals, rationally reflect on the best ways to achieve those goals, and so on. A bacterium, on the other hand, will not form goals that fall outside of its biological needs, and cannot reflect on the best ways of achieving its goals, though it is still an agent, since it is an individual that acts on its environment to achieve its own normative ends.

Rovane's argument in particular appears to be an instance of what Fred Adams and Kenneth Aizawa (2009) term the coupling-constitution fallacy (p. 81). This occurs when someone moves from the observation that process X is causally connected to process Y to the view that X is constitutive of Y (Adams & Aizawa, 2009, p. 81). In Rovane's case, her focus on human agents, in whom our cognitive processes are very likely causally connected with our agency, has possibly led to the implicit view that certain cognitive processes are constitutive of agency itself. Because Rovane's definition depends on expanding an understanding of human agency to agents that are not themselves humans, we are left with two options: (1) her account implies certain attributes that require mental capacities that groups almost certainly do not have, or (2) her use of those terms is purely metaphorical.

The first option implies we should reject Rovane's view. Even if singular humans can be said to have beliefs and preferences on behalf of the group, these are not the group's beliefs or preferences. Similarly, it is not clear how 'commitment' to the standards of rationality figures as a sturdy definition of the group's individuality. Rovane (2004) talks of a commitment to a group in terms of a person's conscious choices or feelings regarding the activities of that group (p. 194). This does not address the differences between the necessary processes that make up the group itself and external processes that the group relies on. Thus, it does not allow for a demarcation between genuine members of the group, such as people performing necessary tasks, and interested parties, such as politically engaged individuals who try to convince their friends to vote for their preferred party. The autonomous individuation account, on the other hand, does address these concerns, since the demarcation of the agent's own parts from its external environment is built into the enactive definition of autonomy. It is, therefore, preferable for its greater clarity. The second option is less serious but does suggest that if we want a complete understanding of the ontology of group agents - and for the sake of more technical work - we should figure out what 'beliefs', 'preferences', and 'commitment' are metaphors for. Here we might simply substitute a more robust account of agency, such as the enactive theory. Talk of mental states might be useful if we are just trying to give non-experts a rough overview of the general ideas, since the terms will more easily communicate the approximate idea without having to explain technical terms like 'operational closure' and 'precariousness'. If we are after an accurate, operational account, however, then technical terminology should not be a barrier.

For Tuomela (2013), a group agent is a mind-dependent entity with both fictitious properties and real causal powers (p. 47).
They are partly fictitious in the sense that group agents do not really have intentional features that, Tuomela (2013) claims, depend on a biological brain, such as the capacity for reason (p. 48). On the other hand, they have objectively real causal powers insofar as the existence of a group agent for its members produces certain outcomes in virtue of those members acting as group members (Tuomela, 2013, p. 47). Given the ontological mind-dependence of group agents for Tuomela (2013), he also argues that they must be collectively constructed and collectively accepted (p. 47). He offers the example of John and Jane, who jointly intend to paint their house together, to clarify his point (Tuomela, 2013, p. 49):

Now consider the dyad, a group agent, consisting of John and Jane. This group agent is collectively constructed. Simplistically put, John and Jane form a group agent because they (and others) take them to form a group. This view… is ontologically grounded by John's and Jane's relational state of joint intention (i.e., the individual we-intentions and the mutual awareness that it is ontologically composed of) and by their joint action dispositions.

The construction that is mentioned here involves a collective acceptance that is dependent on people's imaginative capacities (Tuomela, 2013, p. 49). The dyad is a group agent precisely because John and Jane (and others) imagine and accept that they form a group agent. They hence each adopt a we-intention which, roughly put, involves intending to play one's part in 'our' action (p. 78). This, then, allows the people to act intentionally together, and hence as a group. Although Tuomela believes groups themselves cannot have intentions, the members of the group can still intend to act collectively. Membership in a group agent involves (collectively) accepting the group's ethos. The group's ethos is 'the group's central, typically action-related constitutive properties' (Tuomela, 2013, p. 26). Painting a house, for instance, is a central, action-related constitutive property of the John and Jane dyad. To be a member of the dyad, John and Jane must each accept this ethos and the various actions related to it. New members, to act as group members, must also accept the group ethos. Furthermore, if the group is faced with a choice between courses of action in relation to achieving a group goal, this choice is determined by the members of the group collectively accepting an attitude, which becomes the group's attitude (Tuomela, 2013, p. 123). Hence, on this account, a group agent accepts p as true if and only if its members collectively accept p as true for the group (Tuomela, 2013, p. 127). The members of a group collectively accept p as true if and only if they jointly have an attitude expressed by p that is of use to the group, by which he means it promotes the group agent's goals (pp. 127-128).

Finally, it is worth mentioning that, while for Tuomela (2013) the group agent ontologically depends on its members, they can be considered 'position-holders', meaning one person may leave and another can come along to take their position without this changing the identity or character of the group agent (p. 26). For instance, the John and Jane dyad is a group agent that has an ethos related to painting their house. Now let's say Jean joins John and Jane, endorsing the group ethos and we-intending to paint the house. By taking up these psychological attitudes, Jean becomes a member of the group.
If John leaves the group but Jane and Jean continue to promote the group's ethos, then the group itself will persist. Hence, the group's identity is not directly dependent on its members, although its continued existence does depend on having members who fulfil the relevant roles. Along similar lines, he maintains that exactly the same people can form multiple group agents. The groups will differ insofar as they differ in ethos and activity (Tuomela, 2013, p. 49). John and Jane can have a house painting group, a book club, and a band, all of which will be separate group agents despite containing the same singular human agents. The groups have their own identities, which are initially determined by their founders, but thereafter become their own.

Given the notions of ethos and collective acceptance, Tuomela has greatly overestimated the extent of the knowledge, power, and agreement that is necessary among group members, even those we would consider executive members in a non-egalitarian group. As Jonas (1966) argues, there is a difference between having and serving a purpose (p. 122). I may form a goal - a purpose - and then carve it up into disparate pieces to be performed by a number of other singular agents who, while knowing their own goals, know nothing of the broader context in which they operate; they are 'goal-blind' (Jonas, 1966, p. 123). Likewise, a group might demand certain actions by various means of its members who are each goal-blind but who together serve the overall purpose of the group. This demand might come in the form of felt pressures. This pressure can come from members enforcing the rules, or even from non-members who expect you to play your part given what they know about the group itself. It might come in the form of psychological pressure to perform the duty you have committed yourself to and, perhaps, from fear of failure. It might come from economic or political circumstances. Sources of coercion abound, even in our own minds. In short, there need not be anything like a collective acceptance of p in order for there to be a group acceptance of p. To push the point yet further, there need not even be members. Again, as Jonas (1966, p. 123) explains:

I can even reduce the steps to such primitive elements that I can dispense with human agents altogether. It is precisely this dissociability of purpose and execution which permits us to delegate the latter so extensively and distributively to others, to whole chains of subagents, and even to machines.

The kinds of structures with which we are concerned when we think about group agents are, in some cases, already made up substantially of automata and automatic processes - machines to make parts and machines to fix the machines that make the parts; software running websites, placing ads, and collecting data; self-serve checkouts that turn customers into their own cashiers and cashiers into dual-role IT and security personnel. And yet, for Tuomela, these parts are invisible. This is understandable if we are concerned with human agents and their interactions and roles in groups. It is not, however, a viable position to hold in light of current and emerging group agents and their increasingly automated functions. To offer an account of group agency that misses these functions is to describe the human body without skin or fat or bones. In doing so, we run the risk of developing an almost instantly outdated concept of group agency.
The autonomous individuation account, in contrast, better captures the automated processes of an agent within the bounds of that agent precisely because it is agnostic about the particular material constitution of the agent. Tuomela does not ask what it is to be individuated, but rather moves directly to an account of how people can come together to form a supposedly individual group agent. The enactivists focus instead on the more fundamental question of individuality, remaining neutral on material constitution. It is precisely this methodological difference and the agnosticism of the autonomous individuation account toward material constitution that allows for its accurate employment in the contemporary world. Many groups are at least partly automated systems and are extremely large, such that it is extremely unlikely that all (or even many, in some cases) of the group's members can come close to collectively accepting the group's ethos. The autonomous individuation account allows us to accurately demarcate the boundaries of autonomous groups, while Tuomela's account would be better framed as concerning the interactions between groups and their human members. If we adopt this perspective, then we need not reject his view, but only reconfigure our understanding of it. In any case, as a definition of individuality, the autonomous individuation account fares better.

Identifying group agents

Examples of supposed group agents abound. Here, I will consider some of the kinds of groups that other philosophers of group agency have suggested, both to show the differences in views and to serve as a proof of concept for the implementation of the autonomous account of individuality. The differences discussed here are relevant since they matter for thinking about moral and legal responsibility, power relations and structures, and any other practical matters pertaining to autonomous group agents. If a friend group commits a crime, we can work out how culpable each individual was and which actions they took and deal with them each accordingly. If a corporation commits a crime, we might also punish some number of key individuals, but the group itself must also be dealt with in some way. Responsible individuals in the group agent case might also get off a little more lightly because of coercive forces within the structure of the group. It might even be that no individuals at all suffer the full brunt of legal force. In a mere collective, if any individuals were coerced, they must have been coerced by another individual, and hence there will always be at least one human being that we can hold fully responsible. This is, of course, just an outline of how the differences in what we consider a group agent matter. How exactly group agents should be held responsible, how their members should be held responsible, and so on, all warrant their own discussions. It is worth pointing out here, however, so that the weight of the following discussion is more obvious.

Recall first Tuomela's (2013) dyad consisting of John and Jane, who jointly intend to paint their house together. Tuomela (2013) claims that, simply put, 'John and Jane form a group because they (and others) take them to form a group' (p. 49). I have argued above against Tuomela's account of group agency. Here, I intend to discuss the example to highlight the differences between the autonomous individuation account and Tuomela's account. In Tuomela's example, the dyad's collective activity is to paint a house.
This activity is established by John's and Jane's we-intentions to paint the house together. For the dyad to become a group agent on the autonomous individuation account, their forming this agreement together would have to render the dyad itself an autonomous system, now in a relationship of needful freedom from John and Jane. But the group is neither operationally closed nor is it precarious. If John, due to an injury, say, can no longer uphold his end of the bargain, Jane is still capable of continuing on with the painting of the house. Her activity is not sustained by John's activity; hence the dyad is not operationally closed. Since her activity does not depend on John's, the dyad cannot be precarious. This does not mean there are no dyadic group agents. It just means there are further conditions that need to be met than the couple simply agreeing that they form a group.

If John and Jane formed a company that paints houses, for instance, we might have a group agent on our hands. If they form a company, the company itself generates an impetus toward profit, such that John and Jane as people are not even strictly necessary. Of course, the formation of the company and its getting off the ground requires their intentional activity, but once it is off the ground it certainly might qualify for group agency should the conditions of operational closure and precariousness be met. We could easily say that these conditions have been met should it be possible for their company to persist without John and Jane - if they have hired other individuals who will continue the activity of the house painting business even if John and Jane are not involved. It might also satisfy these requirements if John and Jane each take up different essential roles that rely on each other. For instance, if John takes up the tasks of finding jobs, giving quotes, sorting out where and when houses need to be painted, and so on, and Jane does the actual painting, then we have, as Tuomela would put it, John and Jane as position-holders in a group agent that, for now, happens to be a dyad. If John stops booking jobs, Jane will have nothing to paint, the group will make no money, and they will go out of business. If Jane stops painting, then even if John books jobs, they will not be fulfilled, and the same will happen. This, however, is only the case insofar as neither bothers with hiring a replacement. Since they are both only position-holders, fulfilling particular functions that the group agent demands given the kind of group agent it is (namely, a house painting business), they are both also replaceable. Once again, the issue with Tuomela's account is his emphasis on the singular agents who make up the group, rather than on the group agent itself.

In light of this discussion, it is clear that there is a rather thin line between singular agency in mere collectives and group agency. What is less obvious, but is equally worth remarking on, is the minor difference between group agency and singular agency without a collective. In the case where John quits and Jane does not hire someone new, but instead takes on John's roles for herself on top of her painting duties, the group agent disintegrates and we are left with a singular agent, Jane, who is monetising certain abilities she has. What is special about group agents is their physical discontinuity, where their functions are performed by separate entities who are not tied together spatially or temporally.
When the functions that might usually be performed by a group agent are instead performed by a single person, there is no longer any point in thinking about the business in terms of group agency, since it loses all the distinctive features of groups, both ontologically and from a pragmatic perspective; the results of thinking about responsibility and group agency are irrelevant, since there is only one person acting and making decisions.

Next is an example that List & Pettit (2011) take not to constitute a group agent. In their scenario, there is a swimmer struggling in the water at the beach, and a number of people notice the swimmer's plight. These people together form a chain so that a lifebelt can be thrown to the swimmer without putting anyone else in danger (List & Pettit, 2011, p. 34). For List & Pettit (2011), the group of rescuers fails to form a group agent because it does not have 'a single system of belief and desire' (p. 34). If we take 'belief and desire' to refer to particular characteristic kinds of activity, then List and Pettit are mistaken that the chain of rescuers does not have a system of belief and desire. The chain of rescuers is normatively oriented toward saving the drowning swimmer, as is evident from the chain's activity and its reason for being formed. If they instead mean that the individuals who make up the chain do not have the relevant beliefs and desires together, then the claim is not relevant when thinking about group agency. I have already argued this point in relation to Tuomela - a group agent may easily be made up entirely or almost entirely of goal-blind singular agents. So long as they perform the relevant functions on behalf of the group, the group itself will persist. Finally, if they mean that the group itself should have the mental properties of belief and desire, then this is again mistaken. As argued above in the discussion on Rovane, it is a mistake to assume that agency and mindedness go together, and hence the idea that an entity should be excluded from the possibility of agency because it lacks such minded qualities as belief and desire is incorrect.

Furthermore, we should recall that for List & Pettit (2011) groups, to count as agents, must persist in some sense. They argue that the group cannot be an agent because, due to its failure to form a single system of belief and desire, we cannot predict what it will do in the future. Group agents must indeed persist - they must be stabilised for a certain period of time. But just how long they must persist is not relevant. There is, then, a rather arbitrary understanding of persistence implied by List and Pettit's argument. Different groups - different agents in general - will have different characteristic timescales. The group of individuals forming the chain clearly forms an operationally closed and precarious system. The chain relies on all of its members for it to be formed and sustained. If the strength of someone in the middle of the chain fails, then the chain breaks and half of the group members will be in danger. Likewise, if someone refuses to participate, the chain is shorter and the job harder - though in this case the group does not necessarily collapse but is instead different in character, since the character of this group is given largely by the length of the chain and, hence, its ease in saving the struggling swimmer. The fact that after the swimmer has been saved the various participants will go their different ways, thereby destroying the group, does not matter.
The group exists as an individual for that particular task. Taking both of the examples just discussed together, the autonomous individuation account likely implies that there are fewer group agents than Tuomela's account does, but more than List and Pettit's. List and Pettit's account, like the autonomous individuation account, denies that John and Jane form a group agent. On the other hand, Tuomela's account coincides with the autonomous individuation account in accepting that the chain of rescuers is a group agent. However, though there are conclusions in common between the autonomous individuation account and the accounts of other philosophers, they come to their conclusions for the wrong reasons. I have argued here for a robust, operational account of group individuality that is able to distinguish between mere collectives and proper groups in a non-arbitrary manner. Agency is not dependent on an arbitrary amount of temporal persistence, nor is it simply based on accepting that something is an agent. Hence, although there is some overlap between the account endorsed here and its philosophical competitors, it is ultimately the differences in methodology that matter.

Finally, I will consider one case on which List & Pettit (2011) and I agree: the contemporary university (p. 194). Universities, being much larger than Tuomela's dyad or List and Pettit's human chain, are harder to provide a full account of in terms of their autonomous individuation, though a basic overview should be sufficient to get the point across. Teaching and research are the most obvious processes of most universities. A university, then, relies on its academic staff to fulfil these processes in order that its functions be performed. The academic staff, however, cannot properly do their jobs - certainly not as members of the university they belong to - without administrative staff, management, and students to teach. Likewise, each of these processes relies on each of the others. Students need to be able to enrol in courses, staff need to get paid, plans in case of emergency need to be formulated and acted upon where necessary, among a myriad of other processes dutifully performed by the people who make up the university in question. A university, then, constitutes an operationally closed and precarious system. All of its processes rely on and maintain other processes in that same system, and should one of the processes stop, the whole thing will quickly begin to run down. Therefore, universities are autonomously individuated systems. This conclusion should be uncontroversial, and that is precisely the point. The autonomous individuation account differs from alternative accounts when considering edge cases but, as I have just shown, quite easily handles obvious cases. What makes all of the above examples group agents (or not, in the case of John and Jane) is simply that there is some individual system into which people (and machines) can be fitted to perform the vital functions of the system. The notion of group agency and group individuality defended here does not depend on any particular legal or other social conditions being met. Human group agents and beehives likely both have different social conditions that need to be met for a proper group to form. These social conditions will be determined by the kinds of beings that make up the group. This will be relevant in analyses of particular groups but does not change the concept of group agency itself.
Legal requirements similarly do not impact on the concept of group agency, though they do form a part of the environment for legally operated groups. That there are illegal groups, such as drug cartels, is indicative of the fact that these legal requirements do not impact on the ontological possibility of genuine group formation.

Final remarks

Defining and understanding the individuality of group agents is, I have argued, an often-overlooked issue. This is despite its being of fundamental importance to an accurate assessment of the world and to political practice. Here, I have shown that for a group to be a group agent, it must be an autonomous group in the sense that it is operationally closed and precarious. Its operational closure is dependent on its co-constitutive processes and their performance. That an agent is precarious expresses the fact that agents are active, adaptive systems. Of course, even a complete definition of the individuality of group agents does not amount to a complete theory of their spatiotemporal relations, or their interactions with one another and with their members, and so on. These further discussions are necessary for understanding fully how group agents scaffold or constrain our interactions with each other, our politics, even our lives. To do all of this, however, it is crucial that we have an accurate picture of the systems that we are trying to understand and, perhaps, address.
12,346.6
2022-06-30T00:00:00.000
[ "Philosophy" ]
Evaporated MAPbI3 Perovskite Planar Solar Cells with Different Annealing Temperature

The power conversion efficiency (PCE) of an Ag/spiro-OMeTAD/CH3NH3PbI3 (MAPbI3)/PCBM/mesoporous TiO2/compact TiO2/FTO planar solar cell with different annealing temperatures of the PbI2 and MAPbI3 films was investigated in this study. The morphology control of a MAPbI3 thin film plays a key role in high-efficiency perovskite solar cells. The PbI2 films were prepared by using thermal vacuum evaporation technology, and the MAPbI3 perovskite films were synthesized with a two-step synthesis. The X-ray spectra and surface morphologies of the PbI2 and MAPbI3 films were examined at annealing temperatures of 80, 100, 120, and 140 °C for 10 min. The performance of the perovskite planar solar cell at an annealing temperature of 100 °C for 10 min was demonstrated. The power conversion efficiency (PCE) was about 8.66%, the open-circuit voltage (Voc) was 0.965 V, the short-circuit current (Jsc) was 13.6 mA/cm2, and the fill factor (FF) was 0.66, obtained by scanning the current density-voltage (J-V) curve.

Recently, methods for improving the power conversion efficiency of perovskite solar cells, motivated by the impacts of defects, have considered the absorption layer and adjacent interfaces of the perovskite [19], the compositional elements of the perovskite [20], the suppression of nonradiative recombination at the perovskite surface and grain boundaries (GBs) [21], and the HTL [22,23]. One major challenge for perovskite solar cells was to pattern periodic nanostructures on large-area thin-film solar cells. Electron beam lithography was used to fabricate nanostructures with well-controlled size, shape, and spacing, but it was impractical owing to its low sample throughput and high cost. Vacuum and solution processes were the first two main techniques used to prepare perovskite films. Although PSCs prepared by the solution method have made great achievements, the current process of preparing perovskite films through the solution method requires the use of a large amount of organic solvents, such as chlorobenzene (CB), dimethylformamide (DMF), and dimethyl sulfoxide (DMSO). The post-treatment of these organic solvents will be a major problem that must be faced in the industrialization process. In the dual-source vapor co-deposition process, PbX2 and CH3NH3I were used as gas sources to obtain dense and high-quality films [24-26]. However, the experimental conditions of this method were harsh and required high-energy-consumption vacuum conditions, and the experimental operation process was relatively uncontrollable. Compared with the solution method, vacuum vapor deposition technology does not require the use of organic solvents. Its advantages include high surface coverage, low surface roughness, good compatibility with large-area equipment, and precise control of film thickness.
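As a quick arithmetic cross-check (my own, not part of the paper), the photovoltaic parameters quoted in the abstract are mutually consistent, since PCE = Voc × Jsc × FF / Pin under 1 sun (100 mW/cm2) illumination:

```python
# Consistency check of the reported device parameters (values from the
# abstract; the formula PCE = Voc * Jsc * FF / Pin is standard).
V_oc = 0.965   # open-circuit voltage, V
J_sc = 13.6    # short-circuit current density, mA/cm^2
FF   = 0.66    # fill factor, dimensionless
P_in = 100.0   # 1 sun AM 1.5 incident power density, mW/cm^2

pce = V_oc * J_sc * FF / P_in * 100.0  # percent
print(f"PCE = {pce:.2f}%")  # -> PCE = 8.66%
```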
In this paper, a universal and straightforward approach to examining the interface physics of perovskite thin-film solar cells is proposed, which uses a two-step spin-coating method to fabricate perovskite films. PbI2 films were deposited on an FTO glass substrate by thermal evaporation, and they were annealed at temperatures of 80, 100, 120, and 140 °C for 10 min. Then a methylammonium iodide (MAI) solution containing 50 mg MAI and 1 mL isopropanol (IPA) was dropped onto the PbI2/FTO glass substrate in vacuum [27,28]. The absorbance spectra, transmittance spectra, SEM images, and XRD patterns of the PbI2 and perovskite films on the flat FTO glass substrates were investigated at annealing temperatures of 80, 100, 120, and 140 °C for 10 min, respectively. The 2,2′,7,7′-tetrakis(N,N-di-p-methoxyphenylamine)-9,9′-spirobifluorene (spiro-OMeTAD) was used as the HTL, and [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) was used as the ETL. A solar cell with an Ag/spiro-OMeTAD/MAPbI3/PCBM/mesoporous TiO2/compact TiO2/FTO glass structure was fabricated, and its performance is reported, which demonstrated that the efficiency improvement for the cell with different MAPbI3 GB films was induced by light trapping.

Materials and Methods

Given the above research background, this paper proposes a solar cell with an organic perovskite (MAPbI3) active layer made by using spin-coating and thermal vacuum evaporation technology. The cell structure and energy band diagram for fabricating the MAPbI3 PSC can be found in Figure 1, where the inset shows a schematic illustration of the two-step perovskite MAPbI3 preparation process. First, the FTO glass substrate was cleaned by an ultrasonic shaker and ultraviolet (UV)-ozone light for 15 min, respectively. Then a PSC was fabricated by lithography technology. The PSC structure size was 5 × 2 mm. PSCs were fabricated with the typical configuration of an Ag/HTM/MAPbI3/ETL/FTO structure. The mesoporous TiO2 film plays the role of a scaffold to support the perovskite and the electron collector, and its thickness, porosity, and particle size would greatly influence the device's performance. The experiment procedure and measurements are described in detail as follows.

Fabrication of Mesoporous TiO2/Compact TiO2/FTO Structure (TiO2/FTO)

First, the FTO glass substrate was placed in a beaker containing acetone, alcohol, and IPA solutions and cleaned using ultrasonic agitation and UV-ozone light for 15 min, respectively. The etched FTO glass substrate size was 1.5 × 1.5 cm. The FTO electrode was patterned by lithography technology, and it was covered with high-temperature-resistant tape. To prepare a compact TiO2 thin film, the precursor solution was composed of a prediluted titanium diisopropoxide bis(acetylacetonate) solution in absolute ethanol (1:9 weight ratio). A 40 µL amount of the TiO2 precursor solution was spin-coated onto the patterned FTO substrate at 3000 rpm for 30 s. The compact TiO2 thin film of 50 nm thickness was fabricated and annealed at 500 °C in the atmosphere for 30 min. A milk-tea-colored porous TiO2 solution was prepared as the mesoporous TiO2 precursor by mixing titanium dioxide nanoparticle slurry (Ti-nanoxide T/SP) and absolute ethanol at a 1:4 weight ratio at room temperature for 12 h. Then 40 µL of the porous TiO2 solution was spin-coated onto the compact TiO2/FTO structure at 3000 rpm for 30 s. The obtained 200 nm mesoporous TiO2 layer was then annealed at 500 °C for 30 min in ambient air after the heat-resistant tape was removed.
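Both TiO2 layers follow the same coat-and-fire pattern, so the recipe can be summarised compactly. The sketch below is only an illustrative restatement of the parameters given above; the dictionary keys and structure are my own, not from the paper:

```python
# Illustrative summary of the TiO2 scaffold recipe described above;
# parameter values are taken from the text, the encoding is hypothetical.
tio2_layers = [
    {"layer": "compact TiO2", "thickness_nm": 50,
     "precursor": "Ti diisopropoxide bis(acetylacetonate) : ethanol, 1:9 wt",
     "spin": {"volume_uL": 40, "rpm": 3000, "time_s": 30},
     "anneal": {"temp_C": 500, "time_min": 30}},
    {"layer": "mesoporous TiO2", "thickness_nm": 200,
     "precursor": "Ti-nanoxide T/SP : ethanol, 1:4 wt, stirred 12 h",
     "spin": {"volume_uL": 40, "rpm": 3000, "time_s": 30},
     "anneal": {"temp_C": 500, "time_min": 30}},
]

for step in tio2_layers:
    print(f'{step["layer"]} ({step["thickness_nm"]} nm): '
          f'spin {step["spin"]["rpm"]} rpm / {step["spin"]["time_s"]} s, '
          f'anneal {step["anneal"]["temp_C"]} C / {step["anneal"]["time_min"]} min')
```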
Fabrication of PCBM/TiO2/FTO (ETL)

A 2 wt% PCBM solution was prepared at an ambient temperature of 25 °C as the ETL precursor: 20 mg of PCBM was mixed in 1 mL of chlorobenzene (CB) and stirred with a magnetic stirrer for more than 2 h in a nitrogen glove box. A 50 µL amount of the precursor solution was spin-coated onto the mesoporous TiO2/compact TiO2/FTO structure at 2000 rpm for 40 s, and the PCBM film was stored in a nitrogen glove box for 40 min at room temperature.

Fabrication of MAPbI3/PCBM/TiO2/FTO (Perovskite)

After the PCBM ETL was formed, a metal mask was placed over the PCBM/TiO2/FTO structure. Then a 130 nm thick PbI2 film was thermally evaporated at 5 × 10^-6 Torr, and the evaporation rate was about 1.2-1.8 nm/s. Four samples were taken out after the chamber was cooled for 10 min and annealed at temperatures of 80, 100, 120, and 140 °C for 15 min, respectively. An amount of 50 mg of methylammonium iodide (MAI) and 1 mL of IPA solvent were mixed as the perovskite precursor by stirring with a magnetic stirrer. An amount of 40 µL of the MAI precursor solution was spin-coated onto the PbI2/PCBM/TiO2/FTO structure. During this process, the solution was first dripped at 0 rpm for 10 s so that it coated the sample evenly. In the second stage, the perovskite precursor solution was spin-coated at 2000 rpm for 40 s. Then, the sample was annealed at 100 °C and baked for 10 min. The entire perovskite film was prepared in a nitrogen glove box. The nitrogen glove box needed to be controlled at oxygen and moisture levels below 0.1 ppm owing to the materials' sensitivity to water and oxygen. This two-step film-forming method was conducive to the formation of a better-textured 250 nm thick MAPbI3 film.

Fabrication of Ag/Spiro-OMeTAD/MAPbI3/PCBM/TiO2/FTO (PSC)

After an Ag electrode pattern was defined on the spiro-OMeTAD/MAPbI3/PCBM/TiO2/FTO structure by using a metal mask, the sample was taken from the glove box and sent to the thermal evaporation equipment. The vacuum environment was 4.8 × 10^-6 Torr, and the evaporation rate was 2.5-3.0 nm/s. Finally, a 100 nm thick Ag electrode was deposited on top of the spiro-OMeTAD/MAPbI3/PCBM/TiO2/FTO structure. The PSC device was constructed by using a MAPbI3 active-layer film through the two-step film-forming technology and thermal evaporation technology.

Performance Measurement

The field-emission scanning electron microscope (FE-SEM) images of the top and side of the MAPbI3/TiO2/FTO and spiro-OMeTAD/MAPbI3/PCBM/TiO2/FTO structures at annealing temperatures of 80, 100, 120, and 140 °C were observed using FE-SEM (ZEISS Sigma, ZEISS, Munich, Germany), as illustrated in Figures 2 and 3, respectively.
The X-ray diffraction (XRD) spectra of the MAPbI3/TiO2/FTO and spiro-OMeTAD/MAPbI3/PCBM/TiO2/FTO structures at annealing temperatures of 80, 100, 120, and 140 °C were measured using an X-ray diffractometer (X'Pert PRO MRD, PANalytical, Almelo, the Netherlands) for 2θ from 10° to 60°, as illustrated in Figure 4. The absorbance and transmittance spectra of MAPbI3 at annealing temperatures of 80, 100, 120, and 140 °C were measured using a UV-VIS/NIR spectrophotometer (UH-4150, Hitachi, Tokyo, Japan) with a wavelength ranging from 400 to 1000 nm, as illustrated in Figure 5. The photoluminescence (PL) spectra of MAPbI3 at annealing temperatures of 80, 100, 120, and 140 °C were measured using a fluorescence spectrophotometer (F-7000, Hitachi, Tokyo, Japan) with a wavelength ranging from 400 to 1000 nm, as illustrated in Figure 6. The current density-voltage (J-V) curves, PCE, fill factor (FF), short-circuit current (Jsc), open-circuit voltage (Voc), and external quantum efficiency (EQE) performances of the PSCs at annealing temperatures of 80, 100, 120, and 140 °C are displayed in Figure 7. The J-V curves of the devices were recorded using a Keithley 2420 source meter and a solar simulator (MFS-PV-Basic, Hong-Ming Technology Co., Ltd., New Taipei, Taiwan) producing 1 sun AM 1.5 (100 mW/cm2) sunlight. EQE was measured utilizing a spectral response measurement system (LSQE-R, LiveStrong Optoelectronics Co., Ltd., Kaohsiung, Taiwan).
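For readers unfamiliar with how the parameters reported in the next section are obtained from such a sweep, the sketch below shows one common way to extract Voc, Jsc, FF, and PCE from a forward J-V scan. The diode-shaped test curve and all numerical values in it are made up for illustration; this is not the authors' analysis code.

```python
# A minimal sketch of J-V parameter extraction; the synthetic curve below
# is purely illustrative, not measured data.
import numpy as np

def jv_parameters(v, j, p_in=100.0):
    """v in V (ascending), j in mA/cm^2 (photocurrent positive), p_in in mW/cm^2."""
    j_sc = float(np.interp(0.0, v, j))              # current density at V = 0
    v_oc = float(np.interp(0.0, j[::-1], v[::-1]))  # voltage where J crosses zero
    p_max = float(np.max(v * j))                    # maximum power density, mW/cm^2
    ff = p_max / (v_oc * j_sc)                      # fill factor
    pce = p_max / p_in * 100.0                      # efficiency in percent
    return v_oc, j_sc, ff, pce

# Hypothetical diode-like sweep for demonstration:
v = np.linspace(0.0, 1.0, 201)
j = 13.6 * (1.0 - (np.exp(v / 0.08) - 1.0) / (np.exp(0.965 / 0.08) - 1.0))
v_oc, j_sc, ff, pce = jv_parameters(v, j)
print(f"Voc = {v_oc:.3f} V, Jsc = {j_sc:.1f} mA/cm^2, FF = {ff:.2f}, PCE = {pce:.2f}%")
```

In practice one would also compare forward and reverse scans to check for hysteresis, but the arithmetic for each sweep is the same.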
Results and Discussion

FE-SEM images of the MAPbI3/TiO2/FTO and spiro-OMeTAD/MAPbI3/PCBM/TiO2/FTO structures are examined in Figures 2 and 3. Figure 2a-h displays the surface morphologies of the MAPbI3 film.
The PbI2 crystal grains on the TiO2/FTO structure at annealing temperatures of 80, 100, 120, and 140 °C showed obviously different grain sizes, and some of them aggregated into large clusters. A more uniform perovskite film could be observed as the temperature increased, owing to the fact that the PbI2 crystal lattice changed with the annealing temperature, making it easier for the perovskite to form. This phenomenon was attributed to the weak Ti-I-Pb bonds facilitating the interfacial accommodation of moving iodine ions [23,29-31]. For MAPbI3 films annealed at 80 and 140 °C, the resulting grain sizes were slightly reduced, the gaps between the perovskite grains were more pronounced, and pinholes became discernible. Relatively bright grains therefore appeared at the GBs, which might be residues caused by incomplete reaction or by decomposition at a high annealing temperature. In contrast, the MAPbI3 film annealed at 100 °C exhibited uniform, densely packed grains with almost no pinholes. The gaps between the grains of the MAPbI3 film annealed at 120 °C were again more obvious than those of the film annealed at 100 °C. This was consistent with the corresponding XRD patterns discussed later. Figure 3a,b displays SEM images of the cross section of the MAPbI3/TiO2/FTO and spiro-OMeTAD/MAPbI3/PCBM/TiO2/FTO structures. A well-formed, 250 nm thick MAPbI3 film could be observed on the obvious TiO2 porous layer, and the PCBM spin-coated on the porous layer formed a flat surface, as illustrated in Figure 3b. The PCBM ETL played an important role in the decay of the photoinduced conductivity in MAPbI3/PCBM, which occurred on a time scale of hundreds of picoseconds to several nanoseconds, due to electron injection into PCBM and electron-hole recombination at the interface occurring at similar rates [32]. The measured XRD spectrum of the PbI2/TiO2/FTO structure at annealing temperatures of 80, 100, 120, and 140 °C is illustrated in Figure 4a. The intensity of the PbI2 (001) plane at 12.6° decreased with a temperature increase [29,33,34], owing to the obviously decreased grain sizes and the weak Ti-I-Pb bonds facilitating interfacial accommodation of moving iodine ions [23,29-31]. Figure 4b presents the XRD spectra of MAPbI3 films at annealing temperatures of 80, 100, 120, and 140 °C. The MAPbI3 film exhibited a main peak at 14.38°, characteristic of the (110) plane of the tetragonal crystal structure, with other peaks corresponding to the (112), (211), (202), (220), (310), and (224) planes [33,34]. These strong diffraction peaks of the tetragonal phase show that the thin film was essentially converted to perovskite MAPbI3 [29,30]. However, we noticed the presence of PbI2 diffraction peaks in the sample annealed at 80 °C, indicating that the starting precursors (MAI and PbI2) were not completely converted. In addition, the XRD intensity ratio between the (110) plane of perovskite and the (001) plane of PbI2 can be used to confirm whether the conversion was complete. A maximum ratio (24.15) was obtained at an annealing temperature of 100 °C, and the (001) peak of PbI2 was obviously weaker, which indicates that the MAI conversion was relatively complete and the PCE of the PSC was more stable.
On the other hand, the sample treated at an annealing temperature of 120 °C had the second best XRD intensity ratio (23.04). The MAPbI3 annealed at 140 °C showed a slight phase transition and residual PbI2 in the perovskite film, and there were unidentified peaks at 18.2° and 18.7° and so forth (indicated by black diamonds in Figure 4b). These unidentified peaks matched neither the pure PbI2 nor the MAI tetragonal phase and likely belong to a not-well-described intermediate phase. In addition, the intensity of the perovskite diffraction peak of MAPbI3 annealed at 140 °C was weakened, and the intensity of the (001) characteristic diffraction peak of PbI2 was significantly increased. These results stemmed from the decomposition of the perovskite material at a high annealing temperature, resulting in incomplete conversion of PbI2 and MAI, which seriously affected the PCE of PSCs [35-38]. The average crystallite size of the (110) plane was derived from the Scherrer formula, and the average crystallite sizes at annealing temperatures of 80, 100, 120, and 140 °C were 43.32, 48.74, 48.12, and 31.44 nm, respectively. A maximum size of 48.74 nm was achieved at an annealing temperature of 100 °C, indicating that the optimum annealing temperature was 100 °C. Figure 5a,b shows the absorbance and transmittance spectra of the MAPbI3 perovskite film annealed at 80, 100, 120, and 140 °C. The PL spectra of the MAPbI3/TiO2/FTO structure at annealing temperatures of 80, 100, 120, and 140 °C were measured over a wavelength range of 400 to 1000 nm, as illustrated in Figure 6. A broad absorption spectrum was obtained, corresponding to the energy gap seen in the PL diagram, and we found that the absorption intensity at an annealing temperature of 100 °C was the strongest, consistent with the results of the XRD and PL analyses in Figures 4 and 6. The perovskite film annealed at 100 °C exhibited a small full width at half maximum (FWHM) of about 75.1 nm at the PL peak, which indicates the superior crystallinity of MAPbI3, and the absorption strength was relatively high at this peak. On the other hand, we noticed that the PL peak of the perovskite film annealed at 140 °C was red-shifted, which may be related to the reaction of PbI2 and MAI. It is known from the XRD analysis that under high-temperature annealing, the MAPbI3 film decomposes into a MAI and PbI2 double phase [39]. Finally, the J-V characteristic curves of PSCs were measured by forward scan, as illustrated in Figure 7a, and the performance characteristics of the PSCs are illustrated in Figure 7b. It can be concluded that the degradation of device performance depends on the overall degradation of Voc, Jsc, and FF. However, it can be noted that when the annealing temperature of the MAPbI3 perovskite film prepared with the evaporated PbI2 film was 100 °C, the PSC showed a significantly improved photovoltaic performance. The Jsc of the device prepared by annealing PbI2 at 100 °C was higher than that of the device prepared at 140 °C, possibly because the purity of the perovskite film prepared by annealing PbI2 at 100 °C was better, so it had higher absorbance, leading to a higher photocurrent. The smooth perovskite film formed on PCBM could yield a smaller charge transfer resistance; such a film can not only increase the contact area between the perovskite and spiro-OMeTAD but also improve the PCE of PSCs.
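As a concrete illustration of the Scherrer estimate quoted above, the crystallite size can be reproduced with a few lines of R. The shape factor K = 0.9 and the Cu Kα wavelength (1.5406 Å) are conventional assumptions, and the FWHM used in the example call is illustrative rather than a value reported in the paper:

scherrer_nm <- function(two_theta_deg, fwhm_deg, K = 0.9, lambda_A = 1.5406) {
  # Scherrer equation: D = K * lambda / (beta * cos(theta))
  theta <- (two_theta_deg / 2) * pi / 180    # Bragg angle in radians
  beta  <- fwhm_deg * pi / 180               # peak FWHM in radians
  (K * lambda_A / (beta * cos(theta))) / 10  # crystallite size in nm
}
scherrer_nm(14.38, 0.17)  # ~47 nm for an assumed FWHM of 0.17 degrees at the (110) peak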
It was demonstrated that for the best-performing PSC, annealed at 100 °C for 10 min, the PCE was about 8.66%, Voc was 0.965 V, Jsc was 13.6 mA/cm2, and FF was 0.66. The Voc of the PSC annealed at 140 °C was reduced to 0.926 V, which may be due to residual unconverted PbI2 in the perovskite film. The PbI2 phase present at the perovskite grain boundaries could passivate defects and reduce charge recombination there, resulting in an increase in the rate of charge extraction. Figure 7c shows the EQE spectra of the PSCs at the various annealing temperatures. The integrated photocurrent densities from the EQEs were 9.29, 13.62, 11.14, and 8.36 mA/cm2, which are consistent with the corresponding J-V measurements in Figure 7a.

Conclusions

In conclusion, a PSC with an organic perovskite active layer fabricated by spin-coating and thermal vacuum evaporation was discussed and optimized over four annealing temperatures. From the SEM surface morphology, it could be observed that the morphology and size of the perovskite grains obviously depend on the annealing temperature. High temperatures form large perovskite grains with random orientation, with square crystals distributed sparsely and unevenly over the surface, whereas the perovskite film prepared close to 100 °C was dense and flat. The XRD analysis showed that the PbI2 (001) peak at 12.6° changed with temperature, while the perovskite (110) peak at 14.38° displayed a strong X-ray diffraction intensity and a tetragonal crystal structure at an annealing temperature of 100 °C. Therefore, a perovskite film of good quality could be obtained by tuning the annealing temperature of PbI2. After optimization, the champion PSC device annealed at 100 °C achieved a PCE of 8.66%, Jsc of 13.6 mA/cm2, Voc of 0.965 V, and FF of 0.66. This work demonstrates a potential application of spin-coated and thermally vacuum-evaporated MAPbI3 films in planar organic-inorganic hybrid perovskite solar cells.
6,140.4
2021-04-12T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Müller–Zhang truncation for general linear constraints with first or second order potential

Let $\mathcal{B}$ be a homogeneous differential operator of order $l=1$ or $l=2$. We show that a sequence of functions of the form $(\mathcal{B}u_j)_j$ converging in the $L^1$-sense to a compact, convex set $K$ can be modified into a sequence converging uniformly to this set, provided that the derivatives of order $l$ are uniformly bounded. We prove versions of our result on the whole space, on an open domain, and for $K$ varying uniformly continuously on an open, bounded domain. This is a conditional generalization of a theorem proved by S. Müller for sequences of gradients. Moreover, a potential of order two for the linearized isentropic Euler system is constructed.

Introduction

In the calculus of variations, particularly in the context of quasiconvexity and gradient Young measures, understanding sequences of gradients is crucial. If we are given such a sequence of gradients with a certain limit behavior, one might ask if there exists another sequence of this form with better regularity or convergence properties but exhibiting the same limit behavior. In this context, Zhang showed that a sequence of weakly differentiable functions $(u_j)_j$ on $\mathbb{R}^d$ which converges to a ball $B_R(0)$ for $R>0$ in the sense that

$\int_{\mathbb{R}^d} \operatorname{dist}(Du_j, B_R(0))\,\mathrm{d}x \to 0 \qquad (1.1)$

can be modified into a sequence $(w_j)_j$ of uniformly Lipschitz functions. This modification takes place on sets whose measure decreases with increasing $j$, such that one obtains $|\{u_j \neq w_j\}| \to 0$, cf. Theorem 1 in [7], which is a variant of Lemma 3.1 in [10]. Zhang's result has been developed further by Müller [7]. In his paper Müller presents a method to regularize a sequence of $W^{1,1}$-functions satisfying (1.1), where the ball $B_R(0)$ is replaced by any compact, convex set $K$, in such a way that the convergence of the new sequence $(w_j)_j$ is uniform, i.e. $\|\operatorname{dist}(Dw_j, K)\|_{L^\infty} \to 0$, and also the sets of modification have measure tending to zero, hence $|\{u_j \neq w_j\}| \to 0$.

Communicated by J. M. Ball. Correspondence to: Dennis Gallenmüller<EMAIL_ADDRESS>Institut für Angewandte Analysis, Universität Ulm, Helmholtzstraße 18, 89081 Ulm, Germany.
Our aim in the present work is to generalize Müller's method to $L^1$-convergent sequences of functions of the form $(\mathcal{B}u_j)_j$, where $\mathcal{B}$ is a linear homogeneous differential operator of order $l=1$ or $l=2$. For that, we will have to additionally assume that $\|D^l u_j\|_{L^\infty}$ is uniformly bounded in order to obtain a uniformly convergent sequence, cf. Theorem 2.1 below. The motivation for the present work was a better understanding of limit processes under general linear constraints. Especially improving the properties of generating sequences for Young measures was an important aspect, cf. Corollary 2.4. There are prominent examples of linear constraints with potentials of first or second order to which our results apply. For example, the symmetric gradient is a first order potential for a second order linear constraint. Furthermore, the linearized compressible Euler equations come with a second order potential operator. These two examples will be discussed in Sect. 4. For proving our result we roughly follow the strategy of Müller [7], but we need to introduce a different method of regularization, which is crucial for our generalization. The outline of Müller's proof, and also of our proof, goes as follows: first, obtain an $L^1$-estimate for a certain regularization scheme on balls; then determine and control the set of balls on which the regularization has to be applied; finally, iterate this procedure for each function $u_j$ to arrive at the uniformly convergent sequence $(w_j)_j$. In attempting such a generalization some difficulties arise. One consists of the fact that for general homogeneous operators $\mathcal{B}$ of order $l$ it is not true that $|D^l u| \le C|\mathcal{B}u|$ holds pointwise a.e. for a fixed constant $C$. To circumvent this problem in the proof we additionally need to assume that the derivatives of order $l$ are uniformly bounded. This assumption is of a technical nature; however, in Remark 4.2 below we give an example of a situation where it is naturally given. Without this bound, estimating the occurring highest order derivatives is not possible. Furthermore, in the case $l=2$ we face a more subtle problem: one has to estimate terms of the form $|u_j(x) - u_j(y)|$, which is not possible with a mere bound on $\|D^2 u_j\|_{L^\infty}$. We therefore need to make use of a different type of regularization on balls, see Lemma 3.1, which should be compared with Lemma 5 in [7]. Instead of mollifying with a fixed radius and achieving the transition to the original function by multiplication with a smooth cut-off function $\varphi$ and its counterpart $(1-\varphi)$, we regularize by mollifying with a varying radius. More precisely, the radius of the mollification is given by a smooth function decreasing to zero with increasing distance to the center of the ball. By Lebesgue's differentiation theorem the regularized function so defined coincides with the original function as soon as the radius of mollification tends to zero. The benefit of this method is that differentiating the new function yields no zeroth order derivatives of $u_j$, and the terms involving the first order derivatives can be estimated using the fundamental theorem of calculus and the uniform bounds on $\|D^2 u_j\|_{L^\infty}$, cf. the proof of Lemma 3.1. Note that for orders $l>2$ one would have to come up with yet another regularization scheme, since with our method the lower order derivatives can then no longer be controlled.
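To fix ideas, the varying-radius regularization just described can be written schematically as follows (a sketch in the spirit of Sect. 3; the mollifier $\eta$ and the precise shape of $\rho$ are generic assumptions here, since the paper's own formulas specify them in Lemma 3.1):

$\tilde u(x) := \int_{B_1(0)} u\bigl(x + \rho(x)\,y\bigr)\,\eta(y)\,\mathrm{d}y,$

where $\eta$ is a standard mollifier supported in $B_1(0)$ with $\int \eta\,\mathrm{d}y = 1$, and $\rho \ge 0$ is smooth and vanishes outside the transition region. Wherever $\rho(x)=0$, Lebesgue's differentiation theorem gives $\tilde u(x) = u(x)$, and differentiating under the integral sign produces only derivatives of $u$ evaluated at $\varphi_y(x) = x + \rho(x)y$, with no zeroth order terms in $u$, which is exactly the feature exploited in the proof of Lemma 3.1.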
After proving our main result on $\mathbb{R}^d$ we also present a local version thereof in Theorem 2.2, and we moreover show that certain convex integral bounds are preserved. These local results lead to interesting applications: in Corollary 2.5 we show that $K$ can be replaced by a family of compact, convex sets $(K_x)_x$ depending uniformly continuously on $x$. Moreover, our local result allows for a reformulation in the language of Young measures, cf. Corollary 2.4. The latter Young measure formulation of our result is an important intermediate goal of an ongoing joint project with E. Wiedemann. We try to characterize measure-valued solutions to the isentropic Euler system that are generated by weak solutions. For the incompressible Euler equations every measure-valued solution is generated in this way, cf. [9]. In contrast to that, some recent work shows that there exist measure-valued solutions to the isentropic Euler system that are neither generated by weak solutions nor by a vanishing viscosity limit, cf. [2,5]. We may not apply our results to the compressible Euler system directly, but we may apply them to its linearization (4.1). The latter possesses a potential of order two, which was pointed out to me by E. Wiedemann in dimension 3+1. At the end of the present paper we give a construction of this potential for dimensions $d+1 \ge 2$. Besides the linearized compressible Euler equations, other partial differential operators also have low order potentials. One already mentioned prominent example is the linear constraint corresponding to the symmetric gradient as potential. However, a further generalization of our results to higher order operators is desirable, as the work of Raiță [8] shows that any constant-rank homogeneous differential operator has a potential, which is in general of high order. This generalization would have consequences for the theory of general $\mathcal{A}$-free Young measures and $\mathcal{A}$-quasiconvex functions, developed e.g. in [1,4,6]. The organization of this paper is as follows. In Sect. 2 we formulate our main results and prove some direct consequences. The main proofs are presented in Sect. 3. At the end of the paper, we present two examples of linear constraints with potentials of order one or two. For that we derive in Sect. 4.2 a potential of order two for the linearized isentropic Euler system.

Presentation of the main results

Let $\mathcal{B} = \sum_{|\alpha|=l} B_\alpha \partial^\alpha$ denote a homogeneous differential operator of order $l \in \{1,2\}$ with constant coefficients $B_\alpha \in \mathbb{R}^{k \times m}$. We will use the notation of [7], which we briefly revise for convenience. In what follows, $K \subset \mathbb{R}^k$ denotes a compact, convex set. We define $|K|_\infty := \max\{|A| : A \in K\}$. The sublevel sets $K_\gamma := \{A \in \mathbb{R}^k : \operatorname{dist}(A,K) \le \gamma\}$ are again compact and convex. Moreover, one has $|K_\gamma|_\infty \le |K|_\infty + \gamma$. Our main result on the whole space $\mathbb{R}^d$, which corresponds to Theorem 2 in [7], will be a direct consequence of Proposition 3.3 below.

Theorem 2.1 Let $K$ be a compact, convex set in $\mathbb{R}^k$ and $l=1$ or $l=2$. Further, let ... Then there exist functions $w_j \in W^{l,\infty}_{loc}(\Omega, \mathbb{R}^m)$ and an increasing sequence of open sets $(U_j)$, which are compactly contained in $\Omega$, such that ...

One can ask if there are quantities which are conserved under the truncation method that leads to the previous theorems. Indeed, we can show that certain convex integral functionals are preserved in the limit; in particular, this holds for the $p$-norm. We only state the result in the local formulation, but it also holds for $\Omega = \mathbb{R}^d$.
Then there exist functions $w_j \in W^{l,\infty}_{loc}(\Omega, \mathbb{R}^m)$ and an increasing sequence of open sets $(U_j)$, which are compactly contained in $\Omega$, such that ...

From the previous local formulation of our main result we obtain a generalization of Corollary 3 in [7] treating gradient Young measures. For the relevant definitions consult e.g. [4].

Proof Observe that convexity and compactness of $K_x$ imply ... for all $x, x_0 \in \Omega$. We introduce a segmentation of $\mathbb{R}^d$ by open cubes $(Q^n_1)_{n \in \mathbb{N}}$ of side-length one with corners lying in $\mathbb{Z}^d$. The next family of cubes $(Q^n_{1/2})$ arises from the former by bisecting the edges, so one cube $Q^n_1$ contains $2^d$ cubes of the next generation. Continuing leads to a dyadic segmentation into cubes. Note that for all $N \in \mathbb{N}$, the set $\mathbb{R}^d \setminus \bigcup_{n \in \mathbb{N}} Q^n_{2^{-N}}$ is a null set. Moreover, for all $n \in \mathbb{N}$ choose $x^n_i$ as the center point of $Q^n_{2^{-N_i}}$. Now change the families of cubes to $Q^n_{2^{-N_i}} := Q^n_{2^{-N_i}} \cap \Omega$ and change the points $x^n_i$ (without renaming) to an arbitrary point in ... We also have by (2.2) ... Moreover, for neighboring cubes the sequences so constructed agree near the common boundary and equal $u_0$ there. Hence, the ensemble $(w^{i,n}_j)$ defines, for fixed $i,j$, a function $w^i_j \in W^{l,\infty}_{loc}(\Omega, \mathbb{R}^m)$. As $\Omega$ is bounded, we only have finitely many $Q^n$ for every $n$ and for all $j \ge j_i$. The estimate in (2.3) does not lead to an estimate for integral bounds on $\Omega$, as the limit superior on the right-hand side may not commute with the arising sum over $n$. But luckily, in the proof of Corollary 2.3 below we will establish the inequality (3.10), which compares the integrals for every $j$ and not just in the limit superior. The latter estimate shows that we can in fact choose $j_i$ such that ... also holds for all $n$ and for all $j \ge j_i$. Without loss of generality we may assume that $(j_i)$ is strictly increasing. For all $i \in \mathbb{N}$ we define $w_j := w^i_j$ for $j_i \le j < j_{i+1}$ and $w_j = u_0$ for $j < j_1$. Then ..., and for all $j \in [j_i, j_{i+1})$ and almost every $x \in \Omega$ it holds that $x \in Q^n_{2^{-N_i}}$ for some $n$ and ... For the integral bound it holds that ... As $i$ grows with $j$, this finishes the proof.

Remark 2.6 One can generalize the results of this paper to sequences of the form $(f + \mathcal{B}u_j)_j$, where $\mathcal{B}$ is as before and $f$ is locally Lipschitz on $\Omega$ with $\mathcal{A}f = 0$. Here $\mathcal{B}$ has constant rank and $\mathcal{A}$ is a constant-rank homogeneous differential operator with $\ker(\mathcal{A}) = \operatorname{im}(\mathcal{B})$. In this case the constructed sequence is then also of the form $(f + \mathcal{B}w_j)$. For this one needs to assume a bound on the radius in Lemma 3.1 below, which has to be dealt with in the succeeding auxiliary results. Although this requires some effort, no substantially new ideas come in. Note also that, in view of Lemma 5 in [8], the function $f$ can be rewritten as $f = \mathcal{B}\varphi$ if $f$ is a function of high regularity on the whole space $\mathbb{R}^d$ or has zero mean on the torus. So this boils down to the situation discussed in Theorem 2.2.

Auxiliary results and proof of the main theorems

This section is dedicated to the presentation of a detailed proof of our main result. ... Then there exists $\tilde u \in W^{l,\infty}(B_r(a), \mathbb{R}^m)$ such that $u = \tilde u$ on $B_r(a) \setminus B_{\frac{7}{8}r}(a)$ and ...

Proof We follow the basic strategy of the proof of Lemma 5 in [7], but note that our regularization of the function $u$ will be different, which enables us to go to order $l=2$ under the additional uniform bound on the highest order derivative. Using the rescaling, we assume that $|K|_\infty = 1$, $a = 0$, and $r = 1$. Note that this rescaling argument is the reason why we may only assume a uniform bound on the highest derivative. In the following we use the notation $B := B_1(0)$ and $B_t := B_t(0)$, where $\varepsilon < \frac{1}{90}$ will be chosen later.
Consider for every $y \in B$ the map $\varphi_y(x) := x + \rho(x)\,y$. Since $D\varphi_y(x) = \mathrm{Id} + y \otimes \nabla\rho(x)$ is invertible for all $x \in \mathbb{R}^d$ and all fixed $y \in B$, the map $\varphi_y$ is a local diffeomorphism. Noticing that $|\varphi_y(x)| \to \infty$ as $|x| \to \infty$, we infer by Hadamard's theorem that we even have a global smooth inverse $\varphi_y^{-1}$. For all fixed $y \in B$ the map $x \mapsto u(\varphi_y(x))$ is differentiable a.e., as $u$ is differentiable almost everywhere. In particular, for fixed $y$ and for a.e. $x \in B$ one calculates ... Thus, by the Divergence Theorem and Fubini's theorem we obtain ... In the case $l=2$ we argue similarly, but with $x \mapsto u(\varphi_y(x))$ replaced by $x \mapsto (Du)(\varphi_y(x)) \cdot (e_i + y\,\partial_i\rho(x))$. Hence, for fixed $y$ and a.e. $x$ it holds that ... for $i,j = 1, \dots, d$, and calculating the second order weak derivatives yields, for all test functions $\psi$, ...

Claim: It holds that ...

In order to prove this claim, observe that $\rho(x) > 0$ for all $x \in B_{7/8} \setminus B_{5/8}$; thus $y \mapsto x + \rho(x)y$ is a diffeomorphism from $B$ to $B_{\rho(x)}(x)$. Now we need to separate the cases $l=1$ and $l=2$ again. For $l=1$ and $x \in B_{7/8}$: since $K$ is convex, the distance function $z \mapsto \operatorname{dist}(z,K)$ is convex. Moreover, $\operatorname{dist}(z+w, K) \le \operatorname{dist}(z,K) + |w|$ for all $z, w \in \mathbb{R}^k$. Hence, we estimate ... Note that the estimate (3.1) yields, for all $x \in B$, ... Thus, using Fubini's theorem and the substitution $z = \varphi_y(x)$ for every fixed $y$ gives ... This proves the claim for $l=1$. In the case $l=2$ we estimate, for $x \in B$, ... The previous calculation only works for $l=2$; hence, for generalizing to higher order differential operators one has to find more suitable ways of regularizing $u$ or come up with different strategies of proof. Similarly as in the case $l=1$ we choose $\varepsilon := \theta^{\frac{1}{1+d}}$. Set ... Proceeding as in the case $l=1$ and using the above estimates as well as (3.3) and (3.4), we estimate ... This concludes the proof of the claim. From ... we therefore obtain ... The corresponding estimate for $l=1$ follows similarly.

The following result is the analogue of Lemma 6, combined with the remark thereafter, in [7]. Then there exists a function $\tilde u$ such that ..., where $C(d)$ is some constant depending only on the dimension.

Proof The main part of the proof is essentially identical to the proof of Lemma 6 in [7], except for the obvious changes in the numerical values of some of the constants involved. The reason for this is that the structure of the differential operator under consideration plays no role. The only thing left to prove is our statement about the norm of the highest order derivative of $\tilde u$. For that, observe that $\tilde u = u$ on $\mathbb{R}^d \setminus A$, where $A$ is a union of disjoint balls as in Lemma 6 in [7], on which we applied Lemma 3.1. In particular, ...

We now prove the analogue of Theorem 7 and Corollary 8 in [7]: ..., where $C_1, C_2$ are the constants from Lemmas 3.1 and 3.2. Then there exists $g \in W^{l,\infty}_{loc}(\mathbb{R}^d, \mathbb{R}^m)$ such that $\mathcal{B}g \in K_\gamma$ a.e. on $\mathbb{R}^d$ and ..., for some constant $C_4 = C_4(M,d)$ depending on $M$ and the dimension $d$, and ..., for some constant $C_5$.

Proof The strategy of the proof of Theorem 7 and Corollary 8 in [7] is adapted here. In fact, the main issue is that one needs to keep track of the norm of $D^l u$. Again by rescaling assume that $|K|_\infty = 1$. Define ..., where $\bar\alpha$ is the constant from Lemma 3.2 and $\delta > 0$ will be chosen later. Observe that ... Thus, ... Now, by successively applying Lemma 3.2, we obtain a sequence $(u_i)$ with $u_0 = u$. Define ... Hence, ... Since $\sum_{i=0}^{\infty} \mu_i < \infty$, one deduces analogously to the proof of Theorem 7 in [7] that ... Moreover, as $\|D^l u_i\|_{L^\infty(\mathbb{R}^d)} \le M + \gamma$, there is a subsequence, which we still denote by $(u_i)$, and $h \in L^\infty(\mathbb{R}^d)$ such that ... Thus, $\|h\|_{L^\infty(\mathbb{R}^d)} \le M + \gamma$.
Using (3.6) and testing against compactly supported smooth functions yields $h = D^l g$ a.e., and hence ... We also obtain ... Similarly as in [7], we choose $\delta$ such that (3.7) holds. As $\gamma < C_2(1 + C_1 M)$, we obtain that $\delta \le \bar\alpha^{-1} C_2(1 + C_1 M)$, which yields the estimate (3.5). Now assume that $\mathcal{B}u \in K$ on $\mathbb{R}^d \setminus V$. Let ..., where we used (3.7).

Proof of Theorem 2.2 Due to rescaling we may assume $|K|_\infty = 1$. Let $U \subset\subset \Omega$ be open. Since $\|D^l u_j\|_{L^\infty(\Omega)} \le M$, we can extract a subsequence such that ... As $u_j \to u_0$ in $L^1(U)$, testing against compactly supported smooth functions yields that $h = D^l u_0$ on $U$. Uniqueness of the limit yields that $D^l u_j \overset{*}{\rightharpoonup} D^l u_0$ in $L^\infty(\Omega)$; in particular, also $\mathcal{B}u_j \rightharpoonup \mathcal{B}u_0$ in $L^1(\Omega)$. Similarly as in [7], using Mazur's and Fatou's lemmas as well as $\operatorname{dist}(\mathcal{B}u_j, K) \to 0$ in $L^1(\Omega)$ implies that $\mathcal{B}u_0 \in K$ a.e. in $U$, hence a.e. in $\Omega$, as $U$ was arbitrary. Thus, ...

Claim: It holds that ...

To prove this claim we have to consider the cases $l=1$ and $l=2$ separately. The case $l=1$ is easier: observe that $\mathcal{B}w_j \in K$ on $\Omega \setminus V$ and ..., by the assumptions of the theorem and the properties of the distance function $\operatorname{dist}(\cdot, K)$. For the case $l=2$ first note that ... By the Gagliardo–Nirenberg interpolation inequality we obtain ... So, in particular, $D(u_j - u_0) \to 0$ in $L^1(U)$. We have $\mathcal{B}w_j \in K$ on $\Omega \setminus V$, thus ... This proves the claim. Hence, ...

Let $\delta > 0$. The previous discussion together with Proposition 3.3 implies that there exists $j_0 = j_0(U, V, \varphi, \delta)$ such that for all $j \ge j_0$ there exists $g_j \in W^{l,\infty}_{loc}(\Omega, \mathbb{R}^m)$ with ... Thus, ... One now finishes the proof exactly as in Step 3 of the proof of Theorem 4 in [7].

Proof of Corollary 2.3 We basically need to repeat the steps of the previous proofs of the auxiliary results, extending them by appropriate estimates of the convex integral bound corresponding to $F$.

Lemma 3.1: We prove that, in addition to the result of Lemma 3.1, we obtain ..., where $h$ is the degree of homogeneity of $F$ and $L^F_M$ is specified below. For that we use the above rescaling, so we may assume $|K|_\infty = 1$. Recall from (3.3) that for $y \in B$ we have ... Hence, using Jensen's inequality, Fubini's theorem, and the substitution $z = \varphi_y(x) = x + \rho(x)y$ for fixed $y$, we obtain ... Since $F$ is a convex function, there exists an optimal Lipschitz constant $L^F_M$ for $F$ on $B_{C_1(1+C_1)M}(0)$. The latter ball contains $\mathcal{B}\tilde u(x)$ for a.e. $x \in B$ by the bound on $\|D\tilde u\|_{L^\infty}$ from Lemma 3.1 and the definition of $C_1$. Thus, ...

Lemma 3.2: We now prove the following claim: ... For that we rescale and consider the set $A \subset V_\rho$ from Lemma 3.2 again. We estimate, using the previous paragraph, ... To show that ..., let us consider the inductive procedure from the proof of Proposition 3.3 again. Therein we estimate, using the preceding result, ..., where we used that $\widetilde M = e^{\delta \bar\alpha} \le e^{C_2(1 + C_1 M)}$. Note that we showed in the proof of Proposition 3.3 that ...

In the following we use the notation and the rescaling of the proof of Theorem 2.2. Let $U, V$ be open with $V \subset\subset U \subset\subset \Omega$, noting that $\Omega$ is bounded by the assumptions of Corollary 2.3. Further, let $\varphi \in C^\infty_c(V)$ be such that $0 \le \varphi \le 1$. We already proved that ..., with $C_{\mathcal{B}} := \sum_{|\alpha|=l} |B_\alpha|$ and $R_j$ as in the proof of Theorem 2.2. Hence, ... converges to zero as $j \to \infty$, since the functions inside $F$ are uniformly bounded and $F$ is locally Lipschitz. Thus, ... Here we used that $D^l u_j \overset{*}{\rightharpoonup} D^l u_0$ in $L^\infty(\Omega)$, and hence $\|D^l u_0\|_{L^\infty} \le M$. Let $\delta > 0$.
As $\lambda_j \to 0$, by using the result of the previous paragraph corresponding to Proposition 3.3, there exists $j_0(U, V, \varphi, \delta)$ such that for all $j \ge j_0$ there is a function $g_j \in W^{l,\infty}_{loc}(\Omega, \mathbb{R}^m)$ with the properties ... Thus, in particular, we obtain ... Choose $\widetilde U_k$ and $\varphi_k$ with $0 \le \varphi_k \le 1$ as in Step 3 of the proof of Theorem 4 in [7]. Since $\Omega$ is bounded, we can furthermore assume $|\Omega \setminus \widetilde U_k| < \frac{1}{k}$. Then, using the previous step of the proof, we infer that there exist $j_k$ such that for all $j \ge j_k$ there exist $g_j$ such that ...

Let $\mathcal{B}$ be a general homogeneous differential operator of first order. There exists a linear map $B : \mathbb{R}^{m \cdot d} \to \mathbb{R}^k$ such that $\mathcal{B}u = B \cdot Du$ for all $u \in C^\infty(\mathbb{R}^d, \mathbb{R}^m)$. Then observe that $B \times \mathrm{pr}_{\ker B} : \mathbb{R}^{m \cdot d} \to \operatorname{im} B \times \ker B$ is surjective, and hence bijective, as $\operatorname{im} B \times \ker B \cong \mathbb{R}^{m \cdot d}$. Let us denote by $T_B$ the inverse of the above mapping. In particular, for smooth enough $u$ it holds that ... Müller's results for the gradient then provide us with a sequence $(w_j)$ such that ... We thus could derive the results of Theorems 2.1 and 2.2 for general first order operators $\mathcal{B}$ from those for the gradient. The same procedure can be carried out for second order operators $\mathcal{B}$ by writing $\mathcal{B}u = B \cdot D^2 u$ for every smooth $u$ and some fixed matrix $B$. Note that for the method presented in this remark, we still need to assume the uniform $\|\cdot\|_{L^\infty}$-bound to infer the result for general differential operators. More importantly, our strategy of proving all the auxiliary results for $\mathcal{B}u$ directly, keeping track of the $\|\cdot\|_{L^\infty}$-norms, gave us access to further properties of our truncation method, like the conservation of convex integral bounds as in Corollary 2.3. For that, an important ingredient was that $F$ is locally Lipschitz, in combination with the uniform boundedness of the functions involved.

Two examples of linear constraints with potentials of first or second order

The linear constraint $\operatorname{curl} f = 0$ with potential $f = \nabla u$ can be handled by the well-known techniques of Müller [7] and Zhang [10]. In the following, we want to present two examples of homogeneous differential operators which may not be treated by the latter frameworks. However, these operators have potentials of order one or two, so that our results can be applied.

Symmetric gradient

The symmetric gradient of a vector field $u : \Omega \to \mathbb{R}^d$,

$e(u) := \tfrac{1}{2}\bigl(Du + Du^\top\bigr),$

is a first order homogeneous differential operator that appears, for example, in the theory of linear elasticity. One cannot estimate the gradient by this operator, neither in the $L^1$-norm nor in the $L^\infty$-norm. Hence, the results regarding the gradient are not applicable to this case. The corresponding linear constraint, for which the symmetric gradient is the potential, reads as follows; see Example 3.10 in [4].

A potential for the linearized isentropic Euler equations

Now we will start with a certain linear constraint and construct the corresponding potential. This potential will turn out to be of second order. Consider the linearization of the isentropic Euler system on $\mathbb{R}^{d+1}$,

$\partial_t m + \operatorname{div} M + \nabla q = 0, \qquad \partial_t \rho + \operatorname{div} m = 0, \qquad (4.1)$

for the unknowns $z = (\rho, m, M, q) \in \mathbb{R}_+ \times \mathbb{R}^d \times S^d_0 \times \mathbb{R}_+ \subset \mathbb{R} \times \mathbb{R}^d \times S^d_0 \times \mathbb{R} \cong \mathbb{R}^N$ with $N = 1 + d + \frac{d}{2}(d+1)$. In the following, let $\mathcal{A} : C^\infty(\mathbb{R}^{d+1}, \mathbb{R}^N) \to C^\infty(\mathbb{R}^{d+1}, \mathbb{R}^{d+1})$ be the homogeneous first order differential operator implementing (4.1), i.e. (4.1) is equivalent to $\mathcal{A}z = 0$. There is an ongoing joint project with Emil Wiedemann in which the uniform convergence to a certain set for generating sequences of Young measures is crucial. The generating sequences considered in this project are $\mathcal{A}$-free with $\mathcal{A}$ as above.
These bounds are provided by Corollary 2.4 if we find a homogeneous potential operator $\mathcal{B}$ of order at most two. For the case $d=3$, Emil Wiedemann showed me in a private communication how to derive such a potential. His proof essentially generalizes to every $d \ge 1$, which we now demonstrate. Set $\widetilde N := \frac{1}{4}(d+1)^2 d^2$.

Proposition 4.1 Let $\mathcal{A}$ be as above. There exists a linear homogeneous partial differential operator $\mathcal{B} : C^\infty(\mathbb{R}^{d+1}, \mathbb{R}^{\widetilde N}) \to C^\infty(\mathbb{R}^{d+1}, \mathbb{R}^N)$ of order two such that $\ker \mathcal{A} = \operatorname{im} \mathcal{B}$.

The map

$z = (\rho, m, M, q) \mapsto U := \begin{pmatrix} \rho & m^\top \\ m & M + q\,\mathrm{Id} \end{pmatrix}$

is an isomorphism between vector spaces, since $M$ is trace-free. Using this identification, the PDE (4.1) is equivalent to $\operatorname{div}_{(t,x)} U(t,x) = 0$. From now on the divergence will be understood with respect to $(t,x)$, and hence we simply write $\operatorname{div}$ for $\operatorname{div}_{(t,x)}$. Every row of $U$ is divergence-free. Thus, Poincaré's lemma provides us, for every $i = 1, \dots, d+1$, with an antisymmetric matrix field $\varphi^i$ such that for all $j = 1, \dots, d+1$

$(\operatorname{div} \varphi^i)_j = U_{ij}. \qquad (4.2)$

Since $U$ is symmetric, we infer for all $i, j = 1, \dots, d+1$ ... So we may apply Poincaré's lemma once more to obtain antisymmetric matrix fields $\psi^{ij}$ such that ... Indeed, setting $k=i$ or $k=j$ in (4.3) and using the antisymmetry of $\varphi$ yields the functions $\varphi^i_{ij}$ for $i,j = 1, \dots, d+1$. Now fix pairwise distinct $i, j, k$. As $\varphi$ has three indices, there are six functions that have to be determined. For every triple $i,j,k$ the corresponding six equations (4.3) decouple into two systems of three equations due to the antisymmetry of $\varphi^i, \varphi^j, \varphi^k$. Without loss of generality assume $i < j < k$. One of these systems then reads

$(\operatorname{div} \psi^{ij})_k = \varphi^i_{jk} - \varphi^j_{ik}, \qquad (\operatorname{div} \psi^{ik})_j = -\varphi^i_{jk} - \varphi^k_{ij}, \qquad (\operatorname{div} \psi^{jk})_i = -\varphi^j_{ik} + \varphi^k_{ij}.$
7,137.2
2020-08-16T00:00:00.000
[ "Mathematics" ]
A new risk factor indicator for papillary thyroid cancer based on immune infiltration

Increasing evidence has indicated a close association between immune infiltration in cancer and clinical outcomes. However, related research in thyroid cancer is still deficient. Our research comprehensively investigated the immune infiltration of thyroid cancer. Data derived from the TCGA and GEO databases were analyzed by the CIBERSORT, ESTIMATE, and EPIC algorithms. The CIBERSORT algorithm calculates the proportions of 22 types of immune cells. The ESTIMATE algorithm calculates a stromal score to represent all stromal cells in cancer. The EPIC algorithm calculates the proportions of cancer-associated fibroblasts (CAFs) and endothelial cells (ECs), which are the main components of stromal cells. We analyzed the correlation of immune infiltration with the clinical characteristics and outcomes of patients. We determined that the infiltration of CD8+ T cells improved the survival of thyroid cancer patients. Overexpression of immune checkpoints was closely related to the development of thyroid cancer. In general, stromal cells were associated with the progression of thyroid cancer; interestingly, CAFs and ECs had opposite roles in this process. In addition, the BRAF V600E mutation was related to the upregulation of immune checkpoints and CAFs and the downregulation of CD8+ T cells and ECs. Finally, we constructed an immune risk score model to predict the prognosis and development of thyroid cancer. Our research demonstrated a comprehensive panorama of immune infiltration in thyroid cancer, which may provide potential value for immunotherapy.

Introduction

In recent years, the incidence of thyroid cancer has been increasing worldwide. According to the tissue of origin and morphology, thyroid cancer can be divided into papillary thyroid cancer (PTC), follicular thyroid cancer (FTC), medullary thyroid cancer (MTC), poorly differentiated thyroid cancer (PDTC), and anaplastic thyroid cancer (ATC). Among them, PTC is the most common type, accounting for 60% of all pathological types. PTC and FTC are both types of differentiated thyroid cancer (DTC), which has a benign prognosis 1-3. In contrast, the prognosis of ATC is very poor, and the median survival time of patients is only 7-10 months 4. The prognosis of PDTC lies between that of DTC and ATC 5. In the past 10 years, the overall survival rate of nearly 10% of thyroid cancer patients has not significantly improved. Therefore, the search for new targets and therapies is required 6. Tumor cells and their microenvironment function together as a whole unit, the tumor microenvironment (TME). The TME is mainly composed of tumor cells, immune cells, stromal cells, microvessels, and various cytokines and chemokines. All components of the TME have important roles in tumor initiation and progression. Among them, immune cells are the most crucial population and may affect the clinical outcomes of thyroid cancer 7. It has been reported that the composition and function of tumor-infiltrating immune cells vary with the host immune status and have potential prognostic value. Cancer-associated fibroblasts (CAFs), which belong to the stromal cells, are also an important component of the TME. By secreting a variety of growth factors, chemokines, and proteases, CAFs regulate the recruitment and function of immune cells 8.
In addition, there are tertiary lymphoid structures (TLSs), clusters of immune cells around tumor tissue in which T-cell and B-cell responses occur. TLSs have been reported in various types of cancer and are associated with prognosis 9. Notably, the TME can be effectively targeted by immunotherapy and is associated with the clinical outcomes of patients 10. Tumor immunotherapy is a treatment that eliminates tumor cells by restoring and maintaining normal antitumor immune responses and includes immune checkpoint inhibitors, therapeutic antibodies, cell therapy, and small-molecule inhibitors. In particular, immune checkpoint inhibitors (ICIs) have changed the therapeutic landscape of advanced malignancies 11. In thyroid cancer and other thyroid diseases, several studies have also demonstrated the potential value of ICIs 12,13. Several studies have revealed immune cell infiltration and immune checkpoint expression in thyroid cancer. Many studies have confirmed the high expression of CD4+ T cells, CD8+ T cells, and CD69 in patients with thyroid cancers such as MTC, which indicates a pronounced T-cell reaction in thyroid cancer patients. High expression of PD-L1, the main ligand of PD-1, was also found in the DTC, ATC, and MTC subtypes. Additionally, Joyce JA found that dendritic cells, rarely found in normal thyroid tissues, increased in thyroid cancer tissues, thus inhibiting the immune response 14. Myeloid-derived suppressor cells (MDSCs), neutrophils, NK cells, and mast cells (MCs) were also confirmed to interact with thyroid tumor cells through chemokines, adipokines, and cytokines 7. However, these studies have been largely limited to a single pathological type, a single immune cell type, or a single immune checkpoint 15,16. More research focusing on multiple aspects of the TME needs to be carried out. Our research investigated 22 types of immune cells, various immune checkpoints, stromal cells including CAFs and endothelial cells (ECs), and TLSs in thyroid cancer based on data from The Cancer Genome Atlas (TCGA) database and the GEO database. The correlation between immune infiltration and clinical characteristics, including survival, pathological type, pathological stage, and gene mutation, was also analyzed. Our research may provide potential value for the immunotherapy of thyroid cancer.

Data acquisition

In all analyses of PTC, the data were derived from the TCGA database (the TCGA database contains only PTC data). We identified and downloaded the transcriptome data from the TCGA database through the R package "TCGA-Assembler", including 58 cases of healthy thyroid tissue and 510 cases of PTC. The relevant clinical characteristics were also obtained and are shown in Table S1. In the analysis of pathological types, the data were derived from the GEO database. Four datasets based on the same expression profiling platform, GPL570, were merged: GSE33630 (49 cases of PTC and 11 cases of ATC), GSE65144 (12 cases of ATC), GSE76039 (17 cases of PDTC and 20 cases of ATC), and GSE82208 (27 cases of FTC) 17-19. In addition, 33 cases of radiation-induced PTC (exposed to Chernobyl radiation, ECR+) and 32 non-ECR (ECR−) cases of PTC were derived from GSE35570 and analyzed separately 20. All data derived from the TCGA and GEO databases were normalized through the R package "limma".

Assessment of immune cells

CIBERSORT is a deconvolution algorithm that uses the expression values of 547 genes to characterize the composition of immune cells in tissues.
In this study, we used this algorithm to estimate the relative proportions of 22 infiltrating immune cell types based on gene expression. We uploaded the normalized gene expression data to the CIBERSORT website (http://cibersort.stanford.edu/) and ran the algorithm with 1000 permutations. P < 0.05 was considered statistically significant 21.

Assessment of stromal cells

All stromal cells, including CAFs, ECs, mesenchymal stem cells (MSCs), and pericytes, are included in the stromal score provided by the ESTIMATE algorithm 22. The proportions of CAFs and endothelial cells (ECs) were analyzed by the EPIC algorithm 23. Similar to the assessment of immune cells, normalized gene expression data were uploaded to the EPIC website (https://gfellerlab.shinyapps.io/EPIC_1-1/) to acquire the final proportions based on the expression of a series of gene markers of CAFs (ADAM33, CLDN11, COL1A1, etc.) and ECs (CDH5, CLDN5, CLEC14A, etc.).

Assessment of tertiary lymphoid structures

The assessment of TLSs was performed according to the methods of previous research 24. The geometric mean of a series of chemokines (CCL2, CCL3, CCL4, CCL5, CCL8, CCL18, CCL19, CCL21, CXCL9, CXCL10, CXCL11, and CXCL13) related to TLSs was adopted as the TLS score. Cases with TLS scores greater than the third quartile were classified as TLS+, and cases with TLS scores less than the third quartile were classified as TLS−.

Gene Ontology, KEGG pathway, and gene set enrichment analysis

Gene lists were uploaded to the Database for Annotation, Visualization and Integrated Discovery (DAVID, david.ncifcrf.gov/) online tool for Gene Ontology (GO) and KEGG pathway analysis. The resulting pathways and P-values were obtained and visualized with R. Gene set enrichment analysis (GSEA) was performed using GSEA 3.0, downloaded from the GSEA database (http://software.broadinstitute.org/gsea/index.jsp), with the built-in standard datasets.

Patients and specimens

Seventy-two PTC specimens were collected from July 2013 to July 2019. Patients meeting the following criteria were excluded from participation: had received adjuvant chemotherapy or radiotherapy prior to surgery; had additional cancer diagnoses. All patients were classified according to the 7th edition of the TNM staging system 23. Postoperative adjuvant therapies were performed according to standard schedules and doses. All participating patients gave their written informed consent. This study was approved by the Ethical Committee of Shanghai Pudong Hospital. The clinical data of all PTC patients are shown in Supplementary Table S2.

Immunohistochemical (IHC) staining

IHC was performed on paraffin-embedded sections. The sections were deparaffinized in xylene, hydrated with decreasing concentrations of ethanol (100, 90, 80, and 75%) for 3 min each, and microwave-heated in sodium citrate buffer for antigen retrieval. Then, the sections were blocked in 5% BSA and incubated with an anti-CD8 rabbit polyclonal antibody (1:1000, Abcam, UK) at 4 °C overnight. Next, the sections were treated with a horseradish peroxidase (HRP)-conjugated rabbit secondary antibody (1:200; ProteinTech Group, Inc., Wuhan, China) for 60 min at room temperature; then, 3,3′-diaminobenzidine development (DAB Substrate Chromogen System; Dako, Denmark) and hematoxylin staining were performed. The sections were fixed, and images were obtained with an inverted microscope (Olympus IX71, Japan).

Cell culture and co-culture

Human umbilical vein endothelial cells (HUVECs) were purchased from Allcells, Inc.
(Alameda, CA, USA) and cultured in Endothelial Cell Medium (ECM; ScienCell Research Laboratories, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (FBS; Invitrogen, Carlsbad, CA, USA). Human thyroid fibroblasts were purchased from ScienCell Research Laboratories and cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% FBS. TPC-1 cells (a human PTC cell line) were obtained from the University of Colorado Cancer Center Cell Bank. All cells were cultured at 37 °C in a 5% CO2 atmosphere, and all experiments were performed with mycoplasma-free cells. For cell co-culture, 2 × 10^5 TPC-1 cells were seeded in the upper chambers of a Transwell system (24-well insert, 4 μm pore size; BD Biosciences, Bedford, MA, USA), and 10^5 HUVECs or fibroblasts were cultured in the lower chambers. After 24 h, a CCK-8 assay was applied to measure the proliferation of the HUVECs or fibroblasts.

CD8+ T cell apoptosis assay

Peripheral blood mononuclear cells (PBMCs) from healthy human donors were isolated using Lymphoprep density gradient centrifugation (Accurate Chemical). CD8+ T cells were further purified from PBMCs by negative selection using the EasySep Human CD8+ T Cell Enrichment Kit (STEMCELL Technologies Inc.). CD8+ T cells were then activated by anti-CD3/CD28 Dynabeads (Thermo Fisher Scientific, Waltham, MA, USA) for 48 h. Activated CD8+ T cells were co-cultured with the TPC-1 cells at a 10:1 ratio for 24 h. Finally, the CD8+ T cells were harvested and measured by flow cytometry using the Annexin V-PE/7-AAD Apoptosis Detection Kit (BD Biosciences).

Statistical analysis

All analyses were performed using SPSS 23.0 and R 3.5.3. All statistical tests were two-sided, and a P-value < 0.05 was considered statistically significant. Continuous variables that conformed to the normal distribution were compared between groups with an independent t-test, while continuous variables with skewed distributions were compared with the Mann-Whitney U test. The correlation matrix was constructed in R based on the Pearson correlation coefficient. The relationship between immune cell infiltration and overall survival was analyzed with the Kaplan-Meier method and evaluated by the log-rank test. Time-dependent ROC curves were used to analyze the sensitivity and specificity of the prediction model. The univariate Cox regression model was used to analyze the effects of individual variables on survival, and the multivariate Cox regression model was used to confirm the independent factors associated with survival. The nomogram was constructed with regression coefficients based on the Cox analysis.

Immune infiltration in papillary thyroid cancer is closely related to survival

The proportions of the 22 types of immune cells in PTC and healthy thyroid tissues were calculated by the CIBERSORT algorithm based on data from the TCGA database (Fig. 1A). We also demonstrated close positive or negative correlations between the immune cell types in PTC via a correlation matrix (Fig. 1B). Furthermore, we investigated the differences in immune cell proportions between PTC and healthy thyroid tissues. The proportions of naive B cells, memory B cells, CD8+ T cells, regulatory T cells (Tregs), gamma delta T cells (γδT cells), follicular helper T (TFH) cells, and M1 macrophages were significantly decreased, whereas the proportions of M0 macrophages, M2 macrophages, resting dendritic cells (DCs), activated DCs, and resting mast cells were significantly increased (Fig. 1C).
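As a sketch of the survival comparisons used throughout this section, the following R fragment (using the standard survival package) splits patients by the median CIBERSORT proportion of one cell type and applies a log-rank test; the data frame and column names are hypothetical placeholders, not those of the TCGA files:

library(survival)
# clin: hypothetical data.frame with one row per patient, holding overall
# survival time (os_time), event indicator (os_event, 1 = death), and the
# CIBERSORT proportion of CD8+ T cells (CD8_T_cells).
clin$group <- ifelse(clin$CD8_T_cells > median(clin$CD8_T_cells), "high", "low")
fit <- survfit(Surv(os_time, os_event) ~ group, data = clin)
survdiff(Surv(os_time, os_event) ~ group, data = clin)  # log-rank test
plot(fit, col = c("red", "blue"), xlab = "Time", ylab = "Overall survival")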
Subsequently, we investigated the relationship between each immune cell type and overall survival (OS) in PTC patients. We found that a low proportion of CD8+ T cells was associated with worse OS (P = 0.01), whereas a low proportion of neutrophils indicated better OS (P = 0.016, Fig. 1D). We also examined the correlation of OS with CD8+ T cells and neutrophils in pan-cancer through the TIMER website tool (https://cistrome.shinyapps.io/timer/) based on the TCGA database 25. CD8+ T cells and neutrophils were closely related to OS in various cancers, including bladder urothelial carcinoma (BLCA), cholangiocarcinoma (CHOL), and so on. Interestingly, the same correlation was not always observed in other tumors, especially in uveal melanoma (UVM). In complete contrast to PTC, a low proportion of CD8+ T cells but a high proportion of neutrophils was associated with better OS in UVM (Supplementary Fig. S1). The association between immune cells and survival thus varies considerably across tumors, which emphasizes the uniqueness of immune infiltration in PTC. Furthermore, we extracted data on genes responsible for the downregulation of CD8+ T cells and performed Gene Ontology (GO) and KEGG pathway analyses. Various pathways and annotations were identified as responsible for the downregulation of CD8+ T cells, including pathways involving cytokines, PI3K/Akt and PPAR signaling, the extracellular matrix, collagen catabolic processes, and so on (Fig. 1E, F). We further analyzed the association between genes responsible for the downregulation of CD8+ T cells and OS to identify key hub genes. First, we divided the patients into high and low groups according to the median proportion of CD8+ T cells. We then analyzed the differentially expressed genes between the high and low groups and selected the genes with low expression in the high CD8+ T cell group. Among these differentially expressed genes, only high expression of SLN and TNNT1 was associated with both downregulated CD8+ T cells and worse OS. GSEA was further performed to identify SLN- and TNNT1-associated pathways. Immune-associated pathways (pathways involving cytokine receptors, NOD-like receptors, etc.) might be responsible for regulating CD8+ T cells, and SLN and TNNT1 were also associated with various pathways responsible for the growth and invasion of tumors (apoptosis, p53, TGF-β, etc.; Fig. 1G). Furthermore, we selected TNNT1 for verification, as it was closely related to apoptosis pathways. A TNNT1-overexpressing cell line (TPC1-TNNT1) and its negative control (TPC1-NC) were constructed. CD8+ T cells were purified from PBMCs and co-cultured with the TPC-1 cells. As expected, TPC1-TNNT1 increased the apoptosis rate of CD8+ T cells (Fig. 1H).

Immune infiltration was associated with different clinical characteristics

We further investigated the correlation of immune cells with the different pathological stages of PTC (Fig. 2A; detailed statistics are shown in Supplementary Table S3). As the pathological T stage advanced, monocytes, eosinophils, and activated DCs increased, whereas plasma cells decreased. As the pathological N stage advanced, naive B cells and resting memory CD4+ T cells increased, whereas CD8+ T cells and activated NK cells decreased. With the advancement of the pathological M stage, activated DCs and neutrophils increased.
With the advancement of the overall pathological stage, monocytes, resting DCs, activated DCs, resting mast cells, and activated mast cells increased; CD8+ T cells and plasma cells decreased; and TFH cells and M1 macrophages first decreased and then increased. As the BRAF V600E mutation is the most common gene mutation in PTC and is associated with poor prognosis, we further evaluated the correlation between immune cells and the BRAF V600E mutation. We demonstrated that in PTC with the BRAF V600E mutation, compared to PTC with wild-type BRAF, M0 macrophages were increased, whereas CD8+ T cells and TFH cells were decreased (Fig. 2B). Subsequently, we further investigated the correlation between immune cells and the different pathological types of thyroid cancer based on GEO data. All data were derived from five GEO datasets generated on the same platform. The pathological types of thyroid cancer were divided into DTC (PTC and FTC), PDTC, and ATC, with progressively worsening prognosis. We demonstrated that activated CD4+ T cells and neutrophils showed a continual increase, whereas naive B cells, CD8+ T cells, Tregs, and resting NK cells showed a continual decrease from DTC to PDTC to ATC (Fig. 2C). The immune cell types in each specific pathological type (PTC, FTC, MTC, PDTC, and ATC) are presented in Fig. 2D. Finally, we analyzed the effect of radiation in PTC by comparing PTC exposed to Chernobyl radiation (ECR+) with sporadic PTC (ECR−). TFH cells were decreased, whereas Tregs were increased, in ECR+ PTC (Supplementary Fig. S2A).

Validation of CD8+ T cells in clinical specimens

As described above, we demonstrated that CD8+ T cells were associated with the survival and progression of thyroid cancer patients based on the TCGA database. To further verify our findings, 72 PTC specimens were collected and subjected to IHC (Fig. 3A). As expected, CD8+ T cells were decreased in advanced stages compared with early stages (Fig. 3B). Similarly, CD8+ T cells were also decreased in patients with distant metastasis (Fig. 3C). Further survival analysis was also consistent with the TCGA data: a low proportion of CD8+ T cells was associated with poor OS (Fig. 3D).

Immune checkpoints in thyroid cancer

Immune checkpoints have an important role in the field of cancer immunotherapy and are a series of molecules that produce costimulatory or inhibitory signals in the immune response. We investigated the expression of various checkpoints in PTC. We found that most checkpoint molecules, including LAG3, PD-1, ICOS, and IDO1, were significantly decreased in PTC compared with healthy thyroid tissues; only TIM-3 was increased (Fig. 4A). Subsequently, we investigated the expression of checkpoints in the different pathological stages (Fig. 4B). Interestingly, lymph node metastasis was associated with the overexpression of various checkpoints, including PD-L2, TIGIT, TIM-3, ICOS, PD-L1, and CD27. Progression of the pathological stage was associated with overexpression of PD-1 and CD27. In terms of associations between the pathological M stage and checkpoint expression, only PD-L2 was associated with distant metastasis (Supplementary Fig. S3A). We inferred that this subtle difference came from the small number of M1 patients in our data. In summary, the progression of the pathological stage of PTC was associated with the overexpression of various immune checkpoints.
Furthermore, we investigated the association of immune checkpoints with the BRAF V600E mutation. We demonstrated that most checkpoints were overexpressed in samples with the BRAF V600E mutation, including PD-L2, TIGIT, TIM-3, ICOS, PD-L1, and LAG3, and only CD27 was repressed (Fig. 4C). Subsequently, we investigated the correlation of the immune checkpoints with each other, and significant co-expression was confirmed (Fig. 4D). We further investigated the correlation between immune checkpoints and immune cells. We demonstrated that neutrophils, memory B cells, Tregs, activated CD4+ T cells, M1 macrophages, TFH cells, plasma cells, CD8+ T cells, naive B cells, and γδT cells showed a positive correlation with immune checkpoints, whereas the other immune cells showed a negative correlation (Fig. 4E). In addition, we investigated the expression of checkpoints in the different pathological types of thyroid cancer. Interestingly, we found that most immune checkpoints were overexpressed in ATC compared with both DTC and PDTC, except for PD-L1 and CD27 (Fig. 4F). Specifically, there was no significant difference in the expression of CD27 among the three types of thyroid cancer, and PD-L1 was significantly higher in PDTC than in ATC. Finally, we investigated the association between ECR and immune checkpoints. CD27 and TIM-3 were downregulated in ECR+ PTC compared with ECR− PTC (Supplementary Fig. S2B).

Tertiary lymphoid structures and stromal cells in papillary thyroid cancer

Infiltrating stromal and immune cells, which form the major fraction of normal cells in tumor tissue, not only perturb the tumor signal in molecular studies but also have important roles in cancer biology. Tertiary lymphoid structures (TLSs) are considered the germinal centers of immune cells in tumors. Therefore, we further investigated them in PTC. The geometric mean of a series of chemokines known to be involved in the formation of TLSs was used to assess TLSs. First, we investigated the expression of these chemokines. Most chemokines were decreased in PTC compared with healthy thyroid tissues, including CCL3, CXCL13, CCL4, CCL5, CCL19, and CCL21; only CCL18 was increased (Fig. 5A). Consistently, the TLS score of PTC was decreased compared with that of healthy thyroid tissues (Fig. 5B). Then, we investigated the correlation of TLSs and immune cells. TLSs were associated with an increase in various immune cells in PTC, including memory B cells, CD8+ T cells, resting memory CD4+ T cells, activated memory CD4+ T cells, TFH cells, and M1 macrophages. Only M0 macrophages were decreased in TLS+ PTC compared with TLS− PTC samples (Fig. 5C). We also investigated the correlation of TLSs with immune checkpoints in PTC, and only rare differences were confirmed between TLS− and TLS+ PTC samples (Supplementary Fig. S3B). Subsequently, we investigated the correlation of TLSs and pathological stage, and a negative correlation was confirmed (Supplementary Fig. S3C). Interestingly, we demonstrated that the BRAF V600E mutation repressed TLSs in PTC (Fig. 5D). The stromal cell score in PTC was calculated by the ESTIMATE algorithm. In contrast to TLSs, the stromal score was closely related to the pathological stage of PTC: with the advancement of both the pathological N stage and the overall pathological stage, the stromal score significantly increased (Fig. 5E). In addition, the BRAF V600E mutation also indicated a higher stromal score (Fig. 5F).
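A minimal R sketch of the TLS scoring described in the Methods, assuming expr is a genes-by-samples matrix of normalized expression; the pseudocount guarding the logarithm is our assumption, since the paper does not state one:

tls_genes <- c("CCL2", "CCL3", "CCL4", "CCL5", "CCL8", "CCL18",
               "CCL19", "CCL21", "CXCL9", "CXCL10", "CXCL11", "CXCL13")
# Geometric mean of the 12 TLS-related chemokines per sample.
tls_score <- apply(expr[tls_genes, ], 2, function(x) exp(mean(log(x + 1))))
# Samples above the third quartile are TLS+, the rest TLS- (as in the Methods).
tls_class <- ifelse(tls_score > quantile(tls_score, 0.75), "TLS+", "TLS-")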
In immune cell analysis, a high stromal score was associated with an increase in naive B cells, plasma cells, activated memory CD4+ T cells, M0 macrophages, and resting DCs and a decrease in resting NK cells, activated NK cells, monocytes, resting mast cells, and eosinophils (Fig. 5G). We also determined a close correlation between the stromal score and immune checkpoints: a high stromal score indicated the overexpression of various immune checkpoints, including PD-L2, TIGIT, TIM-3, ICOS, IDO1, PD-1, CTLA4, and LAG3 (Fig. 5H). Cancer-associated fibroblasts and endothelial cells in thyroid cancer Cancer-associated fibroblasts (CAFs) and endothelial cells (ECs) are both important components of the TME and participate in the immune regulation of tumors. Therefore, we investigated the abundance of CAFs and ECs in PTC. Both CAFs and ECs were increased in PTC compared with healthy thyroid tissues. An increase in CAFs but a decrease in ECs was associated with the advancement of pathological N stage and pathological stage (Fig. 6A). Similarly, the BRAF V600E mutation was associated with an increase in CAFs but a decrease in ECs (Fig. 6B). Furthermore, the BRAF wild-type cell line TPC1 was transfected with a BRAF wild-type (WT) plasmid or a BRAF V600E mutant plasmid and then cocultured with HUVECs or CAFs. As expected, TPC1-BRAF V600E promoted the proliferation of CAFs but inhibited the proliferation of HUVECs compared with TPC1-BRAF WT (Fig. 6C). We further investigated the association of immune checkpoints with CAFs and ECs. Interestingly, CAFs were associated with the upregulation of various immune checkpoints, whereas ECs were associated with their downregulation (Fig. 6D). Furthermore, we determined that CAFs were associated with an increase in monocytes and activated DCs and a decrease in M0 macrophages. ECs were associated with an increase in memory B cells, TFH cells, and M1 macrophages and a decrease in activated NK cells, monocytes, resting DCs, and activated DCs (Fig. 6E). Finally, we investigated the abundance of CAFs and ECs in different pathological types of thyroid cancer. CAFs were most abundant in ATC, whereas ECs were most abundant in PDTC (Fig. 6F). Establishment of an immune risk score model for predicting the prognosis of thyroid cancer To establish a scoring system for predicting the prognosis of PTC, all differentially expressed immune cells and immune checkpoints were analyzed with a univariate Cox regression model, with P < 0.05 set as the screening criterion. The expression of LAG3 and the proportions of CD8+ T cells, M1 macrophages, and activated DCs were ultimately adopted in the multivariate Cox regression model used to construct an immune risk score (Supplementary Table S4). Based on the median value of the risk score, we divided the patients into high-risk and low-risk groups. The distribution of the immune risk score, patient survival status, and expression of risk factors in TCGA-PTC patients is presented in Fig. 7A-C. Figure 7B shows the survival times of PTC patients in the TCGA population, and Figure 7C shows the expression and distribution of LAG3, CD8+ T cells, M1 macrophages, and activated dendritic cells between the high- and low-risk groups. Kaplan-Meier analysis suggested that patients in the high-risk group had poorer OS (Fig. 7D). The ROC curve revealed that the risk model had good sensitivity and specificity in predicting survival risk (Fig. 7E).
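The two-step workflow described above (univariate screening at P < 0.05, then a multivariate Cox fit and a median split on the resulting risk score) can be sketched as follows with the lifelines package; the data frame and column names are hypothetical, and this is an illustration of the approach rather than the authors' actual pipeline:

```python
import pandas as pd
from lifelines import CoxPHFitter

# df: one row per patient, with hypothetical columns 'time' (follow-up),
# 'event' (1 = death observed), and candidate immune features.
CANDIDATES = ["LAG3", "CD8_T", "Macro_M1", "DC_activated"]  # illustrative names

def build_risk_score(df: pd.DataFrame):
    # Step 1: univariate screening, keeping features with p < 0.05.
    kept = []
    for c in CANDIDATES:
        uni = CoxPHFitter().fit(df[["time", "event", c]],
                                duration_col="time", event_col="event")
        if uni.summary.loc[c, "p"] < 0.05:
            kept.append(c)

    # Step 2: multivariate Cox model on the features that survived screening.
    cph = CoxPHFitter().fit(df[["time", "event"] + kept],
                            duration_col="time", event_col="event")

    # Risk score = linear predictor; a median split defines high/low risk.
    risk = cph.predict_log_partial_hazard(df)
    group = (risk >= risk.median()).map({True: "high", False: "low"})
    return cph, risk, group
```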
Figure 7F shows the expression and distribution of LAG3, CD8+ T cells, M1 macrophages, activated dendritic cells, and risk scores across groups with different clinical characteristics. In addition, the risk score differed significantly across T stages and pathological stages (Fig. 7F). To explore whether the constructed immune risk score model was independent of other clinicopathological parameters, we performed univariate and multivariate Cox regression analyses for age, sex, stage, TNM stage, and risk score (Fig. 7G, H). Both univariate and multivariate Cox regression analyses showed that the risk score was an independent prognostic predictor for PTC. Finally, on the basis of the coefficients derived from the multivariate Cox regression analysis, we constructed a nomogram to visualize our model (Fig. 7I). According to the nomogram, the survival of PTC patients can be predicted by age, sex, TNM stage, stage, and risk score. Discussion Immune cells in the TME have been proven to be crucial in the development of various tumors. Different types of tumors have different immune cell subpopulations; even for the same pathological type, the subpopulations can differ among patients [26]. Therefore, it is crucial to investigate immune cell subsets to evaluate risk and tumor prognosis. Several studies have demonstrated the expression of some immune cells in thyroid cancer, including Tregs, DCs, macrophages, and T cells [15,27]. Recently, an increasing number of types and subtypes of immune cells have been reported; previous research is therefore not enough to reveal the whole picture of immune infiltration in thyroid cancer. CIBERSORT, a gene expression-based deconvolution algorithm, was developed to assess the proportions of 22 types of immune cells in a mixed-cell population. Due to its excellent performance, its application in researching immune infiltration has gained importance [28]. With large-sample data from the TCGA and GEO databases, we conducted a comprehensive and detailed assessment of immune infiltration in thyroid cancer. We demonstrated that thyroid cancer was locally infiltrated with various immune cell subgroups. These characteristic immune cells constituted an individualized "immune signature map" for patients with thyroid cancer and provide new ideas for subsequent specific immunotherapy. By comparing PTC and healthy thyroid tissue, we found that naive B cells, memory B cells, CD8+ T cells, Tregs, γδT cells, TFH cells, M0, M1, and M2 macrophages, resting/activated DCs, and resting mast cells were differentially expressed. Most of these cell types have been reported in previous research and have different roles in the TME, such as immune evasion (Tregs, T cells, and DCs) and regulation of tumor growth and invasion (T cells and macrophages) [27,29]. However, few studies have focused on the role of naive B cells, memory B cells, TFH cells, and resting cells in PTC. These cells also take part in the development of tumors; for example, the infiltration of TFH cells is closely related to survival in breast cancer and lung cancer [30,31]. We inferred that a similar role could also be confirmed in thyroid cancer. Subsequently, we demonstrated that infiltration of CD8+ T cells and neutrophils indicated better and worse OS, respectively. Similarly, CD8+ T cells were upregulated in DTC, which has a better prognosis, whereas neutrophils were upregulated in ATC, which has a worse prognosis.
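CIBERSORT itself relies on ν-support vector regression against its LM22 signature matrix; as a simplified, hedged illustration of the underlying idea of deconvolving bulk expression into cell-type fractions (not the actual CIBERSORT algorithm), a non-negative least-squares variant can be sketched as follows:

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve(mixture: np.ndarray, signature: np.ndarray) -> np.ndarray:
    """Estimate immune cell-type fractions from bulk expression.

    mixture:   (n_genes,) bulk expression vector for one sample.
    signature: (n_genes, n_cell_types) reference profiles (e.g., 22 types).
    Solves min ||signature @ f - mixture|| subject to f >= 0, then
    normalizes f to sum to 1 so it reads as cell-type proportions.
    """
    fractions, _ = nnls(signature, mixture)
    total = fractions.sum()
    return fractions / total if total > 0 else fractions

# Toy usage: 500 genes, 22 cell types, with a known ground-truth mixture.
rng = np.random.default_rng(1)
S = rng.uniform(0, 10, (500, 22))
true_f = rng.dirichlet(np.ones(22))
bulk = S @ true_f
print(np.round(deconvolve(bulk, S), 3))
```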
Neutrophils were reported to be associated with tumor size and invasion of thyroid cancer [32]. CD8+ T cells recognize and attack tumor cells expressing tumor antigens and are associated with improved disease-free survival (DFS) in DTC [33]. Therefore, our research further complements previous understanding. The BRAF V600E mutation is the most common genetic alteration in thyroid cancer and promotes the invasiveness, metastasis, and recurrence of tumors [34]. The BRAF V600E mutation was also associated with a decrease in CD8+ T cells. We also identified a series of immune cells associated with the advancement of the pathological stage. Among them, CD8+ T cells were associated with lymph node metastasis as well as the pathological stage. Meanwhile, we also confirmed that CD8+ T cells were associated with the progression and survival of thyroid cancer patients in our clinical specimens. These results further prove the important role of CD8+ T cells in thyroid cancer. To determine the mechanisms regulating CD8+ T cells, we investigated the molecules and pathways involved. The PI3K/Akt and PPAR pathways and the SLN and TNNT1 genes were responsible for the downregulation of CD8+ T cells. Notably, for TNNT1, we confirmed that overexpression of TNNT1 in TPC-1 cells promoted the apoptosis of CD8+ T cells. Therefore, drugs targeting these molecules may provide potential value in the treatment of thyroid cancer. The expression of immune checkpoints is important for immune escape and for treatment with ICIs. Several key immune checkpoints were repressed in PTC compared with healthy thyroid tissues, including LAG3, PD-1, PD-L2, and IDO1. We inferred that the higher expression of immune checkpoints in healthy thyroid tissues than in tumor tissues prevents immune cells from killing normal thyroid tissue. Interestingly, with the advancement of the pathological stage (especially N stage), most immune checkpoints were upregulated. Similarly, the BRAF V600E mutation was also associated with the upregulation of most immune checkpoints. In addition, most immune checkpoints were significantly upregulated in ATC compared with their expression in DTC and PDTC. On the one hand, high expression of immune checkpoints promoted the development and invasion of thyroid cancer by enabling immune escape of tumor cells; on the other hand, high expression of immune checkpoints in advanced thyroid cancer also suggests increased sensitivity to ICIs. In addition, we demonstrated close correlations among the immune checkpoints, as well as between immune checkpoints and immune cells. Chronic lymphocytic thyroiditis (CLT) is an autoimmune disease that can coexist with thyroid adenoma. Considering that the incidence of PTC combined with CLT has been increasing in recent years, we can evaluate the prognosis of thyroid cancer through the study of chronic thyroiditis and actively treat CLT to avoid carcinogenesis [35]. In the thyroid tissue of CLT patients, there are varying numbers of inflammatory cells (mainly lymphocytes) with focal or scattered infiltration, sometimes forming lymphoid follicles of different sizes with obvious germinal centers, namely tertiary lymphoid structures (TLSs) [36]. TLSs, also known as ectopic lymphoid structures (ELSs), usually refer to lymphoid structures formed in peripheral non-lymphoid organs (liver, lung, kidney) and other sites of chronic inflammation [37,38].
A TLS is composed of a T-cell region containing mature dendritic cells (DCs), B-cell follicles, high endothelial venules (HEVs), germinal centers containing follicular dendritic cells (FDCs), and so on [39]. Studies are gradually revealing the mechanisms by which TLSs contribute to the antitumor adaptive immune response and have shown that TLS density has a beneficial effect on overall survival and disease-free survival. In pancreatic cancer, lung cancer, colorectal cancer, and non-invasive breast cancer, TLSs can be applied as a predictor of overall survival [38,40,41]. However, few studies have focused on thyroid cancer. Therefore, the analysis and evaluation of the immune cell population in TLSs of thyroid carcinoma can provide a useful reference for immunotherapy. The recruitment and retention of lymphocytes into TLSs require various chemokines. We first investigated the expression of chemokines contributing to the formation of TLSs. Most chemokines were downregulated in thyroid cancer, including CCL3, CCL4, CCL15, CCL21, and CXCL13. Correspondingly, TLSs were decreased in thyroid cancer. In addition, various immune cells were repressed with the decrease in TLSs; this result illustrates the role of TLSs in promoting immune infiltration. Interestingly, the BRAF V600E mutation was also associated with a decrease in TLSs, consistent with the poor prognosis associated with this mutation. In addition to tumor cells and immune cells, there are abundant stromal cells in the TME. Stromal cells regulate immune infiltration by secreting cytokines or activating signaling pathways [42]. Through the ESTIMATE algorithm, each case of thyroid cancer was assigned a stromal score (healthy thyroid tissues were not assigned stromal scores, as the concept of tumor stroma does not apply to normal tissue). Interestingly, we demonstrated that an increase in the stromal score was associated with the advancement of the pathological stage as well as the BRAF V600E mutation; stromal cells therefore seem to indicate a poor prognosis in thyroid cancer. In addition, an increase in stromal cells also promoted the expression of various immune checkpoints, meaning that stromal cells promote immune escape of tumor cells as well as increased sensitivity to ICIs. CAFs and ECs are the two most important stromal cell types in tumors. We further investigated their roles in immune infiltration, and the two showed completely opposite characteristics, although both CAFs and ECs were increased in thyroid cancer compared with normal tissues. Upregulated CAFs but downregulated ECs were associated with the advancement of the pathological stage and the BRAF V600E mutation. In addition, CAFs promoted the expression of various immune checkpoints, whereas ECs repressed them. Similarly, CAFs promoted the infiltration of activated DCs, whereas ECs repressed it. DCs were reported to promote immune escape in PTC [27]. These results indicate that CAFs promote immune escape and the development of thyroid cancer, whereas ECs show the opposite characteristics. Immunotherapy targeting these two cell types may provide new ideas. At present, effective biomarkers for predicting the prognosis of thyroid cancer are still lacking [43]. Based on our findings, we constructed a Cox regression-based immune risk score model to predict the prognosis of thyroid cancer. The model showed promising value in predicting both survival risk and pathological stage. Finally, we also established a nomogram to visualize the model.
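The published ESTIMATE method derives the stromal score by single-sample gene set enrichment (ssGSEA) of a stromal gene signature; the following is a deliberately simplified rank-based stand-in, not the actual ESTIMATE implementation, and the short gene list here is purely illustrative (the published stromal signature is far larger):

```python
import pandas as pd

# Illustrative stromal marker genes only; not the ESTIMATE signature.
STROMAL_GENES = ["COL1A1", "FAP", "PDGFRB", "THY1", "DCN"]

def stromal_score(expr: pd.DataFrame) -> pd.Series:
    """Simplified rank-based enrichment of a stromal signature per sample.

    expr: genes x samples expression matrix. Genes are ranked within each
    sample; the score is the mean rank of signature genes, centered so that
    0 corresponds to no enrichment. This approximates, but is not, ssGSEA.
    """
    ranks = expr.rank(axis=0)  # rank every gene within each sample
    sig = ranks.loc[ranks.index.intersection(STROMAL_GENES)]
    expected = (len(expr) + 1) / 2.0  # mean rank under no enrichment
    return sig.mean(axis=0) - expected
```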
In conclusion, our study has revealed an immune infiltration map in thyroid cancer. We revealed the complex relationships among immune cells, immune checkpoints, tumor stromal cells, TLSs, prognosis, survival, the BRAF V600E mutation, pathological stage, and pathological types. Our research may provide new insights into the mechanisms of thyroid cancer development and immune therapy. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
8,296.4
2021-01-01T00:00:00.000
[ "Medicine", "Biology" ]
Leukotrienes Are Upregulated and Associated with Human T-Lymphotropic Virus Type 1 (HTLV-1)-Associated Neuroinflammatory Disease Leukotrienes (LTs) are lipid mediators involved in several inflammatory disorders. We investigated the LT pathway in human T-lymphotropic virus type 1 (HTLV-1) infection by evaluating LT levels in HTLV-1-infected patients classified according to clinical status as asymptomatic carriers (HACs) and HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP) patients. Bioactive LTB4 and CysLTs were both increased in the plasma and in the supernatant of peripheral blood mononuclear cell cultures of HTLV-1-infected individuals when compared to non-infected individuals. Interestingly, CysLT concentrations were increased in HAM/TSP patients. Also, the concentrations of plasma LTB4 and LTC4 positively correlated with the HTLV-1 proviral load in HTLV-1-infected individuals. The gene expression levels of LT receptors were differentially modulated in CD4+ and CD8+ T cells of HTLV-1-infected patients. Analysis of the overall plasma signature of immune mediators demonstrated that LT and chemokine amounts were elevated during HTLV-1 infection. Importantly, in addition to CysLTs, IP-10 was also identified as a biomarker for HAM/TSP activity. These data suggest that LTs are likely to be associated with HTLV-1 infection and HAM/TSP development, suggesting their putative use for clinical monitoring. Introduction Human T-lymphotropic virus type 1 (HTLV-1), a complex retrovirus, is the causal agent of adult T cell leukemia (ATL), HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP) and other inflammatory disorders that develop after a variable period of latency ranging between months and decades [1,2]. Although the majority of HTLV-1-infected individuals remain asymptomatic carriers (HACs), the lifetime risk of developing HTLV-1-associated diseases may be close to 10%, and the incidence of HAM/TSP ranges from 0.3% to 4% [3]. HAM/TSP is a neuroinflammatory disease characterized by a chronic progressive myelopathy with infiltrating mononuclear cells in the areas of demyelination and axonal dystrophy [4,5]. It is not clear how HTLV-1 causes neurological damage, but spontaneous T cell proliferation and proinflammatory responses characterized by elevated ex vivo production of interferon (IFN)-γ and tumor necrosis factor (TNF)-α by peripheral blood mononuclear cells (PBMCs) are associated with HAM/TSP [6,7]. In addition, patients with HAM/TSP display an increased proviral burden when compared to HACs, and high proviral loads have been associated with rapid disease progression [8-10]. To date, however, few disease markers and prognostic predictors have been described for HAM/TSP. Leukotrienes (LTs) are bioactive lipid mediators involved in inflammatory conditions [11] that may represent candidate biomarkers for HAM/TSP. Biosynthesis of LTs is triggered by stimuli such as antigens, cytokines, microorganisms and immune complexes [12]. Upon stimulation, arachidonic acid (AA) liberated from cellular membrane phospholipids through the action of phospholipase A2 (PLA2) is oxidized by 5-lipoxygenase (5-LO) in combination with 5-LO-activating protein (FLAP) to generate leukotriene A4 (LTA4). The downstream enzymes LTA4 hydrolase (LTA4H) and LTC4 synthase (LTC4S) give rise to leukotriene B4 (LTB4) and leukotriene C4 (LTC4), respectively. LTC4 is further converted to LTD4 and LTE4; LTC4, LTD4, and LTE4 are collectively termed cysteinyl leukotrienes (CysLTs).
LTB4 and CysLTs signal through distinct cell surface receptors, named BLT1 and BLT2 and CysLT1 and CysLT2, respectively [13]. Functionally, LTB4 is recognized as a potent leukocyte chemoattractant that also displays leukocyte-activating functions, whereas the CysLTs are better known for causing airway constriction, increased vascular permeability, mucus secretion and cell trafficking [14]. In addition, LTs have been shown to improve host defense against pathogens [15-18]. Considering the importance of LTs as powerful mediators of inflammation, the present study was undertaken to test the hypothesis that HTLV-1 infection leads to exacerbated formation of 5-LO products and LT signaling in patients with HAM/TSP. We examined LT concentrations in plasma, the ability of PBMCs to produce LTs and LT receptor expression in lymphocytes from HTLV-1 patients. We also investigated the overall plasma LT, chemokine and cytokine signatures of HACs and HAM/TSP patients. Moreover, we investigated the correlations between LTs, chemokines and cytokines in HTLV-1-infected individuals and the capacity of LTs to modulate cytokine production. Our results demonstrate for the first time that LTs are upregulated during HTLV-1 infection, suggesting a role for LTs in HAM/TSP pathogenesis and presenting them as potential biomarkers for monitoring HAM/TSP development. CysLTs are Upregulated in HTLV-1-associated Neuroinflammatory Disease LTs have been shown to function as inflammatory mediators [11]. To investigate whether HAM/TSP disease is characterized by elevated levels of LTs, we measured the amounts of these mediators in the plasma of HTLV-1 patients. LTB4 was increased in the plasma of HACs and HAM/TSP patients when compared to that of NI donors; however, no difference was observed in LTB4 levels between HACs and HAM/TSP patients (Figure 1A). Interestingly, HACs and HAM/TSP patients displayed increased amounts of CysLTs when compared with NI donors, but CysLT amounts were higher in the plasma of HAM/TSP patients than in the plasma of HACs (Figure 1B). Thus, although HTLV-1 induces increased concentrations of LTs in the plasma of both HACs and HAM/TSP patients, these results associate increased CysLT concentrations with HAM/TSP. In addition, we explored the correlation between HTLV-1 proviral load and plasma LTB4 (Figure 1C) or CysLTs (Figure 1D) and found a positive correlation. Thus, in infected persons, plasma LT levels are associated with the HTLV-1 proviral load in PBMCs. HTLV-1 Enhances LT Generation HTLV-1-induced LT generation was examined in PBMCs of NI donors. We found increased production of LTB4 (Figure 2A) and LTC4 (Figure 2B) when cells were challenged with cell-free virus. Next, because we found increased levels of LTs in the plasma of HTLV-1 patients, we measured LT generation by PBMCs from HTLV-1 patients. We observed increased production of LTB4 by cells from HACs and HAM/TSP patients when compared to those from NI donors, with the highest amount of LTB4 in the supernatant of cells from HACs (Figure 2C). Moreover, as expected, the generation of CysLTs was increased in PBMCs from HACs and HAM/TSP patients when compared to those from NI donors, with the highest amount of CysLTs in cells from HAM/TSP patients (Figure 2D). Next, we assessed 5-LO and LTC4 synthase expression in PBMCs.
Our results demonstrated that cells from HAM/TSP patients expressed higher levels of 5-LO than cells from NI donors or HACs (Figure 2E); however, no differences in the expression of LTC4 synthase were observed between the groups (Figure 2F). Lymphocytes from HTLV-1 Patients have Altered LT Receptor Gene Expression We next analyzed the gene expression of LT receptors by detecting BLT1 and CysLT1 expression in both CD4+ and CD8+ T cells (Figure 3). In CD4+ T cells, BLT1 expression was increased only in HAM/TSP patients relative to NI donors (Figure 3A), but CysLT1 was expressed at higher levels in both HACs and HAM/TSP patients than in NI donors (Figure 3B). Analysis of CD8+ T cells showed no differences in BLT1 gene expression among the groups (Figure 3C), whereas decreased CysLT1 gene expression was detected in HAC and HAM/TSP patient CD8+ T cells when compared to NI donor CD8+ T cells (Figure 3D). Overall Plasma Signatures of LTs, Chemokines and Cytokines in HTLV-1 Infection We next sought to characterize the immune and inflammatory mediators in the plasma of NI donors to allow for further comparative analysis of HACs and HAM/TSP patients. We assessed the overall LT, chemokine and cytokine signatures by categorizing volunteers as "low-" or "high-" mediator producers to minimize the impact of individual concentrations on the final analysis and to make the data more homogeneous. The global median index of each mediator was calculated (e.g., CysLTs = 438.9; Figure 4A). The mediator signature curves of NI donors were used as a reference to identify changes in the overall mediator signatures of HACs and HAM/TSP patients. Analysis of the HAC signatures demonstrated that LTs, the majority of chemokines (MCP-1, IL-8 and MIP-1α) and some cytokines (IL-17, IL-23, IL-4, TNF-α and IL-12) are increased when compared to the values observed in NI donors (Figure 4B). We also examined the signatures of HAM/TSP patients (Figure 4C) and found that LTs and chemokines (MCP-1 and IP-10) were increased and, in contrast to our findings in HACs, cytokines were decreased when compared to the values observed in NI donors. Additionally, high producers of CysLTs and IP-10 were more frequent in the HAM/TSP group than among HACs. In contrast, the frequency of high cytokine producers was lower in HAM/TSP patients than in HACs. Thus, our findings showed that LTs and chemokines are the prominent mediators in HACs and HAM/TSP patients. Association between Immune and Inflammatory Mediators in HTLV-1 Infection The differences in the concentrations of LTs, chemokines and cytokines between HACs and HAM/TSP patients prompted us to investigate the correlations between the concentrations of mediators in each group. Analysis of the HAC group demonstrated positive correlations between the concentrations of CysLTs and those of LTB4 and IL-13 (Figure 5A). In contrast to our findings in HACs, in HAM/TSP patients CysLT concentrations were not correlated with the amounts of other mediators, but LTB4 concentrations were positively correlated with the levels of some chemokines, including MCP-1 and IP-10, and cytokines, including IL-17, IL-23 and IL-10 (Figure 5B). Meanwhile, although no specific pattern associated with any particular type of immune or inflammatory response was observed, the expression levels of several chemokines and cytokines were correlated in HACs and HAM/TSP patients (Figure 5).
Discussion The participation of LTs in several infections [19,20] and inflammatory disorders [21,22] has long been appreciated; however, the involvement of these lipid mediators in HTLV-1 infection and HAM/TSP development has not been studied previously. Here, we report for the first time that HTLV-1 infection dysregulates the LT pathway. Our results demonstrate increased LTB4 and CysLT plasma concentrations in HTLV-1 patients, suggesting a role for LTs in several HTLV-1-associated inflammatory diseases. Furthermore, a key finding in our study was the association between plasma CysLT concentrations and HAM/TSP. The concentration of plasma CysLTs was increased more than 3-fold in HAM/TSP patients when compared to HACs. Studies have detected LTs in the central nervous system of patients with autoimmune diseases [23] and infectious diseases [24,25] and have suggested a potential pathophysiological role for these molecules. Specifically, the inhibition of 5-LO activity during experimental demyelination attenuates neuroinflammation and axonal damage [26]. Together, these observations are consistent with our results demonstrating that HAM/TSP patients display enhanced CysLT production, suggesting that these mediators contribute to HAM/TSP pathogenesis. As there is no effective therapy for HAM/TSP [27], CysLT signaling may represent a new therapeutic target. Although many investigators have concentrated their efforts on the discovery of HAM/TSP markers, previous studies have relied on ex vivo culture, and few associations have been established in vivo [28,29]. Thus, our work extends the knowledge of in vivo HAM/TSP markers by presenting CysLTs as a putative biomarker of HAM/TSP. We next tested the hypothesis that HTLV-1 proviral load is correlated with the concentration of plasma LTs. Using Pearson's correlation, we observed a positive correlation between LTB4 or CysLTs and proviral load, indicating that the concentrations of LTs in the plasma of infected individuals reflect the proviral load. However, in the present study, our data did not demonstrate a strong association between LTs and disease activity or clinical progression in HAM/TSP patients. In this pioneering investigation, we explored the complex pro-inflammatory network underlying the immunological profile of HTLV-1-infected patients to find potential biomarkers of disease activity or prognostic markers for monitoring purposes. We believe that LTs could be putative immunological biomarkers that could serve as prognostic markers or could be associated with disease activity. It is important to mention that the present investigation should be considered the first step toward the discovery of LT biomarkers for HTLV-1 infection, as further studies will be necessary to validate this hypothesis.
[Figure 2 caption (partial): PBMCs from HTLV-1 asymptomatic carriers (HAC) and HAM/TSP patients (HAM) were cultured. (E,F) Quantitative PCR (qPCR) was performed to detect 5-LO (E) and LTC4 synthase (F); relative expression levels were determined in PBMCs from NI donors, HACs and HAM/TSP patients (n = 15 per group), normalized to GAPDH mRNA in the same real-time PCR reaction. Data are presented as means ± SEM. *p<0.05, compared with unstimulated samples or NI donors; #p<0.05, compared with HACs (t-test or one-way ANOVA as appropriate). doi:10.1371/journal.pone.0051873.g002]
LTs are produced primarily by neutrophils, eosinophils, mast cells and monocytes/macrophages [30].
Among PBMCs, it is assumed that monocytes are largely responsible for LT generation. Moreover, B cells express 5-LO but do not synthesize LTs upon A23187 stimulation [31]. Notably, we found that cell-free viral particles induce LT generation by PBMCs after A23187 stimulation, suggesting that monocytes are a significant source of LTs during HTLV-1 infection. This also suggests that LT production is increased in the central nervous system of HAM/TSP patients, because infiltrating monocytes are found in the areas of demyelination [4] and may be stimulated by the virus to release lipid mediators. Importantly, HTLV-1 and the HTLV-1 Tax antigen induce another lipid mediator, PGE2 [32], demonstrating that HTLV-1 regulates both the 5-LO and the cyclooxygenase pathways of the AA cascade. Moreover, LTC4 generation by PBMCs is consistent with the elevated concentrations of plasma CysLTs in HAM/TSP patients. However, despite the low number of HTLV-1 virions in vivo, a DNA insertion analysis showed that the virions can induce the production of LTs in PBMCs in vitro without infecting these cells (data not shown). On the other hand, LTs are involved in the control of host defense, including defense against HIV [33,34]; thus, the decreased LTB4 generation observed in the PBMCs of HAM/TSP patients when compared to HACs may be insufficient to control the high HTLV-1 set points seen in HAM/TSP patients. In addition, it was interesting to note the lack of correlation between 5-LO and LTC4S gene expression and LT generation by human PBMCs. Despite this dissociation between changes in 5-LO mRNA levels and protein expression, which has been reported previously [35,36], the increased 5-LO mRNA expression in HAM/TSP PBMCs indicates positive regulation of the LT pathway. The upregulation of LT receptors has been noted previously in experimental neuroinflammatory disease and is thought to be involved in the pathogenesis of this disease [37,38]. It is noteworthy that LT receptor expression has been detected in both CD4+ and CD8+ T cells [12]. These cells are found in inflamed areas in HAM/TSP patients. Thus, we hypothesized that LT receptor expression may be increased in T cells in HAM/TSP patients. Our data demonstrate that the gene expression of LT receptors is modulated by HTLV-1 infection. Specifically, BLT1 was upregulated in CD4+ T cells from HAM/TSP patients. Meanwhile, CysLT1 was upregulated in CD4+ T cells but downregulated in CD8+ T cells of HACs and HAM/TSP patients. In animal models, deletion of BLT1 [39] and inhibition of CysLT1 [40] signaling can suppress the recruitment of inflammatory cells into the central nervous system and thus inhibit experimental autoimmune encephalomyelitis. In this regard, we speculate that high LT amounts and high LT receptor expression levels in CD4+ T cells may bias the host toward cellular infiltration of inflamed tissues, worsening HAM/TSP. Moreover, as HTLV-1 preferentially affects CD4+ T cells, the migration of non-infected CD4+ T cells to inflamed sites containing HTLV-1-infected lymphocytes could facilitate cell-cell contact and consequently the spread of infection. Studying the immunological response to HTLV-1 infection is important for the understanding of HAM/TSP pathogenesis. Others have shown that CXCL9 [41], CXCL10 [42], CCL22 [43], IP-10 [28], sCD30 [28] and IFN-γ [28] are increased in the systemic circulation of HTLV-1-infected individuals.
We hypothesized that dysregulation of the immune system is likely to be involved in the pathogenesis of HTLV-1 infection and that the clinical presentation of HAM/TSP may result from multifactorial immunological mechanisms. Impairment of the cytokine network has been found to be one of the determining factors in several human diseases. Because conventional strategies may not be suitable to capture minor changes in the immunological profile, and because of the wide range of chemokines/cytokines/LTs, in this study we employed an alternative strategy to assess the biomarker signature and to describe the dominant profiles associated with asymptomatic presentation and with HAM/TSP caused by chronic HTLV-1 infection. This panoramic overview offers additional insight into the immunological events that are relevant for clinical studies of HTLV-1 infection. This approach may allow for a better understanding of the immunological parameters that control disease outcome and provide a useful tool for prognostic monitoring. We therefore examined the concentrations of several chemokines and cytokines in the plasma of our cohort to further establish a signature curve for LTs, chemokines and cytokines. Using a signature curve of non-infected individuals as a reference, we demonstrated that LTs and chemokines are increased in the plasma of both HACs and HAM/TSP patients. The signature curves both confirm CysLTs and identify IP-10 as biomarkers of HAM/TSP. The Th1-associated chemokine IP-10 belongs to the CXC chemokine superfamily. IP-10 has been shown to be a potential marker of inflammation and disease [43-45], including HAM/TSP [28]. Furthermore, our results show that although the plasma concentrations of some cytokines are increased in HACs, the majority of the analyzed cytokines are decreased in the plasma of HAM/TSP patients. The low levels of these cytokines observed in HAM/TSP patients may reflect the attenuated inflammatory response observed in the central nervous system after a long period of HAM/TSP manifestation [46]. In contrast to these decreased cytokine levels, however, our work clearly demonstrates that some plasma mediators of inflammation remain elevated even after a long period of disease manifestation. In addition, as triggers of positive feedback regulation, LTs, chemokines and cytokines have been shown to influence one another's production [16,18,47,48]. Supporting these findings, we have shown positive correlations between plasma LT, chemokine and cytokine concentrations, but in the absence of a singular pattern of immune response. Importantly, CysLTs are increased in HAM/TSP patients, but no positive correlation was detected between CysLT levels and the concentrations of other mediators, presenting this family of lipid mediators as an independent biomarker of HAM/TSP.
[Figure 4 caption (partial): Representative scattergraphs were used to establish the concept of low biomarker producers (white) and high biomarker producers (black). The results from all groups studied (non-infected donors, HACs and HAM/TSP patients) were assembled to calculate the global median for each biomarker. Low producers were defined as having values below the global median, and high producers as having values at or above the global median cut-off. Data from 3 of 18 molecules analyzed are shown. (B-D) The diagrams were plotted using the global median index of plasma biomarkers (measured by ELISA) as the cut-off to identify each volunteer as a low (open symbol) or high (filled symbol) producer. The ascendant frequency of high producers found in the NI group was established as a reference curve to identify changes in the overall biomarker signatures of the other groups. Significant differences were defined as a shift to a distinct 25% quartile interval between the studied groups. *significantly different values compared with NI donors; #significantly different values compared with HACs. NI = non-infected healthy donors; HAC = HTLV-1 asymptomatic carriers; HAM = HAM/TSP subjects. doi:10.1371/journal.pone.0051873.g004]
In this report, we demonstrate that HTLV-1 dysregulates the LT pathway. This finding has important implications for the understanding of HAM/TSP. First, CysLTs could be used as a biomarker for HAM/TSP development. Second, the fact that LTs are mediators of inflammation suggests that the LT pathway might be involved in HAM/TSP pathogenesis. Therefore, further experiments will be required to elucidate a potentially pathogenic function of LTs in HAM/TSP. Moreover, it will be interesting to determine whether drugs targeting the LT pathway ameliorate HAM/TSP symptoms. Study Population The study subjects were classified as asymptomatic HTLV-1 carriers or HAM/TSP patients in accordance with the criteria proposed by the World Health Organization. Non-infected (NI) healthy volunteers were included as controls (Table 1). Biological specimens were obtained from HTLV-1 patients from the clinical cohort of the Neurology Department of Ribeirão Preto University Hospital, Brazil. Diagnosis of HTLV-1 infection was established by enzyme-linked immunosorbent assay (ELISA) (rp21e-enhanced EIA; Cambridge Biotech Corp.) and confirmed by PCR (tax and LTR regions). Subjects with HAM/TSP were selected from among a heterogeneous disease progression group. Individuals receiving therapies were excluded. All procedures were approved by the Ethical Committee of the University Hospital, School of Medicine of Ribeirão Preto, University of São Paulo (process number 1108/2008), and all subjects provided written informed consent. Isolation of Blood Leukocytes After separation of plasma from the heparinized venous blood, PBMCs were isolated using Ficoll-Paque (GE Healthcare) density gradient centrifugation. To isolate lymphocytes from PBMCs, magnetic beads conjugated with anti-CD4 or anti-CD8 antibodies (MiniMACS MicroBeads, Miltenyi Biotec) were used to separate CD4+ and CD8+ T cells by positive selection following the manufacturer's protocol. Phenotypic analysis performed by flow cytometry (BD FACSCanto) using anti-CD4-FITC, anti-CD8-FITC and anti-CD3-PE antibodies (BD Biosciences) demonstrated a minimum of 80% purity of CD4+ and CD8+ lymphocytes. A hemocytometer chamber was used to obtain absolute cell counts, and cell viability was determined by trypan blue exclusion. HTLV-1 Proviral Load HTLV-1 proviral load was quantified as previously described [49]. Genomic DNA samples isolated from the peripheral blood of HTLV-1-infected individuals were used to perform quantitative real-time PCR with the SYBR Green system (Applied Biosystems). The single-sample reactions for human β-actin and HTLV-1 tax were performed in duplicate on the same plate. The HTLV-1 proviral load was calculated using the following equation: (average of tax / average of β-actin) × 2 × 10^5. The values obtained were log10-transformed for the correlation analysis.
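As a worked illustration of the proviral load equation above (a minimal sketch; the numeric inputs are invented):

```python
import math

def proviral_load(tax_mean: float, actin_mean: float) -> float:
    """HTLV-1 proviral load per the equation above:
    (mean tax quantity / mean beta-actin quantity) x 2 x 10^5.
    Inputs are the duplicate-averaged qPCR quantities for one sample.
    """
    return (tax_mean / actin_mean) * 2e5

# Toy usage, followed by the log10 transform used for correlation analysis.
load = proviral_load(tax_mean=150.0, actin_mean=40000.0)
print(load, math.log10(load))
```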
Detection of Leukotriene Pathway Transcripts Total RNA was extracted from PBMCs and lymphocytes with TRIzol (Invitrogen) according to the manufacturer's instructions, and reverse transcription was carried out with 2 µg of total cellular RNA using a High-Capacity cDNA Reverse Transcription kit (Applied Biosystems). Thereafter, quantitative RT-PCR was performed using an ABI 7500 Sequence Detection System (Applied Biosystems). The reactions were performed in duplicate using TaqMan assay reagents (Applied Biosystems) (product references: 5-LO, Hs_00386528_m1; LTC4 synthase, Hs_00168529_m1; BLT1, Hs_00175124_m1; CysLT1, Hs_00929113_m1) and analyzed using 7500 System SDS software. The relative mRNA expression was determined using the ΔΔCt method. Glyceraldehyde 3-phosphate dehydrogenase (GAPDH: 4310884-E) was used as an internal control for PBMCs. The geometric means of the values obtained for β-actin (ACTB: 4326315E), GAPDH, β2-microglobulin (B2M: 4333766-0710013) and ribosomal protein L13a (RPL13a: 185720330-7) were used as internal controls for CD4+ T cells, and ACTB was used as an internal control for CD8+ T cells. Production of Cell-free Virus The human T cell line MT-2 was used as a source of HTLV-1-producing cells. For preparation of cell-free HTLV-1, cells were seeded at 5 × 10^5/mL and incubated at 37°C in a humidified CO2 atmosphere for 2 days in RPMI supplemented with 10% FBS. The supernatants were passed through a 0.45-µm filter (Millipore) to remove cells and debris, and the virions were concentrated 10 times by ultracentrifugation for 2 hours at 100,000 × g. The pellet containing virus particles was resuspended in RPMI and quantified by HTLV-1 p19 ELISA (ZeptoMetrix). Stimulation and Culture of Cells We plated PBMCs (10^6/well) in 48-well plates and maintained them overnight at 37°C and 5% CO2. The cells were provided with fresh RPMI 1640 containing 5% AB human serum (Sigma) and 100 U/mL penicillin and cultured for an additional 48 hours. For analysis of the effects of cell-free HTLV-1, cell cultures from healthy donors were challenged with 10 ng of virion particles (p19 equivalent) prior to additional culture. For leukotriene detection, the supernatants were removed, and the cells were resuspended in HBSS containing Ca2+ and Mg2+ and stimulated for 30 minutes with 0.5 µM of the calcium ionophore A23187 (Sigma); reactions were then stopped on ice. Measurement of Leukotrienes, Chemokines and Cytokines A specific enzyme immunoassay (Cayman) was used to quantify LTB4 and LTC4 in cell-free supernatants and LTB4 and CysLTs in plasma per the manufacturer's instructions. For plasma measurements, samples stored at −70°C were purified on Waters C18 Sep-Pak cartridges (Waters Associates) prior to performing the assay. Moreover, the cell-free supernatants were tested for IP-10 and TNF-α, and the plasma samples were tested for MCP-1, MIP-1α, IP-10, IL-8, IL-5, IL-4, IL-13, IL-1, IL-6, GM-CSF, TNF-α, IL-12, IFN-γ, and IL-10 using a DuoSet ELISA Development kit (R&D Systems) and for IL-17 and IL-23 using an OptEIA ELISA kit (BD Bioscience) in accordance with the manufacturer's instructions. The reactions were performed in 96-well ELISA plates (Corning), and the optical densities were determined at 450 nm using a microplate reader. The cytokine concentration in each sample was estimated by interpolation of sample optical densities against the cytokine standard using a four-parameter curve-fitting program.
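The four-parameter curve fitting mentioned above can be sketched as follows with scipy; the standard-curve values are invented, and the authors used a dedicated curve-fitting program, so this illustrates the method rather than their software:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # 4PL logistic: a = response at zero dose, d = response at infinite dose,
    # c = inflection point (EC50), b = slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standard curve: known concentrations (pg/mL) vs optical density.
conc = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500, 1000])
od = np.array([0.08, 0.14, 0.25, 0.43, 0.69, 1.02, 1.35, 1.60])

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.0, 100.0, 1.8], maxfev=10000)
a, b, c, d = params

def od_to_conc(y):
    # Invert the fitted 4PL to interpolate a sample concentration from its OD.
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.50))  # estimated concentration for a sample OD of 0.50
```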
Leukotriene, Chemokine and Cytokine Signature Analysis A method for identifying low and high producers of mediators by analyzing cytokine profiles was previously reported by Luiza-Silva et al. [50]. The concentrations of LTs, chemokines and cytokines (pg/mL) were assembled to calculate the global median index (over the pooled values of NI donors, HACs and HAM/TSP patients), and plasma samples were characterized as low- or high-mediator producers. Low-mediator producers were defined as having values lower than the global median, whereas high-mediator producers were defined as having values greater than or equal to the global median cut-off. The percentage of high producers was calculated for each analyzed molecule, and the ascendant frequency of the non-infected group was used as the reference curve to identify changes in the overall mediator patterns of all the groups. Statistics The data are presented as means ± SEM of values determined from the indicated number of samples. The data were analyzed by Student's t-tests or ANOVA with Bonferroni's post-test, as appropriate, to identify significant differences between group means using GraphPad Prism version 5 (GraphPad Software). Spearman's correlation test was performed to assess the associations between the levels (pg/mL) of LTs, chemokines and cytokines, while Pearson's test was used to analyze the association of LTs with the HTLV-1 proviral load. In all cases, statistical significance was defined as p≤0.05. The cytokine signature analyses were performed using the non-infected signature as the reference curve, and differences were considered significant when the values fell outside of the quartile of the reference signature. The use of the 50th percentile as the limit to identify relevant differences in the chemokine/cytokine/LT signatures between the groups was adapted from a pioneering study by Luiza-Silva et al. [50]. This approach has been shown to detect, with high sensitivity, putative minor changes in cytokine signatures that are not detectable by conventional statistical approaches.
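A minimal sketch of this global-median classification (a hypothetical concentration table, one row per volunteer; the pooled global median serves as the cut-off, as described above):

```python
import pandas as pd

def high_producer_frequencies(conc: pd.DataFrame, groups: pd.Series) -> pd.DataFrame:
    """conc: volunteers x mediators concentration matrix (pg/mL).
    groups: per-volunteer label ('NI', 'HAC', 'HAM').
    Returns the percentage of high producers per mediator within each group.
    """
    # Global median per mediator, pooled across all groups.
    cutoff = conc.median(axis=0)
    # High producer: value greater than or equal to the global median cut-off.
    high = conc.ge(cutoff, axis=1)
    return high.groupby(groups).mean() * 100

# Toy usage; the NI row of the output would serve as the reference curve.
conc = pd.DataFrame({"CysLTs": [300, 500, 700, 450], "IP-10": [80, 120, 200, 90]})
groups = pd.Series(["NI", "NI", "HAM", "HAC"])
print(high_producer_frequencies(conc, groups))
```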
6,276.2
2012-12-20T00:00:00.000
[ "Biology", "Medicine" ]
Decentralizing Health Care: History and Opportunities of Web3 This paper explores the relationship between the development of the internet and health care, highlighting their parallel growth and mutual influence. It delves into the transition from the early, static days of Web 1.0, akin to siloed physician expertise in health care, to the more interactive and patient-centric era of Web 2.0, which was accompanied by advancements in medical technologies and patient engagement. This paper then focuses on the emerging era of Web3, the decentralized web, which promises a transformative shift in health care, particularly in how patient data are managed, accessed, and used. This shift toward Web3 involves using blockchain technology for decentralized data storage to enhance patient data access, control, privacy, and value. This paper also examines current applications and pilot projects demonstrating Web3's practical use in health care and discusses key questions and considerations for its successful implementation. Introduction The relationship between the development of the internet and health care is a complex one, marked by parallel growth and mutual influence. As we have witnessed with the recent surge in interest surrounding generative artificial intelligence (AI), merely responding to technological innovations within the health care sector is insufficient; instead, we must encourage proactive readiness for the future. Accordingly, in this piece, we aim to explore the current trajectory of internet technologies alongside health care and emphasize the need for a shared vision of the path forward. In many ways, the evolution of the internet parallels the iterative changes that have defined health care (Table 1). The internet's earliest form was dubbed Web 1.0, or the "read-only web," and was filled with static pages, unidirectional flows of information, and minimal opportunities for engagement. This is not unlike the early days of medicine, in which siloed physician expertise left little space for patients to have a voice in their care. Web 1.0 is exemplified by the early adoption of electronic health records (EHRs), which digitized patient records for easier access, albeit in a static form. Over the past several decades, we have transitioned from Web 1.0 to Web 2.0, which emphasizes bidirectional information flows with user-generated content, social networks, and the democratization of information. In the same way, medicine has evolved in light of advances in medical technologies and a growing recognition of the critical role of patients making decisions about their own care. Web 2.0 is characterized by enhanced interactivity and user-generated content, with platforms such as patient portals adding another layer of patient engagement. Regarding public consumption, social media platforms, for instance, facilitated health communities wherein patients could share experiences and build support networks. Further, this era saw the beginning of digital information dissemination in health care. Web-based health information repositories like WebMD provide patients and health care providers with access to medical information. Thus, as a field, health care has made significant progress. However, in the context of Web 2.0, we remain bound by limited interoperability while various entities benefit from the storage, use, and sale of personal information. We are now on the cusp of a new era of the internet, and potentially a new era of health care, with the advent of Web3.

Table 1. The parallel evolution of the internet and health care.

Web 1.0 (the "read-only web")
The internet:
• Static webpages have minimal opportunities for users to engage with content on the page or with other users.
• There is a unidirectional flow of information from the web page to the user.
• It is text-based with limited media.
Health care:
• Physician-centric: the early stages of medicine were largely dominated by physicians and their expertise. The flow of information and decision-making were unidirectional, from physician to patient. Additionally, there were fewer roles within health care teams, thereby limiting the division of labor and wraparound service delivery.
• Expertise is siloed by specialty or organ system, with limited crosstalk.
• Limited patient engagement: patients had minimal input or agency in their own health care. They were often passive recipients of care.

Web 2.0
The internet:
• User-generated content: websites began to allow users to upload, modify, and share content.
• User-driven applications: the development of more advanced web technologies allowed for user-driven applications and services, like blogs, wikis, and media-sharing platforms.
Health care:
• Interdisciplinary approach: the practice of medicine has become increasingly interdisciplinary, involving teams of diverse health care professionals working together to provide care.

Web3 (the future of medicine)
The internet:
• Web3 uses machine learning, artificial intelligence, and natural language processing to understand and interpret information.
• Decentralization: data are stored on decentralized networks of computers rather than controlled by individual entities.
• User control: it is designed to give end users more control over their own data and web-based interactions. It eliminates the need for intermediaries, enabling secure, peer-to-peer interactions.
Health care:
• Data ownership: patients may have full control and ownership of their health data, which could be securely stored and accessed on decentralized networks, improving privacy and interoperability.
• Advanced health care technologies: the integration of artificial intelligence, blockchain, large language models, and other advanced tools may lead to novel health care solutions like smart contracts for health insurance, predictive health analytics, precision treatments, and so on.
• Patient autonomy: patients may have increased agency over their involvement in research and clinical care, reinforcing the shift toward truly patient-centered approaches.

Web3 is also referred to as the decentralized web, or the blockchain web, and represents the next stage of the internet, wherein data are stored on decentralized networks of computers rather than by individual, centralized entities. Web3 aims to create a more secure, transparent, and user-owned paradigm built on blockchain technology and peer-to-peer networks, which enable users to securely interact with one another without the need for intermediaries. Web3 broadly may be thought of as a "new" internet in which data and web-based interactions are owned and controlled by the end users.
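To make the storage model concrete, the following is a minimal, self-contained sketch of the tamper-evident record chaining that underlies blockchain-style storage; it is a toy illustration only, as real Web3 health platforms add distributed consensus, encryption, and access control on top of this:

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    # Deterministic hash of a record's contents.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, payload: dict) -> None:
    """Append a health-data entry linked to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"timestamp": time.time(), "payload": payload, "prev_hash": prev}
    record["hash"] = record_hash({k: v for k, v in record.items() if k != "hash"})
    chain.append(record)

def verify(chain: list) -> bool:
    """Any retroactive edit breaks the hash links, making tampering evident."""
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["hash"] != record_hash(body):
            return False
        if i > 0 and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger: list = []
append_record(ledger, {"patient": "p-001", "event": "lab_result", "value": "A1c 5.6%"})
append_record(ledger, {"patient": "p-001", "event": "consent_granted", "scope": "research"})
print(verify(ledger))  # True; editing any payload afterward makes this False
```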
Sharing of patient data initially collected for clinical care or research between health care institutions and third-party companies, where it might be used for commercial gain, is common [1]. While being cared for, patients may unknowingly agree to the deidentified sharing of their data with external entities. Explicit permission for such sharing is sometimes not required if the data are deidentified, but this nonetheless limits patient agency. As health care institutions increasingly recognize the commercial value of such data, they engage in activities to refine and monetize these data through specialized third-party organizations. For example, Truveta, founded by a consortium of health systems, exists to structure and commercialize their data assets on behalf of the health systems [2]. The trend extends to hospitals creating and divesting their own spin-off companies for specific data like genetics and annotated pathology. Moreover, companies like Komodo Health aggregate diverse data sets to develop products and services such as clinical trial planning [3]. These commercial use cases highlight the value of the data itself, and the potential for greater patient agency in deciding how their health care data are used. Web3's architecture could potentially offer more transparent and secure data management solutions for such uses. With a move to Web3, we may further shift power to patients from insurers, the government, and health systems. For health care, the advent of Web3 promises a transformative shift in how patient data are managed, accessed, and used. Central to this transformation is the use of blockchain technology, which provides a decentralized framework for data storage. This approach ensures that patient data are not confined to a single repository but are distributed across a network, thereby facilitating comprehensive and global access for patients [4]. Moreover, the incorporation of smart contracts and advanced cryptography systems automates the access process, enabling efficient and timely retrieval of data [5]. Web3 additionally empowers patients with bidirectional control over their data. With Web3 integration, patients might contribute to their health records by uploading data from personal health devices, ensuring a more comprehensive health profile that combines clinical and patient-generated data [6]. Privacy protection is a cornerstone of Web3, achieved through advanced cryptographic methods like encryption and secure multiparty computation [7]. These techniques ensure that patient data remain secure and accessible only to authorized individuals, while the immutable nature of blockchain provides a transparent record of access and modifications, enhancing data security. Additionally, Web3 opens avenues for patients to derive novel value from their health data. With proper consent, anonymized data can be used in medical research, with patients receiving compensation in digital tokens [8]. This not only incentivizes data sharing but also allows patients to actively participate in and influence medical research, aligning it with their health interests and ethical preferences. Lastly, although patient data are anonymized for researchers, patients can still access insights derived specifically from their data. These insights include potential genetic risk profiles and early cancer detection. Thus, Web3's approach to data management redefines the principles of access, control, privacy, and value, pivoting toward a more patient-centric, collaborative, and secure health care ecosystem.
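As a hedged illustration of the consent-and-compensation logic that such smart contracts could encode (a plain-Python toy, not an actual on-chain contract; all names and token amounts are invented):

```python
from dataclasses import dataclass, field

@dataclass
class ConsentContract:
    """Toy stand-in for a smart contract governing research access to one
    patient's anonymized data, compensating the patient in digital tokens."""
    patient: str
    allowed_purposes: set = field(default_factory=set)
    reward_per_access: int = 5   # invented token amount
    token_balance: int = 0
    audit_log: list = field(default_factory=list)

    def grant(self, purpose: str) -> None:
        self.allowed_purposes.add(purpose)

    def revoke(self, purpose: str) -> None:
        self.allowed_purposes.discard(purpose)

    def request_access(self, researcher: str, purpose: str) -> bool:
        # Access is auto-approved only for purposes the patient consented to;
        # every request, approved or not, is appended to the audit log.
        approved = purpose in self.allowed_purposes
        self.audit_log.append((researcher, purpose, approved))
        if approved:
            self.token_balance += self.reward_per_access
        return approved

c = ConsentContract(patient="p-001")
c.grant("diabetes_research")
print(c.request_access("lab_A", "diabetes_research"))  # True; patient earns tokens
print(c.request_access("lab_B", "marketing"))          # False; logged but denied
```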
Emerging examples and pilot projects in the health care sector are demonstrating the practical applications of Web3 technologies. For example, companies like Pfizer are leveraging blockchain technology for enhanced traceability in drug supply chains [9]. In patient data management, platforms such as Patientory (Patientory Inc) are empowering patients to store and manage their health data on blockchain, offering unprecedented control and privacy. Furthermore, health care systems are adopting smart contracts to streamline insurance processes, reducing administrative burdens and increasing transparency [5]. In scientific discovery, the field of decentralized clinical trials is also embracing Web3, with platforms like ClinTex using blockchain for secure data sharing and patient recruitment [10-12]. Moreover, the integration of Web3 in biobanks is revolutionizing the management and security of genetic and health data for study [13]. A notable instance of Web3's collaborative potential is Vibe Bio, a decentralized autonomous organization that unites patients, doctors, investors, and researchers in the pursuit of cures for rare diseases. Vibe Bio leverages blockchain for transparent and collective decision-making in research and funding, overcoming limitations related to the low sample sizes available for studies of many rare diseases [14]. Several health care standards, platforms, and techniques are actively paving the way for the implementation of Web 3.0 in health care. For example, the Solid (Social Linked Data) framework, pioneered by Sir Tim Berners-Lee (creator of the World Wide Web), seeks to facilitate a decentralized web by enabling individuals to store their personal health data in "pods" or personal data stores. This grants them the autonomy to manage and share their data securely and efficiently. This democratization of data management resonates with a vision of an open, collaborative internet [15]. Notably, many early Web3 use cases center largely around generating financial value, for example, buying and selling Bitcoin. In fact, countries such as Estonia have accelerated the Web 2.0 to Web3 transition with significant public investment and widespread adoption of Web3 within the financial sector [16]. However, we wish to focus our discussion on the technology undergirding these transactions and the unique prospect of individual ownership of data. Here, we seek to raise the promise and potential pitfalls of Web3, alongside the associated implications for health care.
Potential of Web3

Central to the discussion of Web3 is the value of personal ownership of health care data. In the current health care landscape, there is growing concern regarding the ownership and use of patient data. Typically, these data are collected and managed by health systems, service providers, and health insurers. While primarily intended for clinical purposes, there have been discussions and concerns in the public and academic domains about the potential for these data to be leveraged in ways that extend beyond these original intentions [17]. These concerns underscore the importance of clear data governance policies and highlight the ethical implications of data management in health care. The evolution toward systems like Web3 seeks to address these concerns by offering enhanced data ownership and control to patients, thus potentially mitigating the risks associated with data misuse. The global market for data monetization is projected to reach more than US $15 billion by 2030 [18]. This growth highlights the increasing value and potential revenue from data monetization, though it must be noted that business models and strategies for data monetization vary significantly across industries and companies.

While there are instances where health data may be consolidated and disseminated, potentially for profit, there exists a legal and ethical landscape governing these practices. Ethical and policy guidelines, such as those outlined by the US Centers for Disease Control and Prevention, mandate the responsible use of such data, ensuring the protection of individual privacy and adherence to confidentiality norms. For example, we might consider the UK's National Health Service (NHS). There, a national data opt-out system is in place for the secondary use of confidential health data for research and planning. However, a study revealed that there is limited public awareness of this opt-out system, with many participants being unaware that their anonymized health data could be used for secondary purposes such as research and health planning by entities outside of the NHS, including academic and commercial organizations [19].

In addition to facilitating personal ownership, Web3 offers the potential for digital transformation and an opportunity to overcome the limitations of Web 2.0. Perhaps the most appealing target lies in the reform of EHRs, originally a Web 1.0-based health care technology. Imagine a future in which any patient can view, on their cell phone, health records hosted by a decentralized network that only they can access. They open the app, and it shows not only all their records but also all the scientific papers they contributed to and the insights those papers have generated. Lastly, they can easily change the sharing access of different data points on the device itself. This level of engagement would change not just the relationship between the patient and the health care system, but also that between the general public and science and research.
Current trends in EMRs already hint at the transition toward a decentralized web, as they are evolving to become more patient-focused, with enhancements in data interoperability and security. These developments suggest a shift toward a model where patients play a more active role in managing their health care data. In the foreseeable future, EMRs are likely to further align with decentralized web concepts. This includes the adoption of blockchain technology for secure and transparent data management to streamline health care procedures and empower patient consent. Additionally, the incorporation of decentralized identity solutions in EMRs will enable secure and independent verification of patient identities, reducing dependency on centralized systems. These advancements, which are already being piloted, represent a significant step toward the next generation of web technologies in health care [20]. In contrast to traditionally siloed patient data, Web3 offers the potential of reimagining EHRs as mutable, patient-centric, and patient-owned, where linked data will allow tailored treatments that account for a patient's unique health history, genetics, environment, and lifestyle. This collective potential may be too great to ignore.

Five Key Questions to Answer

Despite its theoretical benefits, at present the decentralized web is nascent, and the benefits are largely unrealized. Before Web3 reaches widespread adoption, we believe health care leaders should consider several key questions.

How Can We Secure Privacy in a Web3 Context?

While Web3 technologies enable ownership, without privacy protections for information sharing, ownership can only go so far. Even with the enhanced security features of Web3 technologies, they are not immune to vulnerabilities. The decentralized nature of blockchain, while reducing certain types of security risks, can also introduce new vulnerabilities. These vulnerabilities may be significant, particularly in light of blockchain's immutable nature. Researchers have attempted to tackle the challenge of privacy through numerous technical approaches, though many current solutions sacrifice computational speed for privacy [21]. However, emergent solutions have begun to reach the stage of adoption [22-24]. For example, companies such as Onai achieve privacy protection through cryptographic techniques such as secure multiparty computation [25]. Broadly, compliance with legal standards like HIPAA (Health Insurance Portability and Accountability Act) and the General Data Protection Regulation (GDPR) is also a must, necessitating regular updates to the governance framework in response to evolving legal requirements. An example is the need to consider data fidelity in the context of the GDPR's "right to be forgotten," under which individuals have the right for their personal data to be erased [26]. Additionally, policies should be patient-centric, prioritizing patient needs and privacy, and include guidelines for ethical data use, especially in research.

How Will We Facilitate Change Management?
The health care field is conservative by design, and Web3-based technologies often demand a high level of technical knowledge to implement and use. Further, the centralized model of Web 2.0 is largely incapable of operating alongside Web3's decentralized approach, and so both will exist in parallel for a time. Thus, there will be challenges to seamless interoperability between blockchain systems and traditional EHRs, leading to data silos. Beyond this, existing legacy systems within health care institutions may not align seamlessly with the advanced capabilities of Web3 technologies, necessitating significant technical expertise and likely infrastructure upgrades. Practical considerations regarding infrastructure include establishing a robust network infrastructure with adequate bandwidth, setting up blockchain nodes with the requisite server capacity and storage solutions, deploying decentralized identity verification systems to securely manage patient identities, and investing in experience design for user-friendly interfaces. Accordingly, change management toward Web3 systems will require a staged approach supported by iterative stages of implementation.

How Can Web3 Scale While Maintaining an Equity Lens?

As new technologies arise and each offers novel opportunities, we must remain centered on the ethical mandates of medicine. Access to Web3 requires new tools and technical expertise that few health systems possess [27]. Thus, as technologies scale, an opportunity lies in creating an incentive schema that reimburses for equitable adoption without limiting the innovation potential. That is, payers, including insurance companies and government health programs, could establish incentive structures that financially reward health care providers for adopting and effectively using Web3 technologies. These incentives might be linked to specific metrics including, but not limited to, the level of data interoperability achieved, the extent of patient data control facilitated, or the efficiency gains in patient care and administrative processes. Similar to the meaningful use criteria established for EMRs, payers could define a set of criteria or benchmarks for Web3 implementation, offering reimbursements or bonuses to providers who meet these standards. This approach would not only accelerate the adoption of Web3 technologies across different health care settings but also ensure that their integration is aligned with improved patient outcomes and system efficiency. Additionally, there is likely to be user resistance due to the complexities of blockchain technology and the digital divide. Patients in under-resourced or rural areas, or those who are less tech-savvy, might find it challenging to engage with Web3-based health care systems, potentially exacerbating existing health care disparities.

How Can We Avoid Wasteful Spending?
The adoption of Web3 in health care has the potential to help address the persistent "grand challenge" of escalating expenditure in US health care. Examples include enabling more personalized treatment plans to reduce wasteful procedures, improving interoperability to enable efficient care delivery, and enhancing fraud prevention with transparent transactions. However, to realize these benefits, the initial adoption of Web3 must prioritize empirical, high-value principles, focusing on efficiency, appropriateness, and patient-centeredness, to avoid generating new expenses in the rollout process. The integration of Web3 into health care systems, akin to any substantial systemic transition, presents a unique opportunity to reevaluate and realign the system with its foundational principles. Web3's decentralized architecture, characterized by technologies like blockchain, offers distinct advantages that can be harnessed to enhance these principles more effectively than current systems. Most notably, Web3 technologies provide a framework that can shift the focus back to patient-centered care by giving patients greater autonomy and control over their health data. Such a systemic reorientation during the transition to Web3 not only aligns with the ongoing evolution of health care but also ensures that these fundamental principles are more deeply ingrained in the fabric of health care systems. Through this transition, as described previously, it will be important to curtail costs. The scalability and performance of blockchain in processing large data volumes, given its limitations in managing high-throughput data, must be assessed [27]. This scalability challenge could lead to slower transaction processing times and increased costs, potentially hindering the widespread adoption of Web3 in large health care systems.

How Can the Health Care System Support Policy Makers in Preparing for the Arrival of Web3?

As a field, we must acknowledge that policy regulation is unlikely to keep pace with technological innovation. Accordingly, legal clarity and guardrails will be necessary to ensure that industry players, health systems, and academic experts collaborate to create shared endpoints that center on consumer protection, data privacy, and improved care. With fast-growing interest in generative AI and the creation of large language models specific to health care, questions of data ownership, use, and incorporation into new Web3-based tools bring new challenges that have yet to be addressed at scale.
Overview

In the process of transitioning to Web3 systems in health care, a phased approach is paramount for effective change management, implementation, and adoption. Integrating Web3 technologies into health care systems begins with a thorough assessment of existing IT infrastructure to identify areas needing upgrades or changes for Web3 compatibility, which is already a prerequisite for any health system addressing cybersecurity and regulatory compliance. Implementation will also require data migration and seamless integration with existing health care databases and applications, a step crucial for maintaining data continuity and integrity. Further, it will be essential to define clear rules regarding data ownership, explicitly outlining patient rights and access conditions. Incorporating blockchain technology can facilitate granular control, allowing patients to specify access permissions and thereby creating an auditable trail of data access. This initial phase may be followed by defining specific use cases where Web3 can add significant value, such as in patient data management, supply chain transparency, or facilitating research collaborations. These use cases help to focus the direction of Web3 implementation.

For a health system, strategic planning initially forms the cornerstone of this process. After prioritizing Web3 opportunities, a detailed implementation roadmap with a sufficient budget must be developed. Stakeholder engagement should include technology experts and vendors, health care providers, IT staff, and patients.

Throughout this process, it will be necessary to scaffold the rollout with clinical staff training. As Web3 introduces advanced technologies like blockchain and smart contracts, health care providers must undergo specialized training to become adept in these new systems, focusing on a basic technical understanding and practical clinical application. This shift will also lead to changes in clinical workflows; for instance, the processes for accessing and sharing patient data will evolve, necessitating adaptations to new data retrieval and sharing protocols. Health care professionals will need to adapt to more dynamic decision-making processes due to the real-time nature of data updates on blockchain platforms. Additionally, the accuracy and accessibility improvements provided by Web3 could boost clinical efficiency but will require new competencies in data management. Crucially, the patient-centric model of Web3, which grants patients greater control over their data, will transform patient-provider dynamics, placing a greater emphasis on shared decision-making and patient engagement.

The challenges of enforcing privacy protection laws in the Web 3.0 era, where patient control over health care data is paramount, may be addressed through a combination of approaches. These include implementing robust encryption, smart contracts, and security protocols to safeguard patient data against unauthorized access, a measure that gains importance given extant and ongoing challenges with health care data breaches.
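To make the notion of patient-specified permissions with an auditable access trail concrete, the following is a deliberately simplified Python sketch of an append-only, hash-chained consent log. It illustrates the concept only; it is not any production blockchain or smart contract API, and all class and method names are hypothetical.

```python
import hashlib
import json
import time

class ConsentLedger:
    """Toy append-only, hash-chained log of data-access events.

    Illustrative only: a real Web3 system would replace this with a
    smart contract executing on a distributed ledger.
    """

    def __init__(self):
        self.permissions = {}  # (patient_id, requester_id) -> set of data types
        self.chain = []        # list of hash-linked event records

    def grant(self, patient_id, requester_id, data_types):
        # The patient grants a requester access to specific data types.
        key = (patient_id, requester_id)
        self.permissions.setdefault(key, set()).update(data_types)
        self._append({"event": "grant", "patient": patient_id,
                      "requester": requester_id, "types": sorted(data_types)})

    def request_access(self, patient_id, requester_id, data_type):
        # Every request, allowed or denied, is written to the audit trail.
        allowed = data_type in self.permissions.get((patient_id, requester_id), set())
        self._append({"event": "access", "patient": patient_id,
                      "requester": requester_id, "type": data_type,
                      "allowed": allowed})
        return allowed

    def _append(self, record):
        record["time"] = time.time()
        record["prev_hash"] = self.chain[-1]["hash"] if self.chain else "genesis"
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.chain.append(record)

ledger = ConsentLedger()
ledger.grant("patient-1", "research-lab-A", {"genomics"})
print(ledger.request_access("patient-1", "research-lab-A", "genomics"))  # True
print(ledger.request_access("patient-1", "insurer-B", "genomics"))       # False
```

Because each record embeds the hash of its predecessor, retroactively altering an earlier record invalidates every later hash, which is the property that makes such a trail auditable.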
To monitor such breaches, comprehensive incident response plans may be developed by technology platforms alongside IT leaders. At a policy level, the development of legal frameworks specifically designed for decentralized data management in health care will be central to providing clear guidelines on liability and actions in the event of a data breach. Finally, the launch of the Web3 system should be coupled with ongoing postimplementation support to address any technical issues or concerns, ensuring the long-term effectiveness and efficiency of Web3 technologies in the health care environment.

Table 1. Parallels between the evolution of the web and medical practice.

• Limited technology: the use of technology in health care was limited or rudimentary, with only foundational electronic health record platforms. Technology primarily focused on direct patient care, with little emphasis on data management or communication.
• Social networks: the emergence of social networking platforms allowed users to connect, collaborate, and share information.
• Integration of technology: novel technologies have become deeply integrated into health care, leading to the development of telemedicine, patient portals, and wearable technologies.
Synthetic Data in Quantitative Scanning Probe Microscopy

Synthetic data are of increasing importance in nanometrology. They can be used for development of data processing methods, analysis of uncertainties, and estimation of various measurement artefacts. In this paper we review methods used for their generation and the applications of synthetic data in scanning probe microscopy, focusing on their principles, performance, and applicability. We illustrate the benefits of using synthetic data on different tasks related to the development of better scanning approaches and to the estimation of the reliability of data processing methods. We demonstrate how synthetic data can be used to analyse systematic errors that are common to scanning probe microscopy methods, whether related to the measurement principle or to the typical data processing paths.

Introduction

Scanning probe microscopy (SPM) is one of the key techniques in nanometrology [1-3]. It records the sample topography, possibly together with other physical or chemical surface properties, using forces between the sharp probe and the sample as the feedback source. SPM has an exceptional position among nanometrological measurement methods when it comes to topography characterisation. Apart from its versatility and minimal sample preparation needs, its main benefit in nanometrology is its simple metrological traceability compared to some other microscopic techniques. Achieving a very high spatial resolution is, however, a demanding task, and instruments are prone to many different systematic errors and imaging artefacts. The goal of nanometrology is to provide metrological traceability, i.e., an unbroken chain of calibrations starting from top-level etalons and going down to the microscopes. An important part of this task is expressing the measurement uncertainty, which means understanding these systematic errors and artefacts, and which is one of the crucial aspects of the transition from qualitative to quantitative measurements. Measurement uncertainty in microscopy consists of many sources, and to evaluate them we usually need to combine theoretical and experimental steps. This includes measurements of known reference samples and estimation of different influences related to the environment (thermal drift, mechanical and electrical noise), but also estimation of the impact of data processing, as the raw topography (or other physical quantity) signal is rarely the desired quantity. One of the approaches to analysing the uncertainty is to model the imaging process and data evaluation steps on the basis of known, ideal data. Such an approach can be used at different levels of uncertainty analysis: at the whole-device level it is related to virtual SPM construction [4,5], which tries to incorporate all instrumentation errors into a large Monte Carlo (MC) model for uncertainty propagation. There are, however, many finer levels at which ideal, synthesised data can be used, and these are becoming more popular. Since one of the software tools that can be used for both SPM data synthesis and analysis, the open-source software Gwyddion [6,7], was developed by us and is already being used by different authors for artificial data synthesis tasks, we would like to review the state of artificial data use in SPM. Artificial data can be used for a multitude of purposes in the SPM world.
Starting with the measurement aspects, they can be used to advance scanning methodology, for example adaptive scanning, as in the Gwyscan library [8], which focuses on sampling data for optimal collection of statistical information about roughness. Similarly, generated data can be used for the development of even more advanced sampling techniques, e.g., based on compressed sensing [9,10]. In contrast to measured data, generated datasets allow estimating the impact of different error sources on algorithm performance in a more systematic manner. Similarly, generated data were used for analysis of uncertainties related to the instrumentation, such as better understanding of effects related to the feedback loop [11] in tapping mode measurements. Going even further in the instrumentation direction, generated data can be used to create novel and advanced samples or devices for metrology purposes. Generated data were used for driving a calibration platform that mimics the sample surface by moving up and down without lateral movement, either for creating a virtual step height or a defined roughness sample [12,13]. Such an approach is one way to provide traceability to commercial microscopes that have no built-in interferometers or other high-level metrology sensors. Synthetic surface models were also used to create real samples with known statistical properties, e.g., self-affine surfaces designed for cell interaction studies using two-photon polymerisation [14], or isotropic roughness designed for calibration purposes using a focused ion beam [15,16].

The largest use of artificial data is, nonetheless, in the analysis of data processing methods, where they can serve for validation and debugging of the methods and for estimating their sensitivity and reliability. They were used to test data processing methods and to determine uncertainties related to the imaging process, namely tip convolution and its impact on different quantities. The impact of tip convolution on the statistical properties of columnar thin films [17,18], the fractal properties of self-affine surfaces [19,20], and the size distribution of nanoparticles [21] was studied using entirely synthetic data. Novel approaches for tip estimation using neural networks were developed using artificial rough data [22], and methods for using neural networks for surface reconstruction to remove such tip convolution errors were developed using simulated patterned surfaces [23]. Algorithms for double tip identification and correction in AFM data were tested using synthetic data with deposited particles of different coverage factors [24]. Simple patterns combined with Gaussian roughness were used for the development of non-local means denoising for more accurate dimensional measurements [25]. Combined with real measurements, synthetic data were used to establish a methodology for evaluating spectral properties of rough surfaces [26] and for determining spectral properties from irregular regions [27]. Synthetic data were used to help with interpretation of results and for finding the relationship between mound growth and roughening of sputtered films [28]. A combination of real and synthetic datasets was used for determination of the impact of levelling on roughness measurements [29] and for development of methods for separating the instrument background from real data in the analysis of monoatomic silicon steps [30]. They were also used to develop methods for grating pitch determination [31] and reliability measures for SPM data [32].
In the area of data fusion, they were used for the construction of methods for low- and high-density data fusion in roughness measurements [33]. Even more general work was related to the impact of human users in the SPM data processing chain on measurement uncertainty [34]. Artificial data can also be used to estimate the impact of topography on other SPM channels. Most of the techniques used for measuring physical quantities other than length are influenced by local topography, creating so-called 'topography artefacts'. To study them one can create a synthetic surface, run a numerical calculation of the probe-sample interaction and simulate what the impact of a particular topography on other data sources would be. This approach was used for simulation of C-AFM on organic photovoltaics on realistic morphologies similar to experiment [35], for simulation of topography artefacts in scanning thermal microscopy [36] and for simulation of the impact on lateral forces in mechanical data acquisition [37]. In this work we review the methods used for generation of synthetic data suitable for different SPM-related tasks and give examples of how these methods can be run in the Gwyddion open source software. Many of the presented methods are more general and can be applied also to the analysis of other experimental techniques based on interaction with surface topography, for example to study the impact of line edge roughness in scatterometry of periodic structures [38].

Artificial SPM Data

Artificial SPM data can be produced by many different procedures. Some of them attempt to capture details of the physical and chemical processes leading to the surface topography formation. Others are designed to mimic the final shape of the structures without any regard to the underlying processes. Frequently the algorithms lie between these two extremes, trying to preserve a physical basis while being fast enough for practical purposes. In the following we group the methods into several broad classes, more or less corresponding to the character of the generated data and the types of phenomena simulated. We give particular attention to models implemented as Gwyddion modules; this is noted by giving the module name in italics. Results of the data synthesis algorithms presented below are usually height fields: regular arrays of height data in which for each coordinate pair (x, y) there is only one z value. This is different from general 3D data, but it is also the standard SPM measurement output.

Geometrical Shapes and Patterns

Well-defined geometrical objects are frequent in nanotechnological samples, whether coming from microelectronics, MEMS or other fields. They need to be measured and reconstructed in SPM simulations. Both can be done within a single framework. Fitting shapes to measured topographical data is the inverse (i.e., harder) problem to their construction from given parameters. Therefore, construction comes more or less free with shape fitting. This is also the approach taken in Gwyddion, where the fit shape module can also be used directly to create artificial AFM data representing the ideal topographies. Modelling of ideal shapes like steps, trenches, pyramids or cylinders usually involves only elementary geometry, and the model is just z = f(x, y), where f is an explicit function such as z = (x² + y²)^(1/2) for a cone (cones with other parameters are then created by coordinate transformations, such as rotation, scaling, or folding).
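As a minimal illustration of this explicit-function approach (a sketch assuming numpy, not the actual Gwyddion implementation), the following evaluates a cone height field on a pixel grid and applies a simple coordinate transformation:

```python
import numpy as np

def height_field(f, size=256, extent=1.0):
    """Evaluate an explicit model z = f(x, y) on a regular pixel grid."""
    coords = np.linspace(-extent, extent, size)
    x, y = np.meshgrid(coords, coords)
    return f(x, y)

# Cone model: z = (x^2 + y^2)^(1/2), clipped to a finite height.
cone = lambda x, y: np.minimum(np.hypot(x, y), 0.5)

# A transformed cone: anisotropic scaling plus rotation of the coordinates.
def transformed_cone(x, y, angle=0.3, sx=2.0, sy=1.0):
    xr = np.cos(angle) * x - np.sin(angle) * y
    yr = np.sin(angle) * x + np.cos(angle) * y
    return cone(sx * xr, sy * yr)

z = height_field(transformed_cone)
print(z.shape, z.min(), z.max())
```

Faceted or overlapping features can then be composed from such primitives by pointwise minima or maxima, as described next.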
Overlapping neighbour features and faceted shapes can be modelled using f = min_i f_i or f = max_i f_i, where the f_i are simpler functions (i = 1, 2, ...). Still, some geometrical models can become rather involved, for example those for rounded indentation tips [39].

The microscope tip is an extremely important geometrical shape in SPM. Widely used models include a cylinder with cone and spherical cap [40], a pyramid with spherical cap [41], a hyperboloid [42,43], and a paraboloid [44]. Parametrisable tip models are needed namely when mechanical properties are evaluated from force-distance curve measurements; models such as the linearly interpolated tip or the n-quadratic sigmoidal tip are then used [45]. Different tip models can be created using the Gwyddion model tip module. More irregular tips can be produced by using any other synthesis module, e.g., particle deposition, and cutting out a suitable part of the generated surface. Although true geometrical shapes are used in detailed simulations [46], commonly the tip shape is discretised, i.e., approximated as a height field, in particular when subsequent operations are defined in the pixel representation [47-49].

The patterns often have certain dimensions which are precise and intended for calibration, whereas other dimensions are not guaranteed. For instance, usually the period of a grating (or its inverse, the pitch) is specified. Its other parameters, such as fill ratio, height, or side slopes, are unspecified. Artificial data generation methods need to reflect this. Gwyddion's pattern module creates regular patterns with exactly specified periods. Other parameters, such as width, height, slope, or placement of features, can also be perfectly regular, or varied, but the variation is local, not disturbing the pitch. A subset of patterns corresponding to common calibration samples is illustrated in Figure 1. Realistic shapes require adding defects such as surface and line roughness. These will be discussed in Section 2.4.

Figure 1. Selected examples of regular geometrical shapes generated by the pattern module (Siemens star, pillars, holes, grating, amphitheatre). The top row (Ideal) depicts the ideal models. The middle row (Realistic) shows slightly non-ideal shapes exhibiting variability, line roughness or deformation. The last row (Extreme) illustrates the expressive power of the models using extreme variability and odd settings.

Finally, it is useful to generate regular lattices, for instance to represent atomic lattices. Of course, a lattice alone is not sufficient to simulate a technique such as STM. Solid state physical computations are necessary (generally DFT-based) [52-54], together with the Tersoff-Hamann approximation for the STM signal [55]. However, the investigation of the behaviour of a data processing method may not require ab initio results as input data, and lattices can be useful in other contexts. Artificial data can then be produced by first generating points corresponding to regular and semi-regular tilings, Penrose tilings [56], the Si(111) 7 × 7 surface reconstruction or any other interesting two-dimensional pattern. The point locations can be randomly or systematically disturbed. The actual neighbourhood relations are then obtained using Delaunay triangulation [57,58] and Voronoi tessellation [58,59], and quantities such as the distance from the nearest point or the nearest boundary are used to render the image (lattice, Figure 2).
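A minimal sketch of this point-based rendering idea, assuming numpy and scipy (the real lattice module is more elaborate): generate perturbed lattice points and render each pixel using its distance to the nearest point.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
size, spacing = 256, 16

# Regular square lattice points with small random disturbances.
gx, gy = np.meshgrid(np.arange(0, size, spacing), np.arange(0, size, spacing))
points = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
points += rng.normal(scale=1.5, size=points.shape)

# For every pixel, find the distance to the nearest lattice point.
px, py = np.meshgrid(np.arange(size), np.arange(size))
pixels = np.column_stack([px.ravel(), py.ravel()])
dist, _ = cKDTree(points).query(pixels)

# Render 'atoms' as bumps decaying with distance from the nearest site.
image = np.exp(-(dist.reshape(size, size) / 4.0) ** 2)
```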
Deposition and Roughening

Random roughness and nearly stochastic surface textures are ubiquitous in materials science, as they can arise from almost any processing, for instance deposition, etching, mechanical contact, or crystallisation. The influence of roughness increases at the nanoscale, where its characteristic dimensions can be comparable to the dimensions of the objects, and it can even become the dominant effect influencing surface properties [60-64]. Frequently it is crucial to include it in simulations. The importance of surface roughness is reflected by the extensive literature published on this topic, including many approaches and algorithms for simulation of roughening processes [65,66].

Models of the growth of rough surfaces during deposition are among the most studied. At the nanoscale, deposition is probably the most common roughening process (whereas at the macroscale subtractive processes such as machining are more common). Roughness growth models are also of significant theoretical interest, as the scaling exponents are related to the underlying physical processes [65]. Most practical growth models are discrete, i.e., realised on a grid, usually one matching image pixels, and formulated in terms of individual pixel values; some can even be considered cellular automata. The height dimension can be treated differently with respect to discretisation, value scale, etc. The distinction between 3D and 2+1-dimensional models is not always clear in this case. Still, they generate topographical images, i.e., height fields. The second major difference from the previous section is that most models considered in this and the following sections are inherently random. This can mean growth and roughening simulation using stochastic partial differential equations (PDEs), such as the Kardar-Parisi-Zhang (KPZ) equation [65,67-69]

∂_t z = ν Δz + (λ/2)(∇z)² + η.

Parameters ν and λ characterise the surface tension and lateral growth; η is uncorrelated white Gaussian noise. A KPZ modification known as the Kessler-Levine-Tu (KLT) model [70] has been used for simulation of the etching process producing rough light-trapping surfaces [71]. However, even more commonly the model is an MC simulation of some kind of process, at least in a loose sense. Both approaches produce random instances of surfaces by sampling a probability space. The simplest MC deposition model is random deposition [65,72], in which small particles fall independently onto the surface at random positions, increasing the height at that position. It produces uncorrelated noise with a Gaussian distribution, which can be easily generated directly (see also the noise models in Section 2.4). When the particles are allowed to relax to the lowest neighbour position, lateral correlations appear, nevertheless with scaling exponents α = β = 0 in 2D (the scaling is logarithmic). The simplest classical model with an interesting behaviour is thus ballistic deposition (Ballistic) [65,73,74], in which the particles immediately stick to the surface at the first contact; see Figure 3a for an illustration. The particle positions are generated randomly with a uniform distribution over the area. Although ballistic deposition is simple, it produces self-similar surfaces in the same universality class as the KPZ equation and has been used for modelling of colloidal aggregates [75]. It can also be seen as the base for other models.
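The ballistic deposition rule is simple enough to state in a few lines of code. Below is a minimal 1+1-dimensional sketch assuming numpy (the Gwyddion module is a full 2+1-dimensional implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def ballistic_deposition(width=512, n_particles=100_000):
    """1+1-dimensional ballistic deposition with periodic boundaries:
    a particle dropped on column i sticks at first contact, i.e., at
    the maximum of its own column height + 1 and its neighbours."""
    h = np.zeros(width, dtype=int)
    for _ in range(n_particles):
        i = rng.integers(width)
        h[i] = max(h[i] + 1, h[(i - 1) % width], h[(i + 1) % width])
    return h

profile = ballistic_deposition()
print(profile.mean(), profile.std())  # the interface width grows with coverage
```

Replacing the sticking rule with relaxation to the lowest neighbour turns this into the random-deposition-with-relaxation model mentioned above.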
Models considering additional particle behaviour after it touches the surface have successfully reproduced a number of real phenomena. A variety of models have been proposed for molecular beam epitaxy, with different relaxation rules, taking into account diffusion and possibly desorption [65,76,77]. The growth of columnar films can be reproduced if the incident particles do not fall vertically, but at random oblique angles, creating a shadowing effect (columnar) [65,74,78]. After hitting the surface, the particle relaxes locally to a lower, energetically preferable position (Figure 3b; see also Figure 7 for an example).

Figure 3 (caption). (b) Oblique incidence in columnar film growth creates a shadowing effect, as the incoming particle encounters the tallest column (at A'); it then relaxes to an energetically preferable site. (c) In the top view of a diffusion-limited aggregation simulation, particle A has to break one bond to move in the indicated direction while B has to break two; particle C has to pass the Schwoebel barrier to move to the second layer.

On the other hand, if particles can travel long distances across the surface to find an energetically preferable site, this corresponds to the DDA type of models (deposition, diffusion, and aggregation), which reproduce structures seen in sub-monolayer deposition, as well as other structures formed by particle aggregation [65,67,74]. In the diffusion-limited aggregation (DLA) model, simulated particles can hop between surface sites, facing a barrier E_0. Neighbour particles increase the barrier by an additional energy E_N, making dimers and larger clusters unlikely to break (Figure 3c). Using a Metropolis-Hastings type algorithm [79,80], the effect of energy barriers is the reduction of hopping probabilities by a factor exp(−ΔE/k_B T) for a barrier ΔE > 0, with k_B and T being the Boltzmann constant and temperature. The atomistic simulation can also include a non-zero probability of passing the Schwoebel barrier, allowing particles to move between layers [81,82] (diffusion, Figure 2).

Even the models mentioned in the previous paragraph often exhibit different sub-monolayer and multilayer growth regimes, with a transition between them. The spectrum of phenomena observed in sub-monolayer and few-layer film growth is surprisingly rich, and a variety of models have been used to study specific processes, such as island ripening processes and roughening transitions [65,83-86].

The deposition models have inspired several other random surface texture generation methods. A texture formed by random protrusions of a given shape is generated by the following method (objects, Figure 2). A shape with finite support, for instance a pyramid, is generated at a random location. The surface minimum m over its support Ω is found, m = min_{i∈Ω} z_i, and the heights are then updated as z_i ← max(z_i, m + h_i), where h_i is the pyramid height at pixel i. The procedure is repeated until a given coverage by the shapes is reached. This model has been used for instance for the modelling of pyramidal solar cell surfaces and can be used to reproduce the textures of TG1 or PA series tip sharpness calibration samples. For single-pixel features the model is equivalent to random deposition, but larger features give rise to lateral correlations. Numerical simulations in 2+1 dimensions suggest scaling exponents α = 1/2 and z = 2, which differ from both random deposition with relaxation and KPZ. Nevertheless, in practice the model is used in the sub-monolayer regime up to a few layers. Several other models follow the same scheme of choosing a random object and location, as sketched in the example below.
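A minimal numpy sketch of this protrusion scheme, assuming a square-based pyramid as the deposited shape (parameter choices are illustrative, not those of the Gwyddion objects module):

```python
import numpy as np

rng = np.random.default_rng(2)

def pyramid(side):
    """Square-based pyramid height profile with the given (odd) side."""
    r = side // 2
    x, y = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    return (r - np.maximum(np.abs(x), np.abs(y))).astype(float)

def deposit_objects(size=256, side=15, n_objects=2000):
    z = np.zeros((size, size))
    shape = pyramid(side)
    r = side // 2
    for _ in range(n_objects):
        # Random location such that the support fits inside the image.
        cx, cy = rng.integers(r, size - r, size=2)
        sl = np.s_[cy - r: cy + r + 1, cx - r: cx + r + 1]
        m = z[sl].min()                       # surface minimum over the support
        z[sl] = np.maximum(z[sl], m + shape)  # place the object on top of it
    return z

surface = deposit_objects()
```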
In these models the surface is modified by the placed object according to a local rule; usually the height increases, but holes can be created instead for 'etching'. A slightly more sophisticated version, which places real 3D shapes instead of 2D functions, for instance ellipsoids or rods, but still simply sticking them to the place they touch, has been implemented in Gwyddion (pile up, Figure 7). An actual physical simulation of interacting 3D objects is used to reproduce self-organised conglomerates formed by the settling of larger particles on the surface [21]. In this case the 3D objects are relaxed using an integration of the Newton equations similar to a molecular dynamics simulation [87]. Interactions between particles, and between a particle and the substrate, are modelled using the Lennard-Jones potential. An Andersen thermostat is used to simulate the Brownian motion of the particles; in addition, the nanoparticle velocities are damped during the computation to simulate the decreasing mobility. The Verlet algorithm is used to integrate the Newton equations. By stopping the algorithm before convergence it is possible to simulate a partial relaxation, often observed in practice. This model is used in the particles and rods modules in Gwyddion (Figure 2 and also Figure 7). In the case of rod relaxation, each rod is represented as a rigid configuration of three spheres, using the Settle algorithm [88], which is sufficient for simulation of the behaviour of rods of small aspect ratio.

Order and Disorder

A different family of models is used to represent contrast patterns in non-equilibrium systems with spontaneous symmetry breaking and long-range organisation. Typical examples include waves in excitable media [89], which produce characteristic patterns found in diverse systems such as chemical reactions [90,91], vegetation patterns [92], propagating flame fronts [93] or cardiac tissue [94] (Figure 4); or static Turing patterns that play a role in developmental biology [95-97] (Figure 4). More directly relevant for nanometrology are the patterns of magnetic domains in multilayers used as reference samples in magnetic force microscopy [98-101] or the morphology of phase separation [35]. We consider all models related to self-organisation, order-disorder transitions, phase separation and related phenomena to be part of this family.

Figure 4 (panels): Hybrid Ising, Coupled PDEs, Phase separation, Direct (ordered), Direct (disordered).

A distinct feature of this class is that the data fields are non-topographical. Whether the values represent chemical concentrations or spin orientations, the computations occur in the xy plane with no notion of the third dimension. An important classical model is quenched disorder in a regular solid solution, known also under many other names, such as the Ising or lattice gas model [102-107]. It can reproduce patterns forming due to separation of phases or domains (a similar approach is also used to simulate the morphology of chains in polymer-blend films [108]). On cooling from a high temperature, at which the system is in a disordered state, long-range correlations start to appear as it nears the critical temperature T_c. A phase transition to ordered domains would occur at T_c (in dimensions D > 1, whereas for D = 1 a gradual change is typical [105]). However, if the cooling is fast, the energy barriers can become large compared to k_B T before the transition finishes and the system is frozen in an intermediate state.
The patterns can be generated using simulated annealing, a Metropolis-Hastings type algorithm [79,80,109] (annealing, Figure 4). Each image pixel is in one of two (or possibly more) states. Random transitions, such as swapping two neighbouring pixel states, occur with probability min{1, exp(−ΔE/k_B T)} if the configuration energy increases by ΔE. The slow convergence for low T has motivated the search for alternative algorithms (some of which are mentioned below).

For the formation of the eponymous patterns, Turing originally proposed a reaction-diffusion model [95]. However, since systems with local activation and long-range inhibition (LALI) are in general capable of forming such patterns [97,110,111], many other models have been presented over the years which exhibit similar behaviours. The standard modelling approach is coupled PDEs, which for the reaction-diffusion model can be written

∂_t c = D ∇²c + f(c),

where c is the vector of component concentrations, D is a diagonal matrix of diffusion coefficients and f is a non-linear function describing the reaction kinetics (coupled PDEs, Figure 4). Two components are sufficient to reproduce a wide variety of interesting phenomena, although some types of behaviour require three or more components [112]. Alternative methods for pattern production have been proposed, often aiming to improve the efficiency of the long-range inhibition simulation. A so-called kernel-based Turing model replaces the PDE with an explicit activation-inhibition kernel convolved with the concentration variable [97]. Hybrid LALI models employ combinations of differential equations and cellular automata or other local discrete rules [113-115]. A hybrid non-equilibrium Ising model can be constructed by combining a discrete short-scale Ising model for a two-state variable u with a continuous slow inhibitor v described by a differential equation [113]. Variable u is updated using the standard scheme, with flipping probability min{1, exp(−ΔE/k_B T)}/2. The state energies E are computed from the number of different neighbours n, but are also biased through the coupling term, as E = Buv + Jn, where B determines the bias and J the interaction strength. Depending on the effective inhibitor diffusivity, v can be described either by a linear reaction-diffusion partial differential equation with macrogrid averaging (fast diffusion), or by a local ordinary differential equation (non-diffusing inhibitor). In the second type (domains, Figure 4), v follows τ dv/dt = −v − ν + μu, where ν and μ are the bias and inhibitor strength, and τ is the characteristic time. This defines the relative timescales of u and v as they are alternately updated. The model has several regimes and can reproduce both spiral waves and phase-separation-like patterns.

Extensive simulations require generating large amounts of artificial data, and models involving any kind of time evolution may be too time-consuming. Depending on the application, fast models which abandon the simulation path and just directly reproduce the basic features of the patterns may be preferable. An example is the model mimicking the Turing-pattern-type textures of MFM calibration samples [100]. It is based on their peculiar frequency spectra, in which one spatial frequency strongly dominates due to the Turing instability [95]. The construction has two steps. Synthesis in the frequency domain provides data with the narrow frequency spectrum. Morphological post-processing then refines the local morphology to resemble the real patterns more closely (phases, Figure 4).
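A minimal sketch of the Metropolis-type annealing described above, assuming numpy (a toy two-state lattice gas with conserved composition; the interaction strength and temperature are illustrative, and a pure Python loop like this is slow compared to the real annealing module):

```python
import numpy as np

rng = np.random.default_rng(3)

def local_energy(s, y, x):
    """Energy of site (y, x): number of unlike nearest neighbours
    (interaction strength J = 1, periodic boundaries)."""
    n = s.shape[0]
    nb = s[(y - 1) % n, x] + s[(y + 1) % n, x] + s[y, (x - 1) % n] + s[y, (x + 1) % n]
    return 4 - nb if s[y, x] == 1 else nb

def anneal(n=128, steps=500_000, kT=0.5):
    s = rng.integers(0, 2, size=(n, n))  # two-state pixels, random start
    for _ in range(steps):
        y, x = rng.integers(n, size=2)
        y2 = (y + rng.integers(-1, 2)) % n
        x2 = (x + rng.integers(-1, 2)) % n
        if s[y, x] == s[y2, x2]:
            continue
        e_before = local_energy(s, y, x) + local_energy(s, y2, x2)
        s[y, x], s[y2, x2] = s[y2, x2], s[y, x]   # trial swap
        e_after = local_energy(s, y, x) + local_energy(s, y2, x2)
        dE = e_after - e_before
        if dE > 0 and rng.random() >= np.exp(-dE / kT):
            s[y, x], s[y2, x2] = s[y2, x2], s[y, x]  # reject: swap back
    return s

pattern = anneal()
```

A fast quench (low kT) freezes the system in a labyrinthine intermediate state resembling phase-separation patterns, while slower cooling produces larger ordered domains.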
Instrument Influence

A special class of data synthesis methods models the various artefacts related to the measurement principle and measurement process. In the SPM world the most prominent type of modification is so-called tip convolution, a distortion of the measured morphology related to the fact that the SPM probe is not infinitely sharp. The resulting dataset is a convolution (mathematically, a dilation) of the probe and sample [47]. For a known probe the effect can be simulated using algorithms presented in Reference [47], producing a simulated AFM result from true (ideal) topographical data.

Thermal drifts are present in nearly all SPMs [116]. They are related to thermal expansion of different microscope components before thermal equilibrium is reached after instrument start, or when the temperature in the laboratory is not sufficiently stable. They can be simulated by adding x, y and z drift components to the simulated data. Another source of distortion are scanning system imperfections. In open-loop systems SPM scanners are subject to systematic errors related to piezoelectric actuator hysteresis, non-linearity, and creep [117]. In closed-loop systems errors arise from non-linearity of the sensors and from inaccuracy and linear guidance system imperfections [118].

A quite general technique for distortions in the xy plane is the displacement field (displacement field, Figure 5). Consider a vector field v(r) defined as a function of the planar coordinates r = (x, y). The distorted image z' is created from the original image z using z'(r) = z(r + v(r)), with suitable handling of z(r) for r falling outside the image (periodic, border value extrapolation or mirroring). A slowly varying v represents drift and similar systematic effects, for instance in the xy plane by putting v(r) = b + R_φ(r − r_0) − (r − r_0), where b and φ are the shift and rotation with respect to the centre r_0 (R_φ denotes the rotation matrix). The time dependencies of b and φ define the drift; in the simplest case of linear drift they can be taken proportional to c · r, where c is a constant vector formed by inverse scanning speeds. On the other hand, a v varying on short scales can model non-instrumental effects, such as line and surface roughness [119]. Both are illustrated in Figure 5. There are many possible useful choices for v: explicit functions (polynomials), random Gaussian fields or other correlated noise, or even other images. We formulated the distortion for images and this is how it is usually applied. Nevertheless, for explicit functions like those in Section 2.1 the displacement can be applied directly to the coordinates, without intermediate pixelisation.

Figure 5 (panels): Displacement, Line roughness, Simple noise, Scars/strokes, Random tilt.

Noise is ubiquitous and is related to different effects: noise in the electronic circuits, mechanical vibrations and feedback loop effects. The spectral properties of noise depend on its source, like 1/f noise being related to light fluctuations [120], or frequency-independent shot noise related to the detection of the light beam reflected from the cantilever. Often the noise has some dominant frequencies, either related to electrical sources from the power line, characteristic mechanical vibration frequencies of the tip-sample system, or acoustic noise from the environment [121]. In artificial data preparation, noise can be generated independently and added to the synthetic data in post-processing. For simple simulations, independent noise in each pixel can be sufficient (noise, Figure 5).
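Before moving on to correlated noise, here is a minimal sketch of the displacement-field distortion z'(r) = z(r + v(r)), assuming numpy and scipy and using map_coordinates for the resampling (a simple linear-drift v; the Gwyddion module supports more general fields):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_displacement(z, vy, vx):
    """Resample z at r + v(r); vy, vx are per-pixel displacement maps."""
    n, m = z.shape
    yy, xx = np.mgrid[0:n, 0:m].astype(float)
    return map_coordinates(z, [yy + vy, xx + vx], order=1, mode='mirror')

# Example: a slowly varying v mimicking linear drift during scanning.
n = 256
z = np.sin(np.linspace(0, 8 * np.pi, n))[None, :] * np.ones((n, 1))  # test grating
yy = np.mgrid[0:n, 0:n][0].astype(float)
drift = 0.02 * yy           # x displacement growing with the slow-scan coordinate
z_distorted = apply_displacement(z, np.zeros_like(drift), drift)
```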
Correlated noise with a given power spectrum is generated using spectral synthesis [74], i.e., by constructing the Fourier coefficients with given magnitudes and random phases and using the inverse fast Fourier transform (FFT) to obtain the correlated noise (spectral; see Figure 9 for image examples). A special type of noise related to feedback loop effects are scars (strokes), short segments of the scanned line that do not follow the surface (line noise, Figure 5). Another important type of noise related to the scanning process is line noise, which causes shifts between the individual lines scanned along the fast scanning axis that form the image. The source of this noise is not very well understood, but most probably it is a mixture of low-frequency noise, drift, the impact of changing the tip motion direction, and the impact of tiny changes in the tip behaviour (contamination, tip wear, etc.). It can also be added to synthetic data using line noise in Gwyddion (see Figure 10b in Section 3.2 for an example).

An important error source in SPMs based on optical detection (i.e., nearly all commercial systems) is the interference of light which misses the cantilever and is reflected off the sample surface towards the beam deflection detector [122,123]. It can be modelled using simple geometrical optics [122]. However, in practice diffraction effects can also play a role, as a diffraction pattern can often be seen in the beam reflected from the cantilever. The effect can be visible namely on measurements of very flat samples, creating a pattern in the topography channel resembling interference fringes. This is usually a reason for re-aligning the laser position on the cantilever. However, residuals of this effect cannot be so easily noticed, yet they still affect the topography measurements. In the simplest approximation this error source can be expressed as a harmonic function of the height, and in Gwyddion it can be added using data arithmetic. The procedure consists of taking the surface, separating the details and the polynomial background out of it, calling the background b(x, y), adding A sin(4πb/λ) to it and merging it again with the details.

Feedback loop effects are related to undershoots and overshoots of the proportional-integral-derivative (PID) controller that is used to keep the probe-sample interaction constant via z motion of the tip or sample. These can be simulated for synthetic data by establishing a virtual feedback loop based on a model force-distance dependence and calculating the cantilever response using a suitable model, e.g., a damped harmonic oscillator for tapping mode measurements [124], combined with modelling the time evolution of the feedback loop response. A simple feedback loop simulation is also included in Gwyddion (PID).

Further Methods

Preceding sections introduced several classes of surface and texture generation methods which are natural candidates for artificial SPM data for physical or metrological reasons. However, the options are not limited to simply running them. Highly complex artificial data can be obtained by chaining several algorithms. In particular, ideal patterns are frequently combined with defect generators to produce realistic data. A generated precise geometrical pattern can be modified in sequence by added particles, a displacement field, feedback loop effects, and line or point noise.
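A minimal, self-contained sketch of a few of these chaining steps, assuming numpy (toy versions of the generators; the real pipeline would chain the corresponding Gwyddion modules):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 256

# 1. Ideal pattern: a binary grating.
x = np.arange(n)
z = 0.5 * ((x[None, :] // 16) % 2).astype(float) * np.ones((n, 1))

# 2. 'Deposit' a few blunt particles (Gaussian bumps stuck on top).
yy, xx = np.mgrid[0:n, 0:n]
for _ in range(40):
    cy, cx = rng.integers(0, n, size=2)
    z = np.maximum(z, 0.8 * np.exp(-((yy - cy)**2 + (xx - cx)**2) / 30.0))

# 3. Line noise: a random offset of each scan line.
z += rng.normal(scale=0.02, size=(n, 1))

# 4. Point noise: independent Gaussian noise in every pixel.
z += rng.normal(scale=0.01, size=(n, n))
```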
Two such chained examples can be seen in Figure 6: a slightly distorted and uneven Si(111) 7 × 7 surface reconstruction (with scanning artefacts added) and sequential 'deposition' of large and small particles (again with artefacts). Gwyddion makes chaining particularly easy, as all synthetic data generators can take an existing image as the starting point. An example is shown in Figure 6, where simulated columnar film growth was seeded by a grating generated by another module. This also allows using the same generator multiple times, for instance to create multiscale patterns. Furthermore, the generators can be combined with other standard image and morphological processing methods: edge detection, opening, closing, or the Euclidean distance transform (EDT) [125]. The 'Lichen' image in Figure 6 was created using DLA, post-processed with edge detection and correlated Gaussian noise. 'Ridges' originated as a simple sum of sine waves, which was then thresholded, and EDT was applied to the result. Combination of patterns generated at different scales is a standard technique in noise synthesis, although there usually only simple linear summation is used.

Noise generators are an important classic category, of which Section 2.4 listed a few, but at least a few others need to be mentioned. The Perlin noise generator (and its newer alternative, simplex noise) [126,127] produces spline-based, isotropic, locally smooth noise. It is frequently used in a multi-scale fashion, combining outputs at different lateral scales to obtain a somewhat self-affine result. Stochastic midpoint displacement [74,127,128] (Brownian) is another direct-space construction, in this case top-down. The generation starts at a coarse scale with a sparse grid. The grid is then progressively refined using midpoint interpolation with random value variation obeying a scaling law. The result approximates, to some degree, fractional Brownian motion. As an alternative to FFT-based spectral synthesis, the Mandelbrot-Weierstrass method also sums a sine series, but with frequencies forming a geometric progression (f_n ∼ cⁿ) instead of an arithmetic one (f_n ∼ n) [74,129]; a sketch is given below. Other correlated noise generation approaches include sparse convolutions and wavelet synthesis [127].

SPM techniques are naturally connected to surfaces and processes on them. Modelling of processes occurring at material boundaries is a vast field. Many models have been developed for simulations at the different scales that are typical in nanoscience, such as boundary front propagation in disordered media (wetting, burning, or the growth of bacterial colonies) [65,130], dendrite formation simulated using the phase field method [131], ground surface topography obtained by simulating material removal by active grains [132], pitting corrosion texture using stochastic cellular automata [133], or dune formation by random transportation of sand 'slabs' in a lattice [134]. Nevertheless, they may produce data useful for testing and evaluation of SPM data processing algorithms. This applies to pattern synthesis methods in general. Processes at different scales and with different underlying physical or chemical details give rise to similar structures, as was already noted in Sections 2.2 and 2.3. When one is looking for a generator producing test data with particular spectral characteristics, connectivity, anisotropy or multi-scale properties, it can sometimes be found in unexpected places. This extends even to procedural textures developed originally for computer graphics.
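The Mandelbrot-Weierstrass construction mentioned above can be sketched as follows (a 1D profile assuming numpy; 2D surface versions sum such waves over random directions, and the parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def mandelbrot_weierstrass(n=1024, c=1.5, hurst=0.7, n_terms=24):
    """Profile built as a sum of sines with geometrically spaced
    frequencies f_k ~ c**k and scaling amplitudes ~ c**(-hurst * k)."""
    x = np.linspace(0.0, 1.0, n)
    z = np.zeros(n)
    for k in range(n_terms):
        f = c ** k
        if f > n / 2:          # stop near the Nyquist frequency
            break
        phase = rng.uniform(0, 2 * np.pi)
        z += c ** (-hurst * k) * np.sin(2 * np.pi * f * x + phase)
    return z

profile = mandelbrot_weierstrass()
```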
Some procedural textures do not have any basis in physics, such as maze generation using evolving cellular automata [135], even though the results share some characteristics with Turing patterns. A round tile pattern can be created by recursively solving the mathematical three-circle touching problem and applying morphological post-processing (discs, Figure 2). However, many procedural textures apply simulation methods from physics and engineering to create patterns resembling natural phenomena. For instance, realistic crack patterns were generated using a mesh simulation by iterative addition of new cracks (based on the highest-priority material failure) and relaxation of the stress tensor in the mesh [136]. Lichen growth was simulated using a DLA-based model, including light and water flow simulation [137].

A very active related area of research in computer graphics is texture synthesis, i.e., the production of textures similar to a given example (in some statistical sense). The generated image can then have useful properties the original lacks, such as being tileable (periodic). Impressive results have already been achieved [138-140]. If we have a sample of surface texture and wish to generate more samples, existing texture synthesis methods allow this to be done with relative ease. The downside is, of course, the absence of physical interpretation of any texture properties. After all, these methods have been developed for visual impression, not physical accuracy. In general it is not possible to control physically interesting parameters of the surface (sticking coefficient, scaling exponent, etc.). However, there are cases when these techniques can still be useful even in a simulation, for example to render a much larger version of a surface texture, which may be difficult to acquire otherwise.

A completely different approach to surface texture construction is the modification of existing data to enforce values of specific parameters. The basic case is the adjustment of the value distribution to a prescribed one (coerce). It has been used for the creation of tunable random roughness for simulations [141], but also in the production of physical roughness standards [142], in this case iteratively, to ensure the produced standard conforms to the design. More complex iterative procedures can generate textures with multiple prescribed statistical parameters [143].

Impact of Tip on SPM Results

Tip sharpness is a crucial factor in successful SPM measurements. Ideally, an infinitely sharp tip would provide an undistorted image. Finite tip size leads to distortion, and the resulting data are a convolution (dilation) of the tip and surface shape [47]; examples of these distortions can be seen in Figures 7 and 8a (both simulated). Since the tip can evolve during scanning due to wear, and its radius determined previously or provided by the manufacturer is therefore no longer valid, it is necessary to estimate its geometry from the data. Without synthetic data, it would be very hard to develop such estimation algorithms. By generating known data and a known tip shape, and by performing convolutions and tip estimations under different conditions, we can analyse how the results are affected by influences like noise, feedback loop faults, or scan resolution (a minimal sketch of the dilation operation itself is given below). As an example of such a procedure, a radial basis neural network was trained using simulated data and then applied to scanned grating data to reduce the influence of tip convolution [23].
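The dilation itself can be sketched in a few lines. Assuming numpy and scipy, grey_dilation with the tip as a non-flat structuring element computes the pointwise maximum of shifted surface heights plus tip heights; this illustrates the operation, not the full algorithms of Reference [47]:

```python
import numpy as np
from scipy.ndimage import grey_dilation

def dilate_by_tip(surface, tip):
    """Simulated AFM image: grayscale dilation of the surface by the
    tip shape (tip given as heights relative to its apex, apex = 0)."""
    return grey_dilation(surface, structure=tip)

# Parabolic tip: z = -r^2 / (2 R), apex at the centre, R = apex radius.
k, R = 7, 5.0
y, x = np.mgrid[-k:k + 1, -k:k + 1].astype(float)
tip = -(x**2 + y**2) / (2 * R)

rng = np.random.default_rng(7)
surface = rng.normal(size=(128, 128))      # toy rough surface
image = dilate_by_tip(surface, tip)
print((image - surface).min())             # dilation never lowers heights
```

Blunter tips (larger R, wider support) visibly broaden protrusions and fill narrow pits, which is exactly the behaviour analysed statistically below.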
That study pointed out the difficulties of training the network, namely the choice of a suitable tip for training, and also the existence of points on the surface where information is irrecoverably lost. In Reference [22], neural networks were used to speed up tip convolution impact analysis, namely to obtain a certainty metric quantifying the quality of the local tip-sample contact. For this, rough surfaces with a wide range of roughness parameters were generated. In Reference [24], the impact of tips having two asperities instead of one was investigated, focusing on the analysis of fibril structures. The image blur related to tip convolution was estimated using the Hough transform. Bayesian blind deconvolution was then used to remove it from the measured data. Synthetic data were used to demonstrate the versatility of the method for other types of surfaces, namely particles on a flat substrate. An illustration of the use of synthetic data in tip convolution and blind tip estimation [47] is shown in Figure 7. Four synthetic topographies were generated, each with a different type of surface features. They were dilated by the same known tip, and the results were used for blind tip estimation. The results are compared in the last row. The impact of surface character on the reconstructed shape is dramatic and shows how the lack of certain directions or slopes in the convolved data is reflected by a corresponding lack of tip geometry information. Although the effect of tip convolution on direct dimensional measurements, like the width of a particle, can be understood almost intuitively, statistical quantities, like roughness parameters, are frequently impacted in a counter-intuitive way. Synthetic surfaces are invaluable for addressing this problem. Columnar thin films were synthesised in References [17,18] using ballistic deposition with limited particle relaxation. The goal of this procedure in Reference [17] was to better understand the growth-related roughening processes when conformal films are deposited on rough surfaces. Reference [18] studied how different statistical roughness parameters evolve when the sample is convolved with tips of different radii, showing that the mismeasurement of pores between individual columns by larger tips significantly influences both the height and the lateral statistical quantities. A similar analysis was performed for fractal surfaces [19], where limitations of fractal dimension analysis methods were identified, as well as large discrepancies between different analysis methods. Multi-fractal properties turned out to be even more complicated, as tip convolution seems to impart signs of multi-fractal behaviour even to surfaces that were originally mono-fractal [144]. Such analysis could not be done without synthetic surfaces of known fractal properties. Synthetic particle deposition was used to estimate the reliability of different automated particle analysis methods in Reference [21], addressing the problems of analysing nanoparticles on rough surfaces, where many segmentation methods can fail. It was found that the most critical samples are those with medium particle coverage, where self-assembled particle arrays (which could be analysed using spectral methods) have not yet developed, but the particles can no longer be treated as individuals either. Consider now, in more detail, the example of estimating the tip impact on statistical quantities of rough surfaces.
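Before going through the numbers, the dilation operation itself is easy to simulate. The following hedged sketch uses scipy's grayscale morphology; the smooth random surface and the parabolic tip are invented for illustration and are not the surfaces used in the study discussed below:

# Tip-sample dilation via grayscale morphology (illustrative sketch).
import numpy as np
from scipy.ndimage import gaussian_filter, grey_dilation

rng = np.random.default_rng(0)
surface = gaussian_filter(rng.standard_normal((256, 256)), sigma=6)  # locally smooth surface

# Parabolic tip with apex radius r (in pixels); apex value 0, sides negative.
r, half = 8.0, 10
yy, xx = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
tip = -(xx**2 + yy**2) / (2.0 * r)

# The 'measured' image is the morphological dilation of the surface by the tip.
measured = grey_dilation(surface, structure=tip)
print("maximum broadening:", float((measured - surface).max()))

Blind tip estimation then attempts to invert this operation using only the measured image, which is why surfaces lacking certain slopes constrain the reconstructed tip shape so poorly.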
The distortion of the measured morphology depends on the tip radius r, but also on the roughness character (Figure 8a illustrates this for synthetic columnar film data). The influence of the roughness character was explored using simulated convolution with parabolic tips of varying radius for a standard Gaussian rough surface generated by spectral synthesis and a columnar rough film simulated using the Columnar deposition model. In both cases, several 2400 × 2400 pixel images were generated, with correlation length T ≈ 19 px and rms roughness σ = 1.85 px. Tip convolution was simulated with tip apex radii covering more than two orders of magnitude, from very sharp to quite blunt. For all output images we evaluated the root mean square roughness (Sq), mean roughness (Sa), root mean square surface slope (Sdq), surface area ratio (Sdr), skewness, excess kurtosis, and correlation length (T), and averaged them over the individual images. The resulting dependencies are plotted in Figure 8b. Since the parameter ranges differ considerably and some are not even commensurable quantities, the curves were scaled to comparable ranges (preserving the relation between Sq and Sa). A few observations can be made. For the Gaussian surfaces, which are locally smooth, the parameters stay more or less at the true values up to a certain radius, and then they all start to deviate. In contrast, for the columnar film several quantities have noticeably non-zero derivatives even for the sharpest tip used in the simulation. This is the result of deep ravines in the topography, with bottoms inaccessible even to quite sharp tips. Some results are more puzzling, for instance the peculiar non-monotonic behaviour of kurtosis for the Gaussian surface. We can also see that convolution with blunt tips makes the measured skewness of Gaussian surfaces positive. However, for columnar films, which are positively skewed, the measured skewness decreases and even becomes negative for blunt tips. Such effects would be difficult to notice without simulations. This is even more true for the results of a larger-scale simulation with Gaussian rough surfaces, in which all of r, σ and T were varied. Careful analysis revealed that the convolution problem is in fact characterised by just a single dimensionless parameter σr/T². This is demonstrated in Figure 8c, where the ratios of the measured σ and T to their true values are plotted as functions of σr/T². It is evident that even though all three parameters varied over wide ranges, the data collapse onto a single curve (one for σ and one for T). This result can be explained by dimensional analysis. When we shrink the lateral coordinates by a factor b and the heights by a factor c, then r, σ and T scale by c/b², 1/c and 1/b, respectively. The only dimensionless number formed by r, σ and T which is preserved under this scaling is σr/T². However, the surface must scale like the tip to preserve their mutual geometrical relation. This means it also needs to be locally parabolic, which is satisfied by the locally smooth Gaussian surface (but not by other, self-affine surfaces).

Levelling, Preprocessing, and Background Removal

Raw SPM data are rarely the final result of the measurements. In most cases they are evaluated to obtain a quantitative result, like the size of nanoparticles, the volume of grains, the pitch of a grating, or surface roughness. Preprocessing steps are usually needed for this: to correct the misalignment of the instrument z-axis with respect to the sample normal, to correct the impact of drift, or to remove the scanner background.
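As a concrete, deliberately simplified illustration of what such preprocessing can do to roughness values, the following sketch (plain numpy, not Gwyddion's implementation; all parameters are our own choices) subtracts a fitted polynomial from correlated random profiles and reports the mean variance ratio:

# Toy demonstration of levelling-induced roughness bias: polynomial
# subtraction removes part of the true roughness, so the ratio drops below 1.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
L, T, degree, runs = 512, 64, 2, 200    # profile length, correlation length, degree

x = np.arange(L, dtype=float)
ratios = []
for _ in range(runs):
    z = gaussian_filter1d(rng.standard_normal(4 * L), T / 2.0)[:L]  # correlated profile
    z_lev = z - np.polyval(np.polyfit(x, z, degree), x)             # line levelling
    ratios.append(z_lev.var() / z.var())

print(f"degree-{degree} levelling, L/T = {L // T}:",
      f"mean variance ratio = {np.mean(ratios):.2f}")

Shortening L or increasing the polynomial degree drives the ratio further below 1, which is exactly the trend quantified in the study discussed next.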
Synthetic data can be used both to develop the data processing methods and to estimate their reliability or uncertainties. An example of the use of synthetic data for the estimation of systematic errors in SPM data processing is the study of levelling-induced bias in roughness measurements [29,141]. Levelling is done as a pre-processing step in nearly all SPM measurements in many variations, from simple row mean value alignment up to polynomial background removal. Since the data are altered by levelling, it has an impact on the roughness measurement results. Using synthetic data, this impact was quantified, showing that in a large portion of SPM-related papers the reported roughness values might be biased by tens of percent as a result of too small scan ranges. The problem is illustrated in Figure 9 for the mean square roughness σ² and scan line levelling by polynomials. The ratio σ²_meas/σ² expresses how much the measured roughness is underestimated; ideally it should be 1. The underestimation is of course worse for shorter lengths L. However, it is clear that even for quite long scan lines (compared to the correlation length T), the roughness can be considerably underestimated, especially for higher polynomial degrees. A procedure for choosing a suitable scan range to prevent this problem already during the measurement is provided in Reference [141].

Figure 9. Ratio of the measured roughness σ²_meas to the true value σ² depending on the ratio of profile length L and correlation length T, for Gaussian and exponential autocorrelation functions. The bias is plotted for several common levelling types: subtraction of a few low-degree polynomials and the median. Vertical slices through the images illustrate profiles with the corresponding ratio L/T (the image height is L).

In Reference [26], a methodology for using the spectral density in SPM data evaluation was studied. Synthetic data allowed the discussion of various influences on its accuracy, including sampling, tip convolution, noise, and windowing in the Fourier transform. In Reference [27], the spectral density evaluation was extended to irregular regions, allowing the method to be used on grains or terraces covering only part of the SPM image. The method was validated using data generated by spectral synthesis. Synthetic data were also used to test novel algorithms for using monoatomic silicon steps as a secondary realisation of the metre. Following the redefinition of the SI system of units, the increased knowledge of the silicon lattice spacing led to the acceptance of this approach for SPM calibration [145]. To develop reliable methods for data levelling when tiny steps are evaluated from SPM data measured on large areas, synthetic data were used [30], namely to verify algorithms separating various scanning-related errors, like line roughness and scanner background, from the sample geometry. In addition to the effect of concrete data pre-processing algorithms, there is also the freedom in which of them to use. Several paths can lead to similar results (e.g., images with aligned rows), but their impact on the data can be different. SPM users seldom choose based on rigorous criteria; the choice is more often the result of availability, discoverability, and habit. The impact of this user influence was studied in Reference [34] using a combination of multiple data synthesis methods to create complex but known data, as illustrated by the step analysis example in Figure 10.
A group of volunteers then processed the data to determine the specified parameters, resulting in a rather worrying spread of the determined values. Furthermore, data processing methods were classified on the basis of the amount of user influence on them. This was done using a Monte Carlo (MC) setup somewhat atypical for SPM, as it included data processing steps carried out by human subjects. Batches of 100 synthetic images were generated with known parameters, which were, however, not disclosed to the users. The generated images contained roughness, tilt, and defects, such as large particles. The users were then asked to level them as best they could, using levelling methods from prescribed sets. It was found that while humans are good at recognising defects (and marking them for exclusion from the processing), giving them more direct control over the levelling is questionable. The popular 3-point levelling method did not fare particularly well, and humans also tended to over-correct random variations (although all levelling methods are guilty of this, even without user input [29,141,146]).

Figure 10. Study of user influence on SPM data evaluation: (a) The step to be measured, created using a pattern generator, with a smooth and somewhat wobbly edge but otherwise ideal. (b) The image the users actually received, with tilt and scanning artefacts added. (c) Distribution of user-submitted evaluated step heights as a stripchart with jitter (the vertical offsets do not carry information; they only help to disperse the points) and a boxplot overlay.

Non-Topographical SPM Quantities

So far the examples involved simulated topography. It is perhaps the most common case, but not a fundamental limitation. Any physical quantity measurable by SPM can be addressed given a physical model of the sample and a model of the probe-sample interaction, which can have different levels of complexity. As an example of simple tools applied to non-topographical data, magnetic domain data were simulated during the development of a methodology for tip transfer function reconstruction in magnetic force microscopy (MFM) [100], using the phases module. Since the purpose of the procedure was the estimation of an unknown function from non-ideal data, verification and quantification of systematic errors related to data processing, using artificial data with added defects, were key steps. Another major benefit of synthetic data was that virtual MFM data of any size and resolution could be used, even beyond what is feasible to measure. Reference [100] used simulations to study and optimise several aspects of the reconstruction. The performance of different FFT window functions was compared using artificial data, with somewhat surprising results. Although FFT windows had been extensively studied for spectral analysis, their behaviour in transfer function estimation was not well known. The study found that beyond C⁰ continuity, window smoothness and even shape did not play much of a role; the key parameter was the L² norm of the window coefficients (simple C⁰ windows, such as Welch and Lanczos, are thus preferred). Simulations were also used to evaluate the influence of different regularisation parameter choices and to improve a procedure used to estimate the true magnetic domain structure from the measured image.
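The window-norm observation is easy to explore in a scripted check such as the one below (our own illustration, not the analysis pipeline of [100]; the unit-sum amplitude normalisation is an assumption):

# Compare L2 norms of window coefficients for a few common FFT windows.
import numpy as np
from scipy.signal import get_window

N = 1024
n = np.arange(N)
windows = {
    "boxcar": get_window("boxcar", N),
    "welch": 1.0 - ((n - (N - 1) / 2) / ((N - 1) / 2))**2,  # parabolic Welch window
    "hann": get_window("hann", N),
    "blackmanharris": get_window("blackmanharris", N),
}
for name, w in windows.items():
    w = w / w.sum()                    # unit-sum amplitude normalisation (assumed)
    print(f"{name:>14s}  L2 norm = {np.linalg.norm(w):.3e}")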
A very frequent use of synthetic data in the interpretation of non-topographical SPM measurements is related to the understanding of topography artefacts in the channels of other quantities. Artefacts related to sample topography can be found in all the regimes (electric, magnetic, thermal, mechanical, and optical) and belong among the largest uncertainty sources when other quantities are evaluated. Synthetic topographical data are only the first step. A physical model of the interaction has to be formulated, and the signal generation process must be simulated. This can be, for example, finite element method (FEM) modelling, which was used with simple topographic structures to simulate their impact on Kelvin probe force microscopy in Reference [147]. Here it was found that the topography impact on surface potential measurements is relatively small. This, however, is not the case in many other SPM techniques, where topography artefacts can dominate the signal. Simple 1D synthetic structures were used to simulate topography artefacts in aperture-based scanning near-field optical microscopy [148], using a model based on calculating the real distance of the fibre aperture from the surface. Even if the probe follows a trajectory preserving a constant distance to the surface, this only keeps constant the shortest distance from any point on the probe to the surface. The distance between the surface and the aperture, which is at the centre of the probe apex, still varies when scanning across topographic features (e.g., when following a step). Combined with the fast decay of the evanescent field, this produces topography artefacts in the optical signal. A more complex analysis was performed using 2D synthetic patterns in Reference [149], where a Green's tensor technique was used to calculate the field distribution in the probe-sample region, showing that it is easy to misinterpret topographical contrast as dielectric contrast (which is the target measurand). Topography artefacts were also modelled in scattering-type scanning near-field optical microscopy, an even higher-resolution optical SPM technique [150]. For this, a simple synthesised topographic structure representing a nanopillar array was combined with an idealised scattering tip, and the interaction was handled using a dipole-dipole theoretical model. This allowed the examination of different aspects of far-field suppression using higher harmonic signals. Additionally, in scanning thermal microscopy (SThM), topography artefacts can dominate the signal, and methods for their simulation are needed [36]. In Figure 11, the typical behaviour of the thermal signal on a step edge is shown, together with the result of a finite difference model (FDM) solving the Poisson equation. Synthetic data were used here to create the simulation geometry, taking the simulated surface and probe from Gwyddion, converting them to a rectangular mesh, and calculating the heat flow between probe and sample. The process was repeated for every tip position, producing a virtual profile (or a virtual image in the 2D case). Synthetic data were used to create simple structures that could be compared to experimental data, as shown in the figure, and were an important step in the validation of the method and in moving towards simulations of more realistic structures.

Figure 11. Topography artefacts in SThM: (A,B) topography and thermal signal on a step height structure, (C) experimental data and the result of the FDM calculation using a simulated step height structure [36], simulating a single profile. The simulated signal is scaled to match the raw SThM signal coming from the probe and the Wheatstone bridge.
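To convey the finite-difference idea in its most reduced form, the sketch below relaxes the steady-state heat equation on a 2-D grid with a hot probe-contact patch. The geometry and boundary values are invented for illustration; the actual model in [36] is far more elaborate:

# Drastically reduced FDM sketch: steady heat conduction with a hot probe
# contact at the top boundary and a cold sample base, solved by relaxation.
import numpy as np

n = 64
T = np.zeros((n, n))
hot = (0, slice(28, 36))               # probe contact patch on the top boundary
for _ in range(5000):
    T[hot] = 1.0                       # fixed probe temperature
    T[-1, :] = 0.0                     # sample base held cold
    T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                            T[1:-1, :-2] + T[1:-1, 2:])

# Heat flux into the sample, a stand-in for the simulated SThM signal.
flux = float((T[0, 28:36] - T[1, 28:36]).sum())
print(f"relative heat flux at the contact: {flux:.4f}")

Repeating such a solve for every probe position along the simulated topography yields the virtual profile described above.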
Finally, numerical analysis can be used for non-topographical data interpretation, modelling the probe-sample interaction using a structural model of the sample and creating virtual images that are compared to real measurements. During the development and testing of the numerical models used for these purposes, various synthetic datasets are also frequently used. As an example, spectral analysis of measured surface irregularities was studied in Reference [151], where the STM signal was simulated for synthetic nanoparticle topographies. The results were used to assist in the interpretation of data coming from different laboratories and affected by different imperfections, like noise and feedback loop effects. More complex examples include models addressing the mechanical response in nanomechanical SPM regimes, which were also developed using synthetic data. Such methods have the potential for sub-surface imaging of soft samples, which, however, requires advanced data interpretation methods. Numerical calculations based on a synthetic structural model and FEM were used to interpret data measured on living cells responding to external mechanical stimuli [152]. In general, physical models of the probe-sample interaction can be quite complex, and the simulation of SPM data can be a scientific area of its own. For example, quantum-mechanical phenomena need to be taken into account when very high resolution ultra-high vacuum measurements are interpreted, which has been done using a virtual non-contact AFM [153]. Simulations at this level are, however, beyond the scope of this paper, which focuses on synthetic data that can be easily generated, e.g., using the Gwyddion open source software, and then used for various routine tasks related to SPM data processing.

Use of Synthetic Data for Better Sampling

Synthetic data can also be used for the development of better sampling techniques, or of techniques that allow the fusion of differently sampled datasets. Traditionally, SPM data are sampled regularly, forming a rectangle filled by equally spaced data points lying on a grid. However, the relevant information is usually not distributed homogeneously over the sample. Some areas are more important than others for the quantities that we would like to obtain from the measurement. Development of better sampling can focus on better treatment of steep edges on the surface [154], reduction of the number of data points via compressed sensing [155], or better statistical coverage of roughness [8]. As an example of synthetic data use in this area, in Reference [33] randomly rough surfaces with an exponential autocorrelation function were generated to simulate measurements with low- and high-density sampling. Gaussian-process-based data fusion was then applied and tested on these data. Using this method, the simulated datasets could be merged even when the accuracies of the low- and high-density measurements differed, a situation often encountered in practice. The data fusion results were then compared with the synthetic data used to create the input sets, making the analysis of the method's performance straightforward. Synthetic data were also used for the development of methods for the generation of non-raster scan paths in the open source library Gwyscan [8], focusing on scan paths that better represent the statistical nature of samples, e.g., by covering a larger span of spatial frequencies while measuring the same number of points as a regularly spaced scan.
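One simple refinement strategy is to re-scan preferentially where the coarse data vary the most. The following is a hypothetical sketch of such variance-driven tile selection (our own toy code, not the Gwyscan API):

# Pick the tiles of a coarse scan with the highest local height variance
# as candidates for the next, finer scan pass.
import numpy as np

def refine_targets(coarse, tile=8, frac=0.25):
    """Return (row, col) origins of the tiles with the largest variance."""
    n = coarse.shape[0] // tile
    var = np.array([[coarse[i*tile:(i+1)*tile, j*tile:(j+1)*tile].var()
                     for j in range(n)] for i in range(n)])
    k = max(1, int(frac * n * n))
    idx = np.argsort(var.ravel())[-k:]         # top-k most variable tiles
    return [((i // n) * tile, (i % n) * tile) for i in idx]

rng = np.random.default_rng(2)
coarse = rng.standard_normal((64, 64))         # stand-in for a coarse pre-scan
targets = refine_targets(coarse)

Each selected tile would then be measured with a finer point spacing, as in the example that follows.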
The use of general XYZ data instead of regularly spaced samples allows the creation of more advanced scan paths, measuring only the relevant parts of the sample. An example of scan path refinement is shown in Figure 12. Here, data similar to flakes of a 2D material were generated, and the simulated scans covered an area of 5 × 5 µm. The coarsest image (see Figure 12A), with only 50 × 50 points laterally spaced by 100 nm, was used to create the next, refined path based on the local variance of the sample topography. The refined path points were laterally spaced by 10 nm and were in turn used to create the next refinement, laterally spaced by 1 nm. At the end, all the XYZ points were merged and an image corresponding to the desired pixel size of 1 nm was obtained, measuring in total only about 25% of the points compared to a full scan.

Discussion

In the previous section we recapitulated various styles and strategies for the use of synthetic data. It can be seen that the methodology varies from task to task. The unifying idea is to simulate how an effect related to SPM measurement or data processing alters the data, which is best done using known data. Generation of the data involves randomness in most cases and resembles the use of MC in uncertainty estimation. The MC approach is used in metrology when the measurement result cannot be formulated analytically as an equation (which could be differentiated, following the error propagation rule, to obtain the uncertainty contributions of the different input quantities). MC is an alternative in which the input quantities are generated with appropriate probability distributions and the result is computed for many of their combinations, forming the probability distribution of the results. More guidance is provided in the guide to the expression of uncertainty in measurement (GUM) [156]. The method then provides both the result and its uncertainty. Sometimes the procedure applied to synthetic data can differ quite a lot from how we imagine a typical MC, but still be built on the same principles. For instance, in a part of the user influence study [34], large amounts of data were generated and then processed manually by experienced Gwyddion users. The goal was to see how the user's choice of algorithms and their parameters influences the results (see also Section 3.2). Therefore, a human was the 'instrument' here. However, for the rest the MC approach was followed. It should be noted that even in classical uncertainty estimation, MC may not be the most efficient computational procedure. If there are only a handful of uncertain or variable input parameters, non-sampling methods based on polynomial chaos expansion [157] can be vastly more efficient. Their basic idea is the expansion of the output parameters as functions of the input parameters in a polynomial basis, with unknown coefficients. Determining the coefficients then establishes a relation between the input and output parameters and their distributions [158]. In so-called arbitrary polynomial chaos, the technique is formulated only in terms of moments of the distributions, avoiding the necessity of postulating specific distributions and allowing data-driven calculations [159]. However, the huge numbers of random input parameters, quite common in SPM simulations, present a challenge for polynomial chaos, and using MC can have significant benefits. More importantly, standard uncertainty analysis may be exactly what we are doing and GUM exactly the appropriate methodology to follow, though it also may not be.
We need a more nuanced view of procedures described collectively as 'generate pseudorandom inputs and obtain distributions of outputs'. Consider what the result of running the simulation with infinitely large data would be. There are two basic outcomes: an infinitely precise value, or nothing. An example of the former is the study of tip convolution in Section 3.1. In the limit of an infinite image, tip convolution changes the surface area of a columnar film (for instance) by a precise amount, independent of the sequence of random numbers used in the simulation, under an ergodicity assumption. In fact, the images used in Section 3.1 were already quite close to infinite from a practical standpoint. The relative standard deviations of the parameters in Figure 8b were around 10⁻³ or smaller, i.e., a single MC run would suffice to plot the curves. This gives context to the large number of MC runs suggested by GUM (e.g., 10⁶). For the surface part, more sampling of the probability space can be done either by generating more surfaces or by generating larger ones. The latter is usually more efficient and also reduces the influence of boundary regions that can cause artefacts. Large images do not help if the tip radius is uncertain (although here polynomial chaos could be utilised). Therefore, we should use the adaptive approach, also suggested in GUM, increasing the surface size up to the moment when the statistical parameters of the result converge. How is the hypothetical sharp value we would obtain from an infinite-image simulation related to uncertainties? It may be the uncertainty (more precisely, its systematic part) if we simulate an unwanted effect which may be left uncorrected and we attempt to estimate the corresponding bias. The hypothetical sharp value may also simply be our result, and then we would like to know its uncertainty. This brings us to the second possible outcome of an infinite-image simulation: nothing. An example, polynomial background subtraction, was discussed in Section 3.2. Subtraction of polynomials has no effect in the limit of infinitely large flat rough surfaces, for any fixed polynomial degree. Similarly, the MFM transfer function could be reconstructed exactly given infinite data, more or less no matter how we do it (Section 3.3). In this case we study the data processing method itself and its behaviour for finite measurements. It is essential to use image sizes, noise, and other parameters corresponding to real experiments. The distribution of the results is tied to the image size. Scaling results to different conditions may be possible, but it is not generally reliable, especially if boundary effects are significant. Both biases and variances frequently scale with T²/A, where A is the image area and T the correlation length or a similar characteristic length (not as 1/N with the number of image pixels N), and size effects can be considerable even for relatively large images [29,141,146]. Several practical points deserve attention when using artificial data. Running simulation and data processing procedures manually and interactively is instructive and can lead to eye-opening observations. Proper large-scale MC usually still follows, involving the generation of many different surfaces, SPM tips, and other objects. This would be very tedious if done manually. Most Gwyddion functionality is being developed as a set of software libraries [6], with C and Python interfaces. This allows scripting the procedures, or even writing highly efficient C programs utilising Gwyddion functions.
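A schematic of such a scripted MC batch might look as follows (plain numpy with a hypothetical spectral-synthesis stand-in; real workflows would call the Gwyddion libraries instead):

# Batch generation of reproducible random surfaces for an MC study.
import numpy as np
from scipy.ndimage import gaussian_filter

def make_surface(seed, size=256, corr=10.0):
    """Gaussian rough surface with unit rms roughness (illustrative stand-in)."""
    rng = np.random.default_rng(seed)          # one independent stream per run
    z = gaussian_filter(rng.standard_normal((size, size)), corr)
    return z / z.std()

# Evaluate some statistic over 100 seeded, hence reproducible, surfaces.
peaks = [make_surface(seed).max() for seed in range(100)]
print(f"mean peak height: {np.mean(peaks):.3f} +/- {np.std(peaks):.3f}")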
Such scripting is also how most of the examples in this paper were obtained. The same input parameters must produce identical artificial surfaces across runs, but also across operating systems and software versions. For geometrical parameters, this is a requirement for using the models with non-intrusive polynomial chaos methods. Most models have random aspects, ranging from simple deformations and variations among individual features to stochastic simulations consuming a stream of random numbers. The sequence of numbers produced by a concrete pseudorandom generator is deterministic and given by the seed (initial state). However, reproducing a number sequence is not always sufficient. A stronger requirement is that a small change of a model parameter results in a small change of the output, at least where feasible (this is not possible for simulations in the chaotic regime, for instance). In other words, the random synthetic data evolve continuously if we change the parameters continuously. This is achieved by a combination of several techniques, in most cases by (a) using multiple independent random sequences for independent random inputs; (b) if necessary, throwing away unused random numbers that would be consumed in other circumstances; and (c) filling random parameters using a stable scheme, for instance from the image centre outwards. Examples of continuous change of the output with parameters can be seen in Figure 9. Each column of the simulated roughness image came from a different image (all generated by spectral synthesis). They could be joined into one continuous image thanks to the generator's stability. Theoretical modelling of SPM data is an active field, and much more is going on that lies outside the focus of this work. For example, detailed atomistic models are now common in STM [52,53,54], as are models of the interaction with biomolecules and biological samples [160]. Every sample and each SPM technique has its own quirks and specific modelling approaches [3]. The methods discussed here do not substitute for the models related to the physical mechanisms of data acquisition in SPM. However, they can be used to feed such models with suitable datasets, as in our SThM work [36], where Gwyddion data were directly used to create the mesh for the FDM calculations.

Conclusions

The use of synthetic data can not only save significant time when evaluating uncertainties in quantitative SPM, but can also allow the analysis of individual uncertainty components that would otherwise be jumbled together if only experimental data were used. This helps to improve quantitative SPM from all points of view: data collection (e.g., compressed sensing), processing of measured data (e.g., the impact of levelling and other preprocessing), and even the basic understanding of phenomena related to the method (e.g., tip convolution). In all these areas, the generation of reliable and well-understood synthetic datasets representing a wide range of potential surface geometries is useful, as demonstrated in this paper. Reliability here means that the synthetic data should be deterministic and predictable (at least in the statistical sense) and should be open to chaining to simulate multiple effects. The demonstrated Gwyddion implementations of the discussed algorithms have these properties. Another benefit of using synthetic data is its suitability for testing and comparing the performance of different algorithms during data processing software development.
In the literature, this is often done on the basis of real SPM data, as authors want to demonstrate the practical applicability of the algorithms on realistic data. However, as discussed in this paper, data synthesis methods are already so mature that known data very similar to real measurements can be generated, with different SPM error sources added to them in a deterministic way. This can make software validation much easier than in the case of experimental data with all influences fused together. Going even further, whole software packages can be compared and validated on the basis of synthetic data.

Author Contributions: D.N. focused on methodology, software and writing the manuscript; P.K. focused on the literature review, software and writing the manuscript. All authors have read and agreed to the final version of the manuscript.

Funding: This research was funded by the Czech Science Foundation under project GACR 21-12132J and by the project 'Nanowires' funded by the EMPIR programme, co-financed by the Participating States and by the European Union's Horizon 2020 research and innovation programme.

Data Availability Statement: The software used to generate all the data [7] is publicly available under the GNU General Public License, version 2 or later.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
The Newly Discovered Neoproterozoic Aillikite Occurrence in Vinoren (Southern Norway): Age, Geodynamic Position and Mineralogical Evidence of Diamond-Bearing Mantle Source

During the period 750-600 Ma ago, prior to the final break-up of the supercontinent Rodinia, the crust of both the North American Craton and Baltica was intruded by significant amounts of rift-related magmas originating from the mantle. In the Proterozoic crust of Southern Norway, the 580 Ma old Fen carbonatite-ultramafic complex is a representative of this type of rocks. In this paper, we report the occurrence of an ultramafic lamprophyre dyke which is possibly linked to the Fen complex, although 40Ar/39Ar data from phenocrystic phlogopite from the dyke gave an age of 686 ± 9 Ma. The lamprophyre dyke was recently discovered in one of the Kongsberg silver mines at Vinoren, Norway. Whole rock geochemical, geochronological and mineralogical data from the ultramafic lamprophyre dyke are presented, aiming to elucidate its origin and possible geodynamic setting. From the whole-rock composition of the Vinoren dyke, the rock can be recognized as transitional between carbonatite and kimberlite-II (orangeite). From its diagnostic mineralogy, the rock is classified as aillikite. The compositions and xenocrystic nature of several of the major and accessory minerals from the Vinoren aillikite are characteristic of diamondiferous rocks (kimberlites/lamproites/UML): phlogopite with kinoshitalite-rich rims, the chromite-spinel-ulvöspinel series, Mg- and Mn-rich ilmenites, rutile and lucasite-(Ce). We suggest that the aillikite melt formed during partial melting of a MARID (mica-amphibole-rutile-ilmenite-diopside)-like source under CO2 fluxing. The pre-rifting geodynamic setting of the Vinoren aillikite before the Rodinia supercontinent breakup suggests a relatively thick SCLM (Subcontinental Lithospheric Mantle) during this stage and might indicate a diamond-bearing source for the parental melt. This is in contrast to the about 100 Ma younger Fen complex, which was derived from a thin SCLM.

Introduction

Although ultramafic lamprophyres (UML) are volumetrically insignificant rocks, they may play a crucial role in the understanding of deep (mantle) melting events. UML form dyke swarms and, rarely, pipes, commonly associated with continental extension, commencing during the initial stages of continental rift evolution. UML often occur together with alkaline mafic-ultramafic and carbonatitic intrusive complexes [1]. UML are classified as melanocratic rocks with abundant olivine and phlogopite macrocrysts and/or phenocrysts and can be subdivided into three rock types depending on a third essential mineral [2]: (1) alnöites are melilite-bearing UML; (2) aillikites contain primary carbonate; and (3) damtjernites are nepheline- and/or alkali feldspar-bearing. Clinopyroxene and/or richteritic amphibole may be present in all three types, whereas spinel, ilmenite, rutile, perovskite, Ti-rich garnet, titanite and apatite are typical minor and accessory phases. UML show similarities to other volatile-rich rocks, such as kimberlites, lamproites and silicocarbonatites, in terms of occurrence and mineralogy. Nevertheless, some compositional differences between the rock types and their distinctly different geodynamic settings (rift-related for UML and stable cratonic for kimberlites and lamproites) suggest that they have different magma sources and petrogeneses.
Similar to kimberlites and lamproites, UML may contain diamonds [3-7], indicating that the depth of magma generation for UML can be in excess of 130 km. In this paper, the mineralogy, whole rock compositional data and the age of the recently discovered Vinoren UML dyke within the Kongsberg silver district, Kongsberg lithotectonic unit, Southern Norway, are presented. Based on the new data, the origin of the dyke and the geodynamic implications of the discovery are discussed.

Geological Setting

The major part of the crust in Southern Norway is built up of Paleo- to Mesoproterozoic rocks that underwent multiphase reworking along the Fennoscandian margin during the Sveconorwegian Orogeny, between 1140 and 920 Ma ago [11-13]. This orogeny was one of several orogenic events worldwide that resulted in the formation of the supercontinent Rodinia, and it has been inferred to result from the collision between proto-Baltica and Amazonia (e.g., [14-17]). However, an accretionary, non-collisional model for the Sveconorwegian Orogeny has also been proposed [18,19]. The orogenic belt has been subdivided into five orogen-parallel lithotectonic units, which are separated by major Sveconorwegian shear zones: the Eastern Segment, Idefjorden, Kongsberg, Bamble and Telemarkia units [20]. The Kongsberg silver district is situated within the Kongsberg lithotectonic unit and includes a variety of gneisses (1600-1400 Ma) and granitoids (1171-1146 Ma) [17,21]. The silver district is characterized by subvertical zones enriched in sulfides (predominantly pyrite and pyrrhotite), inferred to be of hydrothermal origin. These zones, which are called fahlbands (e.g., [22,23]), are up to 900 m wide and subparallel to the foliation of the surrounding lithologies. The fahlbands and the older lithologies are crosscut by E-W trending dolerite dikes, quartz veins and silver-bearing calcite veins of Permian age [24-26]. Already in the early days, the miners realized that the silver mineralizations occur almost exclusively at the intersections of the calcite veins and the fahlbands (e.g., [27]). Neumann [28] referred to the mineralized veins as calcite-nickel-cobalt-arsenide-native silver veins. The veins vary from a few millimeters up to 0.5 m in thickness, although zones up to several meters thick have been observed [28]. In a recent study of the silver mineralizations, Kotková et al. [29] gave an update of the paragenetic sequence presented by Neumann [28]. The UML dyke reported here occurs in the Klausstollen adit, adjacent to the Ringnesgangen underground silver mine, S. Vinoren, which is located in the northernmost part of the Kongsberg silver district (Figure 1). The dyke strikes toward NE with a dip of approximately 35° toward NW (Figure 1). The dyke, which is about 50 cm thick, is fractured and tectonized; however, significant parts appear to be undeformed (Figure 2a). In places, the contact between the dyke and the host rock appears as an undeformed and sharp intrusive contact. Some of the fractures within the dyke are filled with calcite. Accessory mineral identification and qualitative compositional analysis of grains and mineral inclusions smaller than 20-30 µm were performed using a LEO-1450 SEM (scanning electron microscope) (Carl Zeiss AG, Oberkochen, Germany) equipped with an XFlash-5010 Bruker Nano GmbH EDS (energy-dispersive X-ray spectroscopy) detector. The system was operated at 20 kV acceleration voltage and 0.5 nA beam current, with 200 s accumulation time.
Materials from minerals forming possible pseudomorphs after olivine, close to points analyzed by microprobe, were examined by the X-ray diffraction (XRD) method (Debye-Scherrer) by means of a URS-1 instrument (Bourevestnik JSC, Saint-Petersburg, Russia) operated at 40 kV and 16 mA with an RKU-114.7 mm camera and FeKα radiation.

Whole Rock Analyses

Whole rock compositions were obtained at the Kola Science Center in Apatity, Russia. Most of the major elements were determined by atomic absorption spectrophotometry; TiO2 by colorimetry; K2O, Na2O, Cu, Ni, Co, Cr, V, Rb, Cs, and Li by flame photometry; FeO and CO2 by titration (volumetric analysis); and F and Cl by potentiometry using an ion-selective electrode (for the full description of the methods, see [30]).

40Ar/39Ar Analyses

A phlogopite fragment with a diameter of about 1 mm was hand-picked from one phenocrystic sample of the dyke rock, cleaned in an ultrasonic bath and dried at 40 °C. The mineral fragment was wrapped in cadmium foil. The grain was placed in a capsule made of 99.999% aluminum. The sample was irradiated for neutron activation at the CLICIT (cadmium-lined in-core irradiation tube) facility at the Oregon State TRIGA reactor (OSTR), Oregon State University, Oregon, USA. To obtain the degree of neutron activation (J), the neutron flux monitoring mineral Fish Canyon Tuff sanidine (27.5 Ma [31,32]) was used. To correct for possible interference of Ar isotopes produced by neutron reactions with K and Ca, crystals of K2SO4 and CaF2 were irradiated separately. The irradiation time was 4 h, and the fast neutron flux was 2.47 × 10^13 n/cm²/s. After irradiation, the sample was cooled down for one month and transported to the Ar/Ar laboratory at the University of Potsdam, Germany. The sample was analyzed with a Gantry Dual Wave laser ablation system by the stepwise heating method until total melting. The system works with a 50 W CO2 laser (wavelength 10.6 µm), using a defocused continuous laser beam with a maximum diameter of 1500 µm, applied for 1 min for heating and gas extraction. The released sample gas was purified to pure Ar for 10 min in a closed ultra-high-vacuum purification line, being exposed to SAES getters and a stainless steel cold trap held at −90 °C by an electric ethanol cooler. The pure argon gas was analyzed by a Micromass 5400 noble gas mass spectrometer with high sensitivity and ultra-low background. The spectrometer operates with an electron multiplier for very small amounts of gas. During the measurements, blanks were measured every third step. The software Mass Spec, designed by Dr. Alan Deino of the Berkeley Geochronology Center, Berkeley, CA, USA, was used for processing the data. The recommended atmospheric 40Ar/36Ar ratio was used for the corrections.

Petrography and Mineral Compositions

In hand specimen, the UML rock is massive and characterized by anhedral phenocrysts of phlogopite (up to 1 cm in diameter) and calcite (up to 1 mm in diameter), and rounded aggregates of a serpentine-like mineral (up to 3 mm in diameter) in a fine-grained, grey groundmass (Figure 2b). The carbonate in the groundmass is represented by almost pure calcite with <0.1 wt.% MgO (Table 1). We infer that the mineral is primary, as it forms triple-junction boundaries between intergrown grains (Figure 3f). Secondary calcite occurs in aggregates with serpentine-like minerals and is characterized by a high SrO content (up to 2 wt.%). Phlogopite occurs both as phenocrysts and as grains up to 1 mm in the groundmass (Table 2).
The phenocrystic phlogopite is homogeneous, whereas two types of chemical zonation can be observed in the groundmass phlogopite. In back-scattered electron (BSE) images, the first type of zonation is characterized by a dark core and a brighter rim of phlogopite (Figure 3e). The bright rim typically shows higher BaO than the core. The second type of zonation is represented by a few µm thick bright rims in BSE images (Figures 3e and 4), reflecting elevated FeO and lower Al2O3 and MgO in the thin rims. The groundmass phlogopites are sometimes bent, suggesting that the mineral had already formed when the magma was emplaced as a crystal mush. The serpentine-like aggregates consist of a mixture of a mineral that is closer in composition to saponite than serpentine, and minor talc (Table 3). The presence of saponite has been confirmed by XRD analysis. The formation of saponite after olivine and serpentine during low-temperature hydrothermal alteration has been reported from some kimberlite occurrences (e.g., in the Arkhangelsk province, [34]). Ilmenite is present as the two solid solution series geikielite-ilmenite and ilmenite-pyrophanite. The first one occurs as ca. 200 µm rounded resorbed grains with titanite rims (Figure 5e). The composition of the grains varies from core to rim mainly in MgO (from 12 to 2 wt.%), FeO (from 31 to 42 wt.%) and MnO (from 0.4 to 3.9 wt.%) (Table 5). The mineral is characterized by the presence of Al2O3 (0.44-0.57 wt.%), NiO (0.12 wt.%), Cr2O3 (up to 0.09 wt.%) and CaO (up to 0.13 wt.%). Ilmenite of similar Mg-rich composition is an indicative mineral for diamondiferous kimberlites. The compositional zonation revealed for ilmenite from the studied dyke is similar to that from the Torngat UML. Ilmenite corresponding to the ilmenite-pyrophanite series (up to 16 wt.% MnO) occurs as single 10-20 µm grains included in titanite (Figure 5f). Ilmenite compositions like this are characteristic of carbonatites. Rutile is a relatively abundant accessory mineral, found in titanite in association with lucasite-(Ce) (Figure 5b,g,h). The replacement of rutile by titanite apparently took place during a late-magmatic carbonatization stage with high Ca and REE activities. Rutile is characterized by a moderate Nb2O5 content (0.4-0.6 wt.%; Table 5), which is different from typical Nb-rich kimberlitic rutile. The associated lucasite-(Ce) belongs to the same stage and occurs as needles included in titanite. Lucasite-(Ce) is a characteristic mineral of diamondiferous lamproites, e.g., from Argyle, Western Australia [37]. Vinoren lucasite-(Ce) differs from the lamproitic mineral by elevated CaO (3.3-5.5 wt.%, Table 5). Garnet is a secondary minor mineral formed as bud-shaped grains associated with saponite and in interstices between grains of phlogopite (Figure 6e,f). EMPA data (Table 3) indicate that the mineral is hydroandradite [Ca3Fe3+2(SiO4)3-x(OH)4x] with a low to moderate TiO2 content (0.3-1.2 wt.%), in contrast to the Ti-rich garnets that are characteristic of UML. Zircon occurs as needles about 20 µm long, assembled in subparallel aggregates (Figure 6a,b). The skeletal form of the zircon indicates rapid growth of the mineral. A Ni-Fe-S mineral phase with the composition 30.8 wt.% S, 38.9 wt.% Ni, 27.1 wt.% Fe and 3.1 wt.% Co (possibly godlevskite: (Ni,Fe)9S8), which occurs as numerous rounded grains 1-2 µm in diameter in the saponite-talc aggregates (Figure 6c), is inferred to be an alteration product after olivine.
Secondary quartz occurring in the saponite-talc aggregates is also inferred to be an alteration product after olivine. Barite and strontianite form anhedral grains, 1-3 µm in diameter, occurring as inclusions in calcite. Other accessory phases observed (pyrite, galena, chalcopyrite, sphalerite, pentlandite, and celestine), together with the other sulfides and sulfates, indicate a relatively high S activity during the formation of the studied dyke.

[Notes to Tables 1-6: bdl - below detection limit; na - not analyzed; P - phenocryst; G - groundmass; host - main part of spinel grain.]

Figure 7. Whole rock compositional fields for ultramafic lamprophyre, kimberlite and melilitite rocks (after [38]). Gray circles show data from this study for the Vinoren occurrence.

40Ar/39Ar Geochronology

The results and measurement conditions of the 40Ar/39Ar analyses of Vinoren phlogopite are given in Table 7. A plateau was not obtained, but an arithmetic average age of 686 ± 9 Ma was calculated from the last 5 steps, which show very similar ages (Figure 8a). The integrated 40Ar/39Ar age is 689 ± 3 Ma. The measured Ca/K ratios were very stable, indicating that the phlogopite has not been affected by alteration or degassing processes. In the normal isotope correlation diagram in Figure 8b, the data yield an age of 679 ± 6 Ma.

Geochemical Constraints on Rock Affinity

From its diagnostic mineralogy (carbonate-rich, but nepheline-, alkali feldspar- and melilite-absent; see Section 4.1) and whole rock geochemistry (low SiO2 and Al2O3; high TiO2, CO2, P2O5, Ba and Sr; see Section 4.2), the rock is classified as aillikite. According to [2], aillikite is a carbonate-rich member of the UML group derived from a volatile-rich, potassic, SiO2-poor magma. The affinity and a possible source of the studied rock can be constrained by comparative studies. The nearest UML occurrences of similar age and tectonic setting are from the Labrador-Greenland areas, which are parts of the North Atlantic Craton (NAC). Two aillikite occurrences in these areas, i.e., Aillik Bay and Torngat, were chosen for comparison, as their parental magmas originated at different depths [5,35,39]. The Aillik Bay aillikites are diamond-free, whereas the Torngat rocks are diamond-bearing, with accessory mineral and xenocryst assemblages indicating a deep source. The Vinoren rock shows contents of SiO2, Al2O3, K2O, CO2 and P2O5 similar to the Torngat aillikite, but lower MgO and Na2O and higher CaO (Figure 9). At the same time, the studied aillikite differs from the Aillik Bay rocks in most components. It has been proposed that the Torngat aillikite was related to partial melting of metasomatized mantle (assemblages similar to MARID = mica-amphibole-rutile-ilmenite-diopside xenoliths from kimberlites [40]) during CO2 fluxing [7]. MARID nodules and veins are highly enriched in volatiles and incompatible elements [41,42] and, according to [43], they crystallize within the diamond stability field, i.e., at >4 GPa. Although aillikites are rich in MgO and Ni, their low SiO2 content and high contents of alkalis and volatiles suggest that they cannot be produced by melting of pure mantle peridotite. Foley [44] suggested a vein-plus-wall-rock melting mechanism for the generation of lamproitic magma.
Accordingly, potassic and hydrous lamproitic magma can be produced by remelting of phlogopite-richterite-clinopyroxene-dominated veins accommodated in peridotite of the subcontinental lithospheric mantle (SCLM). Later, Foley et al. [45] and Tappe et al. [39] developed a similar model for the generation of UML melts, using a phlogopite-carbonate vein assemblage with minor apatite and Ti-oxide. Remelting of these veins can produce potassic, hybrid carbonate-ultramafic silicate magma batches corresponding to aillikite melts. This has not been directly demonstrated yet, but the process is supported by experimental data [43] and encouraged by the compositional proximity of diamond-bearing aillikite and model MARID material (see Figure 9). Both phlogopite and K-richterite can be present in MARID assemblages. However, the extremely high K/Na of the Vinoren aillikite, combined with its strongly Si-undersaturated character, indicates a dominating role of phlogopite in the source, because melting of a richterite-dominated source would have given more Si-rich melts. The difference in Na and K composition between the natural products and model MARID-like material (Figure 9) can be explained by the widely varying proportions of amphibole and mica in MARID. The low MgO/CaO ratio (<1) of the aillikite suggests that calcite is the dominant carbonate in the source. The high TiO2 content of the aillikite (2.75 wt.%) cannot be explained by melting of Ti-rich phlogopite only, suggesting the presence of ilmenite and/or rutile in the source [46].

Figure 9. Major element oxides vs. SiO2 (wt.%) of the Vinoren aillikite (gray circles). Also shown are the compositional fields of the diamond-bearing Torngat aillikite [35] and the diamond-free Aillik Bay aillikite [39] in Labrador, which are of similar age to the Vinoren rock. The black box shows the experimental melt compositions produced from MARID-type material [43].

Mineralogical Constraints on Rock Genesis

Minerals belonging to the phlogopite, oxyspinel and ilmenite groups may give important information about the mechanisms responsible for the genesis of volatile-rich ultramafic rocks. The chemical zonation observed for the groundmass phlogopite shows high kinoshitalite and tetraferriphlogopite components along the rim of the mineral. Kinoshitalite-rich rims are characteristic of kimberlitic mica [47], while tetraferriphlogopite rims are typical of lamproitic mica [36]. The elevated BaO content in phlogopite from Vinoren (up to 2.3 wt.%) is much lower than that observed in kimberlites, but higher than what is typical for phlogopite from aillikites. A BaO content of 3.5 wt.% has been recognized in UML, including diamondiferous ones, from Australia [48,49]. The high TiO2 (4-7 wt.%) in phlogopite from Vinoren is distinctly different from phlogopite from kimberlites and orangeites, but close to the compositions of phlogopite from UML and lamproites (Figure 10). Furthermore, the Al2O3 content in Vinoren phlogopite is different from both high-Al kimberlitic phlogopite and low-Al orangeitic and lamproitic phlogopite. Phlogopite from orangeites and lamproites typically shows an evolutionary trend with an increase in Fe coupled with a decrease in Al toward pure tetraferriphlogopite. For phlogopite from the Vinoren rock, this trend is only very weakly developed. In conclusion, phlogopite from Vinoren shows a hybrid character with some similarities to phlogopite from kimberlites and lamproites, but it is most similar to UML phlogopite, and it shows some affinity to MARID-like phlogopite (Figure 10).
Figure 10. Compositional fields and evolutionary trends of phlogopite from kimberlites, orangeites, lamproites and lamprophyres, after [50]. The MARID (mica-amphibole-rutile-ilmenite-diopside) compositional field is after [40] and [51]. Phlogopite compositions from the Torngat ultramafic lamprophyres (UML) are from [35].

The compositional variations of ilmenite from Vinoren indicate a hybrid nature of this mineral as well (Figure 11). The Mg-rich core (up to 12 wt.%) is typical for kimberlitic ilmenite, while the more marginal part of the mineral is similar to ilmenite from UML. The elevated MnO content (up to 3.9 wt.%) may be considered a result of the reaction trend in kimberlitic ilmenite, as shown in Figure 11 [47,52,53]. Moreover, similar Mn-rich ilmenites have been observed as inclusions in diamonds from Brazil [54,55]. Thus, phlogopite, ilmenite and spinel from the studied rock show compositions that suggest a hybrid and multistage origin of the rock. It is inferred that a primary melt originated from deep (kimberlitic), possibly diamond-bearing mantle levels. The phlogopite compositions indicate that the melt originated from a MARID-like source. During the ascent, the residual silicate melt, with its significant carbonate content, was still reactive and resulted in the formation of ilmenite, manganilmenite and titanomagnetitic spinel at shallower (UML) mantle levels.

[Figure caption fragment:] Fe2+/(Fe2+ + Mg) of spinel from the studied rock. The compositional fields for magnesian ulvöspinel/Cr-spinel from kimberlites (trend 1) and titanomagnetite from lamproites and UML (trend 2) are from [39] and [50].

Possible Geodynamic Setting of the Vinoren Aillikite

The North Atlantic Craton of Rodinia is composed of Archean blocks surrounded by Paleoproterozoic mobile belts covering large areas in northeastern Quebec, Labrador and western Greenland ([15] and references therein). Widespread lithospheric thinning occurred throughout the eastern NAC along the Laurentian margin during the Late Neoproterozoic [59-62], resulting in continental breakup and the subsequent opening of the Iapetus Ocean at 600 Ma, which was associated with rift-related UML-carbonatite-kimberlite magmatism. In central Labrador, this episode of continental stretching is recorded by remnant graben structures forming the eastward continuation of the St. Lawrence valley rift system [63]. Although Baltica today is separated from Laurentia, the two continents probably shared a common drift history during the time interval 750-600 Ma. Studies of Neoproterozoic sedimentary systems along the northwestern region of Baltica, and geochemical and geochronological studies of magmatic rocks in the same region, have been used to constrain the break-up of Rodinia [60,64,65]. Prior to the active rift-related drift at ca. 600-550 Ma [66,67], this margin was inferred to have faced Laurentia (e.g., [68-70]). During this stage, with a thin SCLM and shallow asthenosphere, several carbonatitic-ultramafic complexes formed, including the Fen Complex in South Norway [71,72], the Seiland Igneous Province in North Norway (e.g., [73]) and the Alnö Carbonatite Complex in Sweden [74,75]. The initiation of rifting along the Baltic margin is marked by the 650 Ma Egersund tholeiitic dykes (SW Norway), which were probably derived from a mantle plume [60]. The emplacement of the Vinoren aillikite pre-dates this event.
This is in accordance with the concept of [76], which suggests that continental extension was ongoing from 750 to 530 Ma but occurred in two distinct phases: (1) at 750-680 Ma, and (2) at 615-550 Ma. The first phase marked a failed rifting event between Laurentia and Amazonia, while the second phase led to the final breakup of Rodinia and the opening of the Iapetus Ocean. Our data show that the first phase was also active between Laurentia and Baltica. The geochemical and mineralogical data presented here suggest that the parental magma of the dyke originated under a relatively thick SCLM, and that the continental root might have reached the depth of diamond stability.
QoS-Based Data Aggregation and Resource Allocation Algorithm for Machine Type Communication Devices in Next-Generation Networks

Machine Type Communication (MTC) has become one of the enablers of the Internet of Things, but it faces many challenges in its integration with human-to-human (H2H) communication methods. To this end, Long Term Evolution (LTE) needs some adaptation of its scheduling algorithms so that resources are assigned efficiently to both MTC devices (MTCDs) and H2H users. The minimum amount of LTE resources that can be assigned to one user is much larger than the requirements of a single MTCD. In this paper, a QoS-enabled algorithm is proposed to aggregate MTCD traffic coming from many sources at the Relay Node (RN), which classifies and aggregates the MTCD traffic based on the source type and delay requirements. In this study, three types of MTCD source and one H2H source will be considered. Each type of MTCD traffic will be grouped into a separate queue and served with the appropriate priority. Resources are then assigned to the aggregated MTC traffic instead of individually to each MTCD, while the H2H users are directly connected to the LTE network. Two schemes of resource partitioning and sharing between the MTCDs and the H2H users will be considered: one proportional and the other moving-boundary. Simulation models will be built to evaluate the proposed algorithms. While the results obtained for the first scheme showed a clear improvement in LTE resource utilization for the MTCDs, a negative effect was noticed in the performance of the H2H users. The second scheme achieved a positive improvement for both MTCDs and H2H users.

I. INTRODUCTION

An increasing demand for high data rates, high capacity, and low latency to support a fully connected networked society that offers access to information and the sharing of data anywhere and anytime for anyone and anything has led to the introduction of a new communication paradigm called machine-to-machine communication (M2M) or machine type communication (MTC). This type of communication implies that machines have the ability to communicate with each other in a smart way with little or no human intervention [1]. Interest in MTCDs has increased in recent decades because they exist in many applications of the Internet of Things, such as, but not limited to, e-Healthcare, smart metering, smart cities, intelligent transportation systems, supply chains, surveillance monitoring systems, the prediction of natural disasters, and many social applications [2]. LTE-Advanced (LTE-A) is a candidate as the most suitable cellular technology to support MTC, due to its high data rates, large coverage area, high capacity, and spectrum efficiency. However, there are many challenges in the integration of MTCDs in an LTE-A network [3]-[5]. LTE-A has largely been designed to support H2H devices, which typically require high data rates and small delays, have a small number of users (compared to MTCDs), and transmit a large volume of data packets. In contrast, MTCDs have different characteristics, such as a large number of devices, a low data rate, a small data packet size, upload-centric applications, and power constraints [5].
This contradiction between the characteristics and requirements of H2H devices and MTCDs is considered one of the biggest challenges in the use of LTE-A. Cisco estimates that the number of MTCDs globally will increase to 14.7 billion by 2023, representing 50% of all connected devices [6]. A large number of MTCDs trying to access a base station (BS) simultaneously in an LTE-A system brings another challenge to the integration of MTCDs into LTE-A networks. Radio resource allocation is one of the largest challenges facing the integration of MTCDs in an LTE-A system. The main difficulty is inefficient resource allocation, which is due to several factors. First, H2H devices and MTCDs have quite different characteristics: H2H traffic is download-dominant, with a small number of users and a large data packet size, whereas MTCDs have upload-dominant traffic and a huge number of devices with a small data packet size. Second, in LTE-A, the minimum amount of resource blocks (RBs) that can be allocated to one User Equipment (UE) exceeds the requirements of MTCDs. For example, the smallest unit that can be allocated to one UE in LTE/LTE-A is one physical resource block (PRB), which contains 12 × 7 resource elements. This can be used to transmit hundreds of bits of data; however, most MTCDs do not require this amount of resource because of the small size of their packets, which makes it inefficient to assign one PRB to one MTCD. Therefore, a new mechanism should be designed to manage radio resource allocation for MTCDs in LTE-A systems in a more efficient manner, without creating negative effects for H2H traffic. The third challenge is power consumption, due to the power constraints of MTCDs, in particular when it is difficult or impossible to recharge the battery of an MTCD placed in a critical environment; MTCDs therefore require efficient power management. Data aggregation is one of the most practical solutions to the problem of resource allocation for MTCDs. It is achieved by clustering and multiplexing the traffic from many MTCDs into an aggregator, which in turn sends the aggregated data to the next stage. This aggregator has powerful capabilities in terms of energy, computation, and storage; it may be a cluster head of a capillary network, or it may be a cellular-based design within an LTE RN. The issues of aggregation, multiplexing, and resource allocation have been examined by many researchers, as can be seen in the related works in Section II. In this paper, a QoS-based data aggregation algorithm is proposed for LTE-A networks that incorporate both MTCDs and H2H users. The proposed data aggregator is cellular-based and is designed within the LTE-A RN. It aggregates data from different types of MTCD with different QoS requirements, classifies the traffic into different queues based on QoS, and buffers the aggregated traffic until an adaptive time threshold or an adaptive buffer-size threshold is reached. At that point, the aggregator performs frame formulation and multiplexing by accumulating the traffic from each buffer, according to its priority, into a new, large LTE frame, and then transfers the accumulated frame to the LTE evolved Node B base station (eNB). The LTE eNB therefore assigns resources to the aggregator RN instead of to individual MTCDs.
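To make the mismatch concrete, a back-of-the-envelope calculation (a rough sketch under our own simplifying assumption that every resource element carries data, ignoring control and reference-signal overhead) shows how many bits one PRB pair can carry:

```python
# Rough capacity of one LTE PRB pair in a 1 ms subframe, ignoring
# control/reference-signal overhead (a simplification, not a full link budget).
SUBCARRIERS = 12          # subcarriers per PRB
SYMBOLS_PER_SLOT = 7      # OFDM symbols per 0.5 ms slot (normal cyclic prefix)
SLOTS_PER_SUBFRAME = 2

resource_elements = SUBCARRIERS * SYMBOLS_PER_SLOT * SLOTS_PER_SUBFRAME  # 168

for name, bits_per_re in [("QPSK", 2), ("16QAM", 4), ("64QAM", 6)]:
    print(f"{name}: ~{resource_elements * bits_per_re} bits per PRB pair")

# QPSK: ~336 bits, 16QAM: ~672 bits, 64QAM: ~1008 bits -- far more than a
# typical small MTCD report needs, which is the motivation for aggregation.
```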
The rest of this paper is organized as follows: the next section presents the related works, and the contribution of this paper is introduced in Section III; Section IV introduces the system model; resource management in the proposed schemes is presented in Section V; Section VI presents the aggregation function; the performance metrics are defined in Section VII; the simulation model and configuration are presented in Section VIII; the analysis of the results is introduced in Section IX; and, finally, the references are listed.

II. RELATED WORKS

LTE resource allocation has been studied extensively in recent years, and many approaches have been proposed in the literature. They can be divided into two main categories, based on whether or not aggregation is used for MTCDs in the resource management process. The next subsection explores the first category of solutions, the second subsection explores the second category, and the third subsection presents data aggregation based on Software Defined Networks (SDN) and Fog Computing.

A. RADIO RESOURCE ALLOCATION FOR M2M WITHIN LTE-A WITHOUT DATA AGGREGATION

In this category, the proposed solutions include relaying MTCDs to the eNB while connecting H2H users directly to the eNB, with orthogonal resource partitioning between the access link and the backhaul link [7]; an energy-aware radio resource management (RRM) scheme [8]; and an energy-efficient resource allocation algorithm with the objective of maximizing bits-per-joule [16]. A context-aware resource management approach for MTCD gateways is proposed in [9], while in [10] a delay-aware radio resource scheduler algorithm that satisfies the QoS requirements of MTCDs and H2H users is presented, and a hierarchical RRM approach is proposed in [11]. In [12], a type-2 fuzzy logic controller mechanism is used for radio resource allocation for MTCDs co-existing with H2H devices within LTE, and a real-time spectrum analyzer is used for resource management in [13]. A tree-based algorithm is used in [14], and maximum energy efficiency is investigated in [15]. Each of these studies is explored in more detail below. In [7], the authors propose a radio resource partition pattern for the downlink transmission of LTE-A cellular networks with MTC communications. Multi-hop transmission is defined for MTCDs, which are connected through a machine type communication gateway (MTCG) to the eNB to mitigate the massive competition for radio resources. MTCD-to-MTCG and MTCG-to-eNB links are assigned orthogonal parts of the radio resources, while all other links are directly associated with the eNB and share the remaining channel resources. A user utility function was defined in terms of the achievable data rate, and through its maximization the corresponding radio resource allocation matrix was determined. The limitations of [7] regarding the low traffic rate and delay-tolerant features of MTCDs were addressed in [8] by presenting an energy-aware RRM scheme for MTCD/H2H co-existence scenarios in LTE networks, with guaranteed QoS requirements for different users. This was achieved through minimization of the overall transmission power and maximization of the tolerable packet delay for MTCDs. Two heuristic algorithms based on the steepest descent approach were proposed to solve this optimization problem. The first shows how to effectively transmit H2H and MTC data at the minimum power, while the second takes into account only the minimization of transmission power for H2H traffic.
The authors in [16] extend the work in [7] further by proposing an energy-efficient resource allocation algorithm with the objective of maximizing bits-per-joule capacity under statistical QoS provisioning. The proposed scheme was analyzed using mixed-integer programming, and the optimization problem was solved with canonical duality theory. A context-aware resource management approach for MTC gateways was proposed in [9] to achieve QoS provisioning by analyzing the traffic flows generated by H2H and MTC users. Various classes of H2H/MTC traffic were considered, namely conventional, streaming, interactive, background, priority alarm, time tolerant, and time controlled. Dynamic contextual information, such as service type, MTCD type, and network status, was also taken into consideration, and the MTC services were adapted to these diverse contexts. The main achievement was the mitigation of congestion and overload conditions in the system by satisfying the MTC services without degrading the QoS of existing H2H services. In [10], the authors proposed a delay-aware radio resource scheduler algorithm, which satisfies the QoS requirements of MTCDs while ensuring minimal impact on the QoS of H2H traffic. The MTCD and H2H flows are grouped into n different classes according to their remaining time to serve (RTTS), defined as the time within which a flow should be served by the scheduler to meet its delay tolerance. The RBs are assigned to classes according to a priority that is inversely proportional to the RTTS values. Moreover, within the same class, the scheduler gives higher priority to H2H over MTCD traffic to avoid the negative impact of MTCDs on H2H users. Although this approach satisfies the QoS requirements of each flow in terms of delay and data rate, the grouping of MTCD and H2H devices is managed at the traffic-flow level: there is no grouping of the devices themselves, and no details about location, mobility, or power consumption. In addition, this approach assumes direct access between the MTCDs and the eNB, which is not suitable for a massive number of devices, and starvation may occur for delay-tolerant MTCDs in the case of high congestion. In [11], the authors propose a hierarchical RRM approach. Since the amount of data consumed in typical MTCD applications is relatively small, the RBs granted to MTCDs are not fully consumed; consequently, cellular UEs (C-UEs) can exploit the unused portion that would otherwise be wasted. A two-level hierarchy is proposed: in the first level, a PRB is allocated to an MTCD as well as to a C-UE, while in the second, the MTCD delegates a portion of its unused resources to a neighboring C-UE. The results showed that, in the case of a high MTC load, only limited gain was achieved. In [12], the authors present a radio resource allocation mechanism in LTE for MTCDs co-existing with H2H devices, using a type-2 fuzzy logic controller. They assume an ideal channel where the failure of an access request can only occur as a result of a collision. Two categories of applications were considered: real-time (RT) applications, which are sensitive to delay, and non-real-time (NRT) applications, which are delay tolerant but have a minimum power requirement. The mechanism consists of two stages: in the first stage, the system evaluates the data flow based on the decision factors, while in the second, RBs are allocated by first assigning them to RT users and then assigning the remaining RBs to NRT users.
The impact of different channel conditions on radio resource utilization in real LTE networks was analyzed in [13]. A commercial RT spectrum analyzer was used to analyze uplink LTE resource utilization, which was computed as a function of the number of RBs, the data rate, and the spectrum efficiency. The main goal was to minimize the impact of MTC traffic on H2H traffic co-existing on the same LTE network. This was achieved by allowing the MTCDs to transmit data on the channel with both high probability and high quality. Another variant, a persistent resource allocation algorithm for MTCDs, was proposed in [14]. The resources of the MTCDs were allocated periodically in a recursive manner based on a tree structure. This scheme does not use any resources for RACHs; instead, it assigns all resources as uplink data channels without any additional control signaling during the lifetime of a machine. The concept of the persistent resource allocation scheme was to multiplex as many machines of different periods as possible onto a single channel, with the tree-based algorithm used to determine whether machines with different periods can be multiplexed. This scheme showed potential performance gains in supporting a larger number of devices, in comparison to coordinated access schemes, for small packet transmissions; however, it was only beneficial for periodic traffic and was not useful for aperiodic or bursty data. In [15], the authors investigated the maximum energy efficiency of MTCD data packet transmission with uplink SC-FDMA in LTE-A. They formulated energy efficiency as an optimization problem that includes modulation and coding scheme assignment, resource allocation, power control, and other constraints in the uplink of an LTE-A network. The resulting NP-hard mixed-integer linear fractional programming problem was then transformed to reduce the computational complexity and find the final optimum level of energy efficiency. They assumed different types of MTCD with different types of sensors generating different types of data packets; in this setting it was not possible to aggregate data into one large packet, since each sensor has to report its data within a specified time interval. The simulation results showed that, with limited RBs, the proposed algorithm achieved a low packet dropping rate with optimal energy efficiency in the case of a large number of MTCDs.

B. RESOURCE ALLOCATION FOR M2M DEVICES USING DATA AGGREGATION

The second category of research into resource allocation and management for MTCDs in a network co-existing with H2H users covers data aggregation and multiplexing for MTCDs. Data aggregation can be achieved in three ways: 1) Data aggregation at the MTCD level, in which the MTCD delays and aggregates its data by itself before transmitting it to the eNB. This method can increase the efficiency of resource allocation; however, in most cases MTCD traffic flows are periodic, sending only a small amount of data at predetermined intervals, which makes this solution impractical. 2) Regular H2H mobile users can be used as mobile aggregators that aggregate the MTCD traffic and attach it to their own data using their own unused resources [17], [18]. This can increase resource utilization by exploiting the unused resources of the traditional user, which would otherwise be lost.
However, this solution is not suitable for high-priority MTCD traffic that cannot wait for the availability of unused resources assigned to traditional users. 3) The most practical solution is the aggregation, clustering, and multiplexing of MTCD traffic from many devices into an aggregator (cluster head/gateway/RN), which in turn transmits the aggregated traffic to the LTE eNB; the eNB then assigns its resources to the aggregator node instead of to individual MTCDs. This solution needs a number of algorithms to manage resource allocation, aggregate the MTCD flows into one node, handle multiplexing issues, manage power consumption, and select the appropriate aggregator. The benefits of data aggregation lie not only in resource allocation efficiency, but also in other areas such as reducing power consumption [19], [20], increasing system capacity, increasing the scalability of the system to serve a massive number of MTCDs, and decreasing the signaling overhead [21], [22]. Much research has been conducted on data aggregation, clustering, and multiplexing [23]. Data aggregation can be categorized by the type of aggregator as fixed data aggregation (FDA), mobile data aggregation (MDA), or cooperative data aggregation (CDA). Alternatively, data aggregation can be classified by radio access technology into two types: cellular-based aggregators and capillary-based aggregators. In the former, the MTCDs are equipped with a subscriber identity module and connected to the network through the cellular gateway using a licensed frequency band [16], [24], [25]. In the latter, MTCDs are connected to the network through a capillary gateway using an unlicensed frequency band (e.g., ZigBee or Bluetooth Low Energy), while the aggregator itself is connected to the BS using a licensed band such as LTE-A [22], [23], [26], [49]. As this classification is the most widely accepted and used, we present these two categories in more detail.

1) FIXED DATA AGGREGATOR

Fixed data aggregators (FDAs) can be further categorized into two types: single fixed data aggregators (SFDAs) and multiple fixed data aggregators (MFDAs); in the former only one aggregator is used, while in the latter many aggregators are used. With a single data aggregator, the signaling overhead between the aggregator and the eNB is reduced, but the risk of a single point of failure is increased. In addition, a single data aggregator increases the delay of aggregated packets, and the MTCDs may overwhelm the aggregator with huge numbers of packets, increasing the ratio of dropped packets. In contrast, using multiple RN aggregators increases the signaling overhead between the MTCDs and the eNB, but provides more reliability. Single data aggregators were proposed in [27]-[29]. In [27], the small data packets from MTCDs are aggregated, delayed, multiplexed, and reformatted into a large packet at the Packet Data Convergence Protocol (PDCP) layer within the RN; resource utilization improved at the cost of delay. In contrast, a hierarchical energy-efficient data aggregation model for the MTCD uplink, minimizing the average energy density consumed, was proposed in [28], where a multi-stage, hierarchical structure was used to select some MTCDs probabilistically to work as aggregators for the data packets from other nodes. At each stage, there is a new hierarchy of aggregators that receives data from the aggregators of the previous stage.
Finally, in [29], data aggregation for massive MTC in a large-scale cellular network was introduced. The authors investigated the signal-to-interference ratio (SIR) for both the aggregation phase and the relaying phase. They also analyzed the performance of the system in terms of the average number of successful MTCDs and the probability of successful channel utilization using a stochastic geometry framework. Two resource scheduling approaches were used: a random resource scheduling (RRS) algorithm and a channel-aware resource scheduling (CRS) algorithm; the results showed that the CRS algorithm outperforms the RRS algorithm. An MFDA scheme was presented in [30], where an MTCD can be connected to one or more MTCGs at the same time. Two types of relaying techniques were introduced. In the first, which is SIR-based, the signal from an MTCD can be decoded by one or more MTCGs, so a packet may be duplicated at the eNB. In the second, which is location-based, the packet duplication drawback is overcome by allowing the MTCDs to transmit only to the closest MTCG; this improvement was accomplished at the cost of increasing the information exchanged between the MTCDs and the MTCGs. This work applied only to homogeneous MTCDs with the same type of traffic, and QoS and delay-tolerant MTC services were not taken into account.

2) MOBILE DATA AGGREGATOR

In mobile data aggregation (MDA), one or more mobile data aggregators are used to first aggregate the data from the MTCDs and then relay it to the eNB. The mobile data aggregator can be a mobile RN installed on a vehicle (e.g., a public bus or taxi), a UE that allows MTCDs to connect and send their data through it [17], [18], or an RN installed on a drone/mobile unmanned aerial vehicle (UAV) [31], [32]. Because of their mobility, when MDAs enter the vicinity of MTCDs and allow them to connect and send their data, they reduce the communication distance between the MTCD and the MDA gateway, thus decreasing the required transmission power. This scheme is best suited to the aggregation of periodic and delay-tolerant MTC traffic [18], such as smart metering, because an MTCD has to wait for an MDA to pass nearby during its journey. The use of a UE as an MDA has been introduced in many studies in the field [17], [33], [34], although some researchers prefer not to use a UE as an MDA because it causes fast depletion of the UE's battery; some authors have suggested implementing energy harvesting for the mobile UE to overcome this issue [35]. Multiplexing the bandwidth between MTCDs and regular UEs has been proposed in 3GPP Release 13 and beyond, so that MTC traffic can be trunked and multiplexed within the resources assigned to regular Device-to-Device (D2D) communication. Using only one gateway as an MDA is referred to as a single mobile data aggregator (SMDA), while using more than one is referred to as a multiple mobile data aggregator (MMDA). Using a UE as an SMDA has been proposed in [17], [33], [34], [36]. In [17], a conventional UE is used as a single mobile gateway aggregator, and D2D communication is exploited in the cellular system to aggregate and multiplex the traffic from surrounding MTCDs; the UE attaches its own data and then uses Time Division Multiple Access (TDMA) to relay all data to the eNB.
Through this method, the mobility of regular D2D users is exploited to decrease the transmission distance between MTCDs and the eNB, thereby decreasing the power consumed by MTCD transmissions. It also mitigates the capacity drawback in large-scale systems by grouping the MTCDs with regular users. Its drawback, however, is the increase in MTCD traffic delay. In [33], the authors use two applications to investigate the potential use of the smartphone as a mobile gateway for MTCDs using standard middleware. They show an improvement in system connectivity, but at the cost of smartphone battery depletion and increased delay for MTCD traffic. In [36], the authors propose a scheme for MTCD traffic aggregation and trunking within the resources of D2D users in a large-scale system. They introduce a comprehensive stochastic geometry framework to analyze the coverage area of regular users, ensuring that the MTCDs send their data over the shortest path to the nearest regular user. The model assumes that an MTCD is connected to only one UE, so that the aggregation process is achieved in a distributed manner. Multiple mobile data aggregators (MMDAs) have been proposed in [31], [32], [41]-[43]. In [31], the authors proposed a resource allocation and scheduling scheme for cluster-based MTCDs. The goal was to increase the power efficiency of the system while meeting the rate requirement of each MTC device. Each MTCD group had a cluster head (CH) that worked as both coordinator and aggregator to collect data packets from the MTCDs and send them to a flying BS on a UAV. Orthogonal frequency division multiple access (OFDMA) was used for the uplink, and a queue-rate stability approach was used to determine the minimum number of UAVs required to serve the CHs. Although this study showed good results in terms of CH power consumption and the minimum number of UAVs required, it needed additional protocols and algorithms, such as obtaining the positions of the CHs and computing the dwell time of the UAVs over the CHs. The work in [32] extends [31] by introducing an efficient deployment and mobility model for the UAVs: the mobility of the UAVs was determined and their power consumption minimized, while the MTCDs were also served with minimum transmission power. In [37], co-existing H2H users and MTCDs were considered, with the H2H users acting as MDAs to collect data from the MTCDs within their vicinity and relay it to the eNB. The resources for MTCDs were allocated based on the residual energy in each MTCD, with high priority given to MTCDs with less residual energy. Results showed that the delay constraints of both H2H and MTCD traffic were satisfied and that system energy efficiency improved, thereby extending the network lifetime. In [38], the authors introduce a stochastic geometry-based framework to analyze the coverage probability and average data rate of three-hop MTCDs distributed in co-existence with regular UEs (H2H users). The UEs were used to relay the data of MTCDs over multiple hops to the eNB without aggregating data from different MTCDs. The results showed an improvement in data rate and network coverage area, because out-of-range MTCDs can be relayed via UEs by exploiting D2D links. The mobility of the UEs was addressed by using a space-time graph to predict their locations and exploiting this to design a cost-efficient multi-hop D2D topology.
Good results were achieved in terms of data rate and extended network coverage; however, the study did not take the transmission delay into consideration. The work in [17] was extended in [39] with the proposal of three aggregation schemes: one fixed, one random, and one greedy. In all of these schemes, the UE is used as an aggregator gateway to aggregate the traffic from the MTCDs and then relay it to the eNB. The authors introduce a mathematical model to evaluate the end-to-end outage probability for the uplink data at the UEs, and show that the greedy scheme outperforms the other schemes in terms of outage probability at the MTCD. A load-balancing relay algorithm is introduced in [40], in which mobile MTCDs are grouped randomly and their data is aggregated at an MTC gateway. The MTCDs are regrouped based on the load of each gateway to balance the load and resources across gateways. Dynamic resource allocation for MTCDs on the link between MTCD and MTCG is studied, and system performance is evaluated in terms of system capacity and outage probability, with good results. However, the authors assume an information exchange (e.g., location information, grouping decisions) between the MTCDs and the BS to achieve the dynamic grouping of MTCDs, where the grouping decision is made by the data aggregation center at the BS; this results in a huge signaling overhead on the backhaul link. Furthermore, QoS is not included in this study.

3) COOPERATIVE DATA AGGREGATION (CDA)

While MDA is suitable only for delay-tolerant traffic, it shows an improvement in power efficiency and data rate. FDA, meanwhile, is suitable for delay-intolerant traffic, although it requires higher power consumption than MDA, since the location of the aggregator is fixed and therefore the distance between the aggregator and the MTCD is not optimal. It has therefore been suggested to combine the two schemes into one that offers the advantages of both; this third scheme is called CDA [18], in which fixed and mobile data aggregators cooperate to aggregate data from massive numbers of MTCDs (mMTCDs). The FDA is assigned to aggregate data from delay-intolerant mMTCDs, while the MDA is used to aggregate data from delay-tolerant mMTCDs; the single point of failure and the suboptimal location of the FDA are thus avoided. A dynamic resource allocation based on the priority of MTCDs is presented. Although the results show good performance in terms of outage probability, energy efficiency, and system capacity, resource allocation is managed by the eNB and the aggregator plays no role in resource assignment; it simply forwards the resource requests from the MTCDs to the eNB. In particular, resources are assigned based on availability and on the number of resources requested by MTCDs individually, which contradicts the concept of aggregation, namely that resource blocks are assigned to the aggregator instead of individually to each MTCD.

4) DATA AGGREGATION IN CAPILLARY NETWORKS

Data aggregation in capillary networks connected to an LTE network is introduced in [23], [26], [41]. In [23], fixed MTCDs are grouped to one fixed aggregator over a capillary connection, and the aggregator is connected to the LTE BS by a cellular channel. A fixed aggregation period is considered, which increases the packet delay.
The trade-offs between random access interaction, resource allocation, and communication latency are presented, and the results show a clear reduction in access interactions and resource allocations, at the cost of increased packet delay during transmission. Similar results are presented in [22], in which an experimental study evaluates the impact of data aggregation on signaling overhead and delay. The results show a significant reduction in signaling with data aggregation, and the reduction improves as the aggregation level (the number of aggregated MTCDs) increases. The study also shows a trade-off between delay and aggregation level, since the traffic delay increases as the aggregation level increases. However, it does not provide any details about resource management or QoS differentiation between different types of MTCD traffic. The work in [22] is expanded in [42], where the author proposes a priority-based data aggregation scheme for MTC communication over the cellular network; three types of MTCD data traffic with different priorities based on their delay requirements are presented. The author also validates the study by introducing an analytical model of the aggregator using an M/G/1 queue. The study shows good performance in terms of average waiting time and system delay, but this comes at the cost of increased power consumption. However, it does not address the issues of LTE resource allocation or MTCD traffic modeling. In addition, the study assumes that MTCD traffic has higher priority than H2H traffic when it approaches its tolerable delay threshold; therefore, at high MTCD traffic rates, the improvement in MTCD performance comes at the cost of degraded H2H performance. In [26], the authors propose a group-based radio resource allocation model in which MTCDs are grouped based on identical transmission protocols (such as WiFi, wireless personal area network (WPAN), or ZigBee) and QoS requirements (data rate and delay) to ensure QoS levels for MTCDs. The authors consider the uplink of SC-FDMA-based LTE-A networks, WiFi grouping of MTCDs, and common MTCD service features. They utilize the effective capacity concept to model the wireless channel in terms of QoS metrics, and formulate the framework as a sum-throughput maximization problem that satisfies all the constraints associated with SC-FDMA RBs and power allocation in LTE-A uplink networks. They solve the resource allocation problem by transforming it into a binary integer programming problem and then formulating a dual problem using Lagrange duality theory. In [41], an energy-harvesting gateway is proposed as an aggregator, connected to the eNB through an LTE interface and to the MTCDs through a capillary communication technology such as ZigBee IEEE 802.15.4. SC-FDMA resource allocation is studied, and system performance is evaluated in terms of data transmitted, the number of RBs, and the drop rate. The evaluation is expressed as an NP-hard optimization problem, and two transforms are applied to express the problem in a linearly separable form. A heuristic algorithm for resource allocation is also introduced and compared to the optimization solution. Data-energy causality, delay, and SC-FDMA constraints are taken into consideration.
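For context, the M/G/1 aggregator model used in [42] yields the mean queueing delay via the standard Pollaczek-Khinchine formula; the sketch below is our own generic illustration, not code from [42]:

```python
def mg1_mean_wait(arrival_rate, mean_service, second_moment_service):
    """Mean waiting time in an M/G/1 queue (Pollaczek-Khinchine formula).

    arrival_rate          -- lambda, packets per unit time
    mean_service          -- E[S], mean service time
    second_moment_service -- E[S^2], second moment of the service time
    """
    rho = arrival_rate * mean_service  # server utilization
    if rho >= 1.0:
        raise ValueError("unstable queue: rho >= 1")
    return arrival_rate * second_moment_service / (2.0 * (1.0 - rho))

# Example: deterministic 1 ms service per aggregated packet (E[S^2] = E[S]^2),
# Poisson arrivals at 0.5 packets/ms -> mean wait of 0.5 ms.
print(mg1_mean_wait(arrival_rate=0.5, mean_service=1.0, second_moment_service=1.0))
```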
TABLE 1 summarizes the comparison between some of the data aggregation studies in the literature.

C. DATA AGGREGATION BASED ON SDN

The rapid increase of data traffic in the core network requires data aggregation to improve system performance, in particular for balancing link loads. An aggregation approach with admission control to provide QoS in SDNs is introduced in [43]. The authors suggest rejecting incoming data flows if they would degrade the performance of the already admitted flows: based on the performance metrics of the admitted flows, the SDN controller decides whether to accept or reject each incoming flow. This study shows a reduction in packet loss ratio and delay. A flow-level aggregation and scheduling approach for smart metering is proposed in [44]. The authors focus on investigating fairness for traffic flows using SDN's flow-level features. Although the flow aggregation proposed for smart metering improves the overall throughput of the system, it suffers from unfairness, so the authors used NS-3 and Mininet-based evaluations to show that their aggregation and scheduling approach achieves fairness for smart meters. An efficient flow aggregation approach for delay-insensitive traffic control based on an SDN framework is proposed in [45]. The study focuses on the case of a massive number of small, delay-insensitive traffic flows. The authors introduce a new data structure called a flow tree, which is used to aggregate and decompress traffic flows according to flow size in a way that adapts to changes in network conditions. This approach reduces the cost of communication between the controller and the OpenFlow switches, as well as the cost of storage in switch memory. Given the expected increase in data traffic from the huge number of sensors used in IoT applications, and given that the header of an IoT packet consumes a large percentage of the total packet size and thus causes high overhead, SDN-based data aggregation has been one of the effective solutions for reducing the messages delivered to the SDN controller. An SDN-based aggregation/disaggregation approach for sensor data in IoT applications has been introduced in [46]. The authors exploited the P4 switches proposed in [47]. Two P4 switches were used: one receives all data packets from the IoT sensors, buffers them, and concatenates them with some metadata into a large packet transferred to the second P4 switch, which in turn performs disaggregation to extract the original packets. A noticeable delay occurs in the disaggregation process. The authors analyzed their work using the IoTtalk platform and showed a decrease in packet loss, an improvement in system throughput, and a reduction in communication between the SDN controller and the switches. The work presented in [46] introduces a mathematical analysis of the streams generated from the gathered packets, without covering design and implementation issues and without reporting maximum throughputs. Similar work in [48] by the same authors addresses the design and implementation issues of the aggregation and disaggregation approaches and their measured throughputs. The results show an improvement in the maximum throughput during aggregation, but a noticeable delay was incurred during the disaggregation process.
Moreover, they extended their work in [49] by removing the limitations of a fixed payload size and a fixed maximum number of aggregated packets, supporting different payload sizes and allowing any number of aggregated packets as long as the maximum transmission unit (MTU) is not exceeded. In addition, the aggregation and disaggregation throughputs were improved and can reach the line rate (i.e., 100 Gbps). The authors in [50] proposed a layer-2 (L2) communication protocol for Internet of Things programmable data planes, referred to as the Internet of Things Protocol (IoTP). The main goal of this protocol is to realize data aggregation algorithms within the hardware switches, at the network level. The process takes network status and information into consideration, such as the MTU, delays, link bandwidths, and the underlying communication technology, to enable data aggregation algorithms dynamically. It supports different IoT communication technologies, different aggregation algorithms, and implementations of multi-level data aggregation. The authors implemented IoTP in the P4 language using an emulation-based Mininet environment, and showed a noticeable improvement in data aggregation. In [51], the authors proposed an LTE-WiFi spectrum aggregation (LWA) scheme based on the M-CORD platform, which is used as an SDN platform providing network function virtualization (NFV), cloud computing, edge computing, and virtualized RAN capabilities. They integrated WiFi with LTE in a very tightly coupled scheme: data from both networks is aggregated at the LTE PDCP layer, while a top-level network configuration is provided to the network orchestrator (XOS) of M-CORD. They showed a significant improvement in system throughput compared to other similar scenarios. The traffic was split between LTE and WiFi based on the packet number, with even-numbered packets sent over LTE and odd-numbered packets over WiFi; the resulting reordering function caused an increase in packet delay. In [52], the authors proposed an LTE-WiFi data aggregation scheme at the RAN level with the assistance of SDN (LWA-SA). They assumed a UE with dual connectivity to both LTE and WiFi. Traffic was split between LTE and WiFi based on the QoS requirements, and the best WiFi access point (AP) was selected using a genetic algorithm (GA).

A novel SDN-based smart gateway (Sm-GW) was introduced in [53]. The Sm-GW was inserted between small-cell eNBs and the gateways of multiple operators, such as LTE S/P-GWs. In order to manage the backhaul link capacity, a scheduling algorithm was suggested for backhaul resource sharing with the assistance of an SDN orchestrator. The results showed that the SDN orchestrator provided flexible resource management between the Sm-GWs, and hence improved the utilization of the backhaul bandwidth. A Fog-computing-based Sm-GW for IoT e-Health applications was presented in [54]. The proposed system exploits its position between the LAN/PAN/BAN and the WAN to collect health and context information from different sensors. It includes services such as local data processing, local storage, data mining, and data security and privacy, in addition to data transmission control, enabling efficiency in terms of energy and communication bandwidth. An intelligent intermediate layer was introduced between the sensor nodes and the cloud to provide smooth and efficient e-Healthcare services while supporting patient mobility.
Complete system implementation was presented, together with an Early Warning Score (EWS) notification system to report any emergency case. In [55], a gateway for the Cloud of Things (CoT) was introduced to manage things and present their data to the end user. Lightweight virtualization technologies were exploited to improve the efficiency of the designed gateway and to decrease the impact on performance. The gateway mitigates unnecessary communications between the gateway and the things, thereby reducing energy consumption. However, this study has some limitations, as it needs additional adaptation algorithms to reduce the communication between the things and the cloud [56]. Fog Computing platforms with Sm-GWs have been proposed for IoT devices and wireless sensors in [57]. The main purpose of Fog Computing is to insert an intermediate layer between the underlying devices and the cloud network to provide pre-processing, monitoring, storage, and security, and the Sm-GW plays an important role in achieving these functions. Furthermore, the Sm-GW is used to filter and mitigate IoT communications by pruning the data before sending it to the cloud server, while meeting the constraints of the underlying devices and satisfying the requirements of high-level applications. An Sm-GW based on Fog Computing was proposed in [58]. It has the ability to analyze the data before transmitting it to the cloud and can differentiate between real-time and non-real-time data. To utilize the available bandwidth efficiently, it responds to real-time data and sends it to the cloud directly, while non-real-time data is pre-processed and filtered so that only the meaningful data is sent to the cloud.

III. THE CONTRIBUTION OF THIS PAPER

This paper introduces a QoS-based data aggregation algorithm for MTCDs and resource allocation in an LTE-A network. Various types of MTCD with different QoS requirements are considered. An aggregator is designed inside the RN (a layer-3 in-band LTE-A RN) to aggregate data from different types of MTCD, process it, reformat it, and then relay it to the LTE eNB. The processing task consists of classifying the data into three priority classes, buffering it so as not to exceed its delay tolerance threshold, and then sending it to the LTE eNB. The priority of each class is assigned based on its level of tolerance to delay. Unlike previous research, this paper uses an adaptive maximum aggregation delay and an adaptive transport block size (TBS) threshold. These two parameters are very important for controlling the aggregation process so as to increase resource utilization efficiency at a minimal cost in delay. Two resource allocation and scheduling schemes are used in this paper: a data-buffer-aware scheduling scheme and a moving boundary point scheme. In the former, the LTE resources are partitioned between aggregated users (MTCDs connected to RNs) and regular users (H2H) in proportion to their data buffer sizes. In the second scheme, the LTE resources are shared and partitioned in a hybrid manner to guarantee a minimum RB requirement for H2H users, while also preventing the MTCDs from entering a starvation state. A simulation model using MATLAB is designed to analyze the system performance in terms of throughput, utilization, loss ratio, and average packet delay.
This paper also presents a survey of the literature covering the majority of works in the field of study, including smart gateways, data aggregation based on SDN and Fog Computing, and the new works of 2020 (i.e., [40], [41], [50]-[53], [58]).

IV. SYSTEM MODEL

The system considered to evaluate the proposed algorithm is shown in FIGURE 1. It consists of one LTE base station (eNB) and a number of MTCDs co-existing with a number of H2H devices supported by LTE. The H2H devices are assumed to connect directly to the LTE BS, while the MTCDs are first connected to an RN acting as an aggregator and then to the LTE BS. Three fixed layer-3 in-band RNs, characterized according to the 3GPP specifications in [59], are installed within the coverage area of the LTE BS. Each RN works as an intermediate node serving the MTCDs within its coverage area. Each RN has dual interfaces and dual functions: it works as a base station from the point of view of the users, through the Uu interface, and as a UE from the point of view of the LTE base station, through the Un interface. In order to manage the resources of the LTE eNB and the RN efficiently, we assume that the MTCDs within the coverage area of each RN are clustered and aggregated using an aggregator implemented inside the RN, as shown in FIGURE 2. The aggregator collects the packets sent by the MTCDs connected to the RN, possibly delays them, reformats them, and forwards them to the eNB. Through this process, the small packets from the MTCDs are aggregated and reformatted so that the LTE RBs assigned to the RN are exploited more efficiently, since a single RB allocated by the eNB has more capacity than is needed by one MTCD. Packets generated by MTCDs are therefore first aggregated, as shown in FIGURE 3, and then allocated RBs according to policies that will be defined later on. This technique is very efficient for MTCD applications that generate small, delay-tolerant packets. Three types of MTC traffic source, namely e-Healthcare, traffic monitoring and smart metering, and one H2H application with video traffic will be considered. The MTC sources will be served according to a semi-priority scheme, where the e-Healthcare source has the highest priority and smart metering the lowest. The traffic characteristics of each of the four source types are shown in TABLE 2.

V. RESOURCE MANAGEMENT IN THE PROPOSED SCHEMES

The resources of the eNB are allocated to UEs and MTCDs in two stages. In the first stage, the eNB PRBs are partitioned between direct UEs and RNs; the RBs assigned to an RN are exploited to transmit the data buffered within its aggregator to the eNB through the backhaul link (the link between RN and eNB). In the second stage, the active MTCDs reuse the subcarriers that are not used on the backhaul link to transmit their data to the RN, so as to avoid self-interference. We assume there is no interference between the access links of one RN cluster and another. Two schemes are used for partitioning the resources between regular UEs and RNs. In the first, denoted here the proportional fairness scheme, the LTE resources are partitioned between regular users and RNs in proportion to the data buffered at each. In the second, a moving boundary point is used to split the LTE resources into three parts: one reserved for MTCDs, a second reserved for H2H users, and a third shared between H2H and MTCDs according to their requirements.
A hard threshold value is used to partition the resources between H2H users and MTCDs. The next subsections explain the two resource partitioning schemes.

A. PROPORTIONAL FAIRNESS RESOURCE PARTITIONING SCHEME

In the proportional fairness scheme, the LTE resources are partitioned between H2H users and MTCDs based on their buffered data sizes. The proposed resource allocation scheme is implemented in two stages. In the first stage, the LTE resources are partitioned between the H2H direct users and the backhaul links of the relay nodes, based on the size of the data buffered at each H2H user and each RN respectively. A buffer-aware proportional fairness algorithm similar to that in [60] is used; however, while the resources in [60] are partitioned between the RNs and direct users based on the number of users attached to each RN and the number of direct users, in our algorithm the resources are partitioned based on the data buffered at the aggregator inside each RN and at each regular user. The Buffer State Report is used to inform the eNB about the amount of data buffered at its clients, and the LTE resources are then assigned to the RNs and UEs according to the following equations:

RB_RNj = RB_tot × BF_RNj / (Σ(i=1..N) BF_RNi + Σ(m=1..H) BF_UEm)

RB_UE = RB_tot × Σ(m=1..H) BF_UEm / (Σ(i=1..N) BF_RNi + Σ(m=1..H) BF_UEm)

where RB_tot is the total number of resource blocks in one LTE sub-frame; BF_RNj is the size of the data buffered at the jth RN; BF_UEm is the size of the data buffered at the mth H2H user; N is the number of RNs; H is the total number of H2H devices; RB_RNj is the number of RBs assigned to the jth RN; and RB_UE is the portion of RBs assigned to all regular users. RB_UE is distributed to all regular (H2H) users in a round-robin manner. The RN exploits the resources granted by the eNB to send the aggregated packets in the aggregator's queues over the backhaul link to the eNB, serving the queues according to their priority: first the high-priority queue buffering the e-Healthcare traffic, then the road monitoring traffic, and finally the smart metering traffic. An additional improvement provides a balance between the second and third priorities in the case where the delay of the head-of-line (HOL) packet in the third queue reaches its threshold value while the HOL packet in the second queue still has a tolerable delay; this is explained further in Section VI. In the second stage, the RBs for the MTCDs connected to each RN on the access link are assigned. We suggest that the MTCDs use LTE SC-FDMA and that each RN manages the available resources by reusing the frequencies used by the other RNs. In more detail, we assume the RNs are spatially isolated, so that the RBs used on the access link of one RN can be reused by the other RNs, while self-interference between the Uu and Un interfaces of an RN is avoided: the RBs used by an RN on the backhaul link cannot be used on its access link in the same TTI. For simplicity, we let the RN use a round-robin mechanism to allocate the available resources to the active MTCDs for their transmissions on the access links. Whenever an MTCD acquires resources from the RN, it uses them to send its data to the RN, where the aggregation function then takes place.
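A minimal sketch of this first-stage, buffer-aware partitioning (our own illustration of the equations above; the function and variable names are hypothetical, and integer rounding is handled naively):

```python
def partition_rbs(rb_total, rn_buffers, ue_buffers):
    """Split the subframe's RBs between RNs and direct H2H users in
    proportion to the buffered data reported for each (RB_RNj, RB_UE)."""
    total_buffered = sum(rn_buffers) + sum(ue_buffers)
    if total_buffered == 0:
        return [0] * len(rn_buffers), 0
    rb_per_rn = [int(rb_total * b / total_buffered) for b in rn_buffers]
    rb_ue = rb_total - sum(rb_per_rn)  # remainder: H2H share, served round-robin
    return rb_per_rn, rb_ue

# Example: 50 RBs; three RNs and two H2H users report their buffer states (bytes).
rn_rbs, ue_rbs = partition_rbs(50, rn_buffers=[4000, 2000, 2000], ue_buffers=[1000, 1000])
print(rn_rbs, ue_rbs)  # -> [20, 10, 10] 10
```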
B. MOVING BOUNDARY POINT RESOURCE PARTITIONING SCHEME

This scheme provides a hybrid mechanism for resource sharing and partitioning between H2H and MTCD users. First, the number of RBs required by each user is estimated based on the size of the data buffered at that user and its channel quality indicator (CQI). In the same way, the number of RBs required by each RN is estimated based on the data buffered at the RN and the CQI between the RN and the eNB. The RBs in this scheme are divided into three parts. The first part is reserved for H2H users to guarantee the minimum data rate for each H2H user, known as the guaranteed bit rate. The second part is reserved for MTCDs to guarantee RBs for high-priority MTCD traffic, while the third part is shared between the H2H users and the MTCDs. The shared part can be exploited by either type of user, based on their requirements, to ensure that the delay tolerance value is not exceeded. A predefined moving boundary point is set as a threshold to split the shared part of the RBs between H2H and MTCD users; this threshold is elastic and can be moved to increase the RBs assigned to H2H users when there are free RBs in the other part, and vice versa. If only one type of user requires resource blocks while the other type has no data awaiting transmission, all the RBs are available to the type of user that needs them. FIGURE 4 shows the concept of moving boundary point resource sharing and partitioning. This scheme guarantees that some RBs are reserved for H2H users, so that they are not affected by the huge number of MTCDs. At the same time, it prevents the MTCDs from entering a starvation state and guarantees at least a minimum RB allocation for high-priority MTCDs. In addition, it provides elastic resource partitioning between H2H and MTCD users: the moving boundary point can be adjusted to increase the resources assigned to H2H users, but this comes at the cost of the RBs assigned to MTCDs.
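A simplified sketch of the moving-boundary split described above (our own reading of the scheme; the reserved shares, names, and the H2H-first ordering in the shared pool are illustrative assumptions):

```python
def moving_boundary_split(rb_total, demand_h2h, demand_mtc,
                          reserved_h2h, reserved_mtc):
    """Share RBs between H2H and MTCD traffic with reserved minima and an
    elastic shared region (the moving boundary point). Demands are in RBs."""
    shared = rb_total - reserved_h2h - reserved_mtc
    assert shared >= 0, "reservations exceed the subframe"

    # Serve each side from its guaranteed part first, capped by actual demand.
    h2h = min(demand_h2h, reserved_h2h)
    mtc = min(demand_mtc, reserved_mtc)

    # Unused reserved RBs fall back into the shared pool (elastic boundary).
    shared += (reserved_h2h - h2h) + (reserved_mtc - mtc)

    # Serve remaining demand from the shared pool; H2H first here, though the
    # boundary point could equally be biased toward the MTCD side.
    extra_h2h = min(demand_h2h - h2h, shared)
    h2h += extra_h2h
    mtc += min(demand_mtc - mtc, shared - extra_h2h)
    return h2h, mtc

# Example: 50 RBs, 15 reserved for H2H and 10 for MTCDs; demands of 30 and 25.
print(moving_boundary_split(50, demand_h2h=30, demand_mtc=25,
                            reserved_h2h=15, reserved_mtc=10))  # -> (30, 20)
```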
VI. AGGREGATION FUNCTION

The aggregation function is implemented within the RN and takes place when MTCD traffic arrives at the RN. The function aggregates the data from the different types of MTCD and classifies it into different queue buffers based on priority. Each queue inside the aggregator is assumed to have infinite capacity, and aggregated packets whose delay exceeds their tolerable delay limit are dropped. Each class has its own buffer in the RN, as shown in FIGURE 3. We assume three types of traffic with different priorities: the first (highest) priority for e-Health traffic, the second priority for MTCD traffic monitoring, and the third (lowest) priority for MTCD smart metering traffic. The aggregator accumulates the traffic from the different MTCDs in its buffers and delays it until one of two thresholds is reached: the maximum tolerable delay threshold D_i,max or the maximum buffer size threshold Buf_max (whichever occurs first). These two parameters are very important for controlling the aggregation process, and they should be selected so as to improve RB utilization while keeping the traffic delay below the tolerable delay threshold of every traffic type. As the aggregation delay increases, the resource utilization increases, but only up to a limit beyond which any further increase in aggregation delay degrades the system's performance, adding delay without any gain in resource utilization. Three different tolerable delay thresholds are used, one for each type of traffic: the delay threshold of the highest priority traffic is the smallest, while that of the lowest priority traffic is the largest. It is therefore expected that the aggregation delay of the high-priority traffic will be smaller, while the lowest-priority traffic will be delayed more. Each packet has its own timer, and when the timer reaches the tolerance threshold it triggers the RN to request RBs to transmit the aggregated data. Similarly, the aggregated data is accumulated until its size reaches the threshold value, beyond which any further increase in the amount of aggregated data would degrade system performance. The two parameters are computed adaptively from the type of aggregated data, its tolerable delay, and the estimated TBS of each RN. The TBS is defined as the number of bits that can be transmitted, given the RBs used, the modulation rate, and the code rate. For an LTE RN, the TBS depends on the number of RBs assigned to the RN, and the CQI between the RN and the eNB determines the modulation rate used to transmit the aggregated data. The aggregator traces the history of RB assignments to the RN (e.g., the last 10-15 assignments) and uses it to estimate the number of RBs likely to be assigned to the RN in the next time slot (TTI). The aggregator then estimates the TBS for the next slot, and this TBS is used as the threshold up to which the aggregated data is accumulated:

TBS = (nPRBs × nDatasymbol − RE) × modulationrate − CRC

where nPRBs is the total number of RBs assigned to the RN; nDatasymbol is the number of data symbols within an RB in one subframe (12 × 7 × 2); RE is the number of resource elements used for synchronization; modulationrate is the modulation order, based on the SINR and the channel quality between the RN and the eNB; and CRC is the cyclic redundancy check overhead (equal to 24 bits in LTE). When the RN is granted RBs to transmit its traffic, the aggregator collects the traffic from the different buffers, starting with the highest-priority queue, then the second priority, and finally the third priority, until the accumulated data fills the granted RBs; in this way, the traffic with the highest priority is transmitted first. To keep the drop rate of each queue as low as possible while packets are buffered in the RN, we apply a further ordering within each queue based on tolerable delay, with the packet having the lower tolerable delay served first. To further improve system performance, we allow traffic of the third priority class to be served before traffic of the second priority class when the HOL packet of the third class has reached its delay threshold while the HOL packet of the second class can still tolerate further delay without exceeding its own threshold. This provides some balance between the second and third priority classes and prevents the third class from entering a starvation state, thus decreasing both its delay and its drop rate, at the cost of a small increase in delay for the second priority class.
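A compact sketch of the two flush triggers and the history-based TBS estimate (our own simplification; the class names, the 50 ms threshold, and the bookkeeping are hypothetical, with the constants taken from the description above):

```python
from collections import deque

CRC_BITS = 24
SYMBOLS_PER_PRB_PAIR = 12 * 7 * 2  # data symbols per RB in one subframe

class AggregatorQueue:
    """One priority queue inside the RN aggregator with its two flush triggers."""

    def __init__(self, delay_threshold_ms, history_len=15):
        self.delay_threshold_ms = delay_threshold_ms  # D_i,max for this class
        self.packets = deque()                        # (arrival_ms, size_bits)
        self.rb_history = deque(maxlen=history_len)   # recent RB grants to the RN

    def estimated_tbs(self, modulation_bits, sync_re=0):
        """TBS estimate for the next TTI from the average of recent RB grants."""
        if not self.rb_history:
            return 0
        n_prbs = sum(self.rb_history) / len(self.rb_history)
        return max(0, (n_prbs * SYMBOLS_PER_PRB_PAIR - sync_re)
                      * modulation_bits - CRC_BITS)

    def should_flush(self, now_ms, modulation_bits):
        """Flush when the head-of-line delay or the buffered size hits a threshold."""
        if not self.packets:
            return False
        hol_delay = now_ms - self.packets[0][0]
        buffered_bits = sum(size for _, size in self.packets)
        return (hol_delay >= self.delay_threshold_ms
                or buffered_bits >= self.estimated_tbs(modulation_bits))

# Example: a queue with a (hypothetical) 50 ms tolerance; the RN was recently
# granted 4 PRBs per TTI and uses 16QAM (4 bits per symbol).
q = AggregatorQueue(delay_threshold_ms=50)
q.rb_history.extend([4, 4, 4])
q.packets.append((0, 3000))
print(q.estimated_tbs(4), q.should_flush(now_ms=10, modulation_bits=4))  # 2664.0 True
```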
VII. PERFORMANCE METRICS
This section sets out the performance metrics used to evaluate the proposed algorithm: system utilization, average packet delay, average drop rate, and average throughput.

A. SYSTEM UTILIZATION
System utilization is one of the most important key performance indicators for evaluating systems of this type. The proposed aggregator is expected to improve system utilization by exploiting the RBs assigned to the RN efficiently. System utilization is defined as the average percentage of the TBS used to transmit the aggregated data; in other words, it reflects the effective throughput, or spectral efficiency, in bits per second per hertz. In LTE, the TBS refers to the physical layer (PHY) payload transmitted over the radio interface, which consists of the MAC packet plus a 24-bit CRC overhead. The average utilization of each priority class is computed by dividing the average throughput of all users belonging to that class by the maximum throughput of the system; the throughput of each user is defined as the total number of bytes transmitted correctly over the simulation time. The maximum throughput of the system is computed for the ideal case at the BS, where the channel quality is optimal, the highest modulation and code rates are used, and the total number of RBs available in the system is employed; it can be defined as the maximum number of bytes that can be transmitted over time in the ideal environment of that system. Here, M_i denotes the number of MTCDs belonging to priority class i.

B. PACKET DELAY
Packet delay is measured from the time a packet is generated by the user until it arrives at the eNB. It comprises two delay terms: the delay of the packet within the user's buffer and the delay of the packet inside the aggregator's buffer. Packet delay is computed by creating a timestamp for each packet when it is generated. When the packet arrives at the eNB, its delay is calculated; the average delay is then computed over all packets belonging to the same user, and over all users belonging to the same priority class, to obtain the average delay for each class. Packet delay is an important metric that must not exceed the tolerable delay of each traffic type, and it can be used to evaluate how well the proposed aggregator fulfills the QoS of each traffic type.

C. PACKET LOSS RATIO
Because packets are held in the user buffer until the user is granted a chance to transmit directly to the eNB or to the aggregator, and are further delayed within the aggregator, some packets may exceed their tolerable delay, be dropped, and be considered lost. We assume that packets are dropped only because their delay exceeds the tolerable threshold, not because of transmission errors. The loss ratio of each user is computed by dividing the number of lost packets by the number of packets generated by that user; the loss ratio is then averaged over all users belonging to the same priority class to obtain an average loss ratio per class.

D. AVERAGE THROUGHPUT
Average throughput is one of the major key performance indicators for evaluating communication systems. Throughput is defined as the amount of data transmitted correctly by each user over a given time period. In this paper, the amount of data transmitted by each user is computed in each TTI, averaged over all users of the same priority class, and then averaged over all simulation time slots to obtain the average throughput for the simulation. Throughput is used to measure and evaluate the system's QoS provisioning.
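The four metrics can be computed from per-user logs roughly as follows. A minimal sketch in Python (the paper's simulator is Matlab); the log schema and function names are assumptions made for illustration.

```python
import numpy as np

def class_metrics(per_user, sim_time_s, max_tput_bps):
    """Aggregate the logs of one priority class.

    per_user: list of dicts with 'bytes_ok' (bytes received correctly at the
    eNB), 'delays_ms' (per-packet delays), 'generated' and 'dropped' counts.
    """
    tput = np.array([u["bytes_ok"] * 8 / sim_time_s for u in per_user])
    return {
        "avg_throughput_bps": float(tput.mean()),
        "utilization": float(tput.mean() / max_tput_bps),  # vs. ideal system
        "avg_delay_ms": float(np.mean([np.mean(u["delays_ms"]) for u in per_user])),
        "loss_ratio": float(np.mean([u["dropped"] / u["generated"] for u in per_user])),
    }
```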
VIII. SIMULATION MODELS
To evaluate the proposed algorithm, we built a Matlab simulation model based on the one in [61], [62]. The model in [61] was modified to support LTE-A uplink transmission, and the RN was upgraded with a built-in aggregator. The simulation program was run twice with the same parameters, shown in Table 3: once to evaluate the first proposed scheme (the proportional fairness resource partitioning scheme) and once to evaluate the second proposed scheme (the moving boundary point resource partitioning scheme). In the first scheme, one LTE-A cell and three types of MTCD with different traffic characteristics are assumed. The simulation was run by varying the mean arrival rate of the MTCDs while keeping it fixed for the H2H users. The mean arrival rate of MTCD traffic was increased in each run by increments of 5% of its initial value, where the initial values were 1/15 for e-Healthcare traffic, 1/20 for road monitoring traffic, and 1/30 for smart metering traffic. Thus, the smart metering traffic had the lowest arrival rate, the road monitoring traffic the second-highest, and the e-Healthcare traffic the highest. The simulation was run using the aggregator described in Section VI and the resource management described in Section V. The simulation was also run without the aggregator, with all MTCD and H2H users connecting directly to the eNB under a round-robin scheduling algorithm, and the results were compared. In the second scheme, the simulation was run with the same configuration parameters as in the first scheme, while also using the moving boundary point for resource sharing and partitioning between H2H and MTCD users, as described in Section V-B; the results were again compared with the case in which aggregation is not used.

IX. RESULTS AND PERFORMANCE ANALYSIS
This section is divided into two subsections: the first presents the results for the first proposed scheme (the proportional fairness scheme), while the second presents the results for the second proposed scheme (the moving boundary point scheme). All results in these two subsections are presented as a function of the mean arrival rate. In each case, the simulation is run at 15 different mean arrival rates, with the mean arrival rate increased at each point by a 5% increment of its initial value. Moreover, in each case the simulation is repeated 10 times, each run lasting 20,000 TTIs (i.e., 20,000 ms), and the results are averaged over the 10 runs with a 95% confidence interval; the observed maximum error between runs is less than 0.50% of the mean value. In the figures, dashed curves represent the system performance without the aggregator, while solid lines represent the performance when the aggregator is used.
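The run-averaging with a 95% confidence interval can be reproduced with a few lines. A sketch under the usual Student-t assumption; the sample values are placeholders, not results from the paper.

```python
import numpy as np
from scipy import stats

def mean_ci(samples, confidence=0.95):
    """Mean and half-width of the confidence interval over repeated runs."""
    samples = np.asarray(samples, dtype=float)
    half = stats.sem(samples) * stats.t.ppf((1 + confidence) / 2, len(samples) - 1)
    return samples.mean(), half

# e.g., ten runs of total utilization at one arrival-rate point (illustrative)
print(mean_ci([0.71, 0.70, 0.72, 0.71, 0.70, 0.71, 0.72, 0.71, 0.70, 0.71]))
```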
A. FIRST SCHEME: PROPORTIONAL FAIRNESS RESOURCE PARTITIONING (PFRP)
1) UTILIZATION
The average utilization of all traffic types in both cases (with and without the aggregator) is shown in FIGURE 5. When the aggregator is used (solid lines), the average utilization of all MTCD traffic increases as the mean arrival rate increases. This comes at the cost of a decrease in the average utilization of H2H users (solid black line with '+' mark). The road monitoring traffic has the highest utilization despite its medium priority, because it has the highest arrival rate. Comparing the two cases shows a significant improvement in the utilization of all MTCD traffic when the aggregator is used: approximately a 16% increase for the road monitoring traffic and approximately a 2% increase for the e-Healthcare MTC traffic. Furthermore, the utilization improvement for H2H video traffic is close to 5%, although it decreases as the MTC arrival rate factor increases, because the resources are consumed by the MTCDs. A solution that maintains H2H performance as MTCD traffic varies is proposed in the second scheme. A noticeable decrease in the utilization of smart metering traffic occurs at high arrival rates: because smart metering has the lowest priority, the other types of MTCD traffic are prioritized at the cost of its performance. Despite these differences in utilization among the traffic types, the total utilization of the system with the aggregator improves as traffic increases, as shown in FIGURE 6.

2) MTCD DELAY
FIGURE 7 presents the average delay (in ms) for all types of MTCD traffic on a logarithmic scale. With the aggregator, at low arrival rate factors the results show a significant reduction in the average delay for all types of MTCD traffic, while at high arrival rate factors the average delays of the smart metering traffic (solid green curve with '+' mark) and the road monitoring traffic (solid blue curve) are higher than without the aggregator. This behavior is expected, since the aggregator delays the traffic until a predefined data size or time aggregation level is reached; it is offset by an increase in system utilization. The figure also shows that the smart metering traffic has the highest delay (solid green curve with '+' mark): it has the lowest priority and is therefore delayed in the aggregator to provide better performance for the higher-priority traffic. The e-Healthcare traffic has the lowest average delay, since it has the highest priority; this validates the QoS provisioning of the proposed algorithm for all traffic in terms of delay.

3) PACKET LOSS RATIO
FIGURE 8 displays the loss ratio of all traffic types on a logarithmic scale. With the aggregator, there is no loss for the e-Healthcare traffic because it has the highest priority, while there is a slight loss for the road monitoring traffic (solid blue curve) at high arrival rate factors. In addition, the figure shows a high loss ratio for the smart metering traffic (solid green curve with 'x' mark) at high arrival rates: because smart metering has the lowest priority, the system cannot serve all traffic at high arrival rates and starts to drop the lowest-priority traffic. Without the aggregator (dashed curves), there is packet loss for all types of MTCD traffic; the loss ratio increases with the arrival rate factor and is higher than when the aggregator is used. The black curves show the loss ratio of the H2H video traffic: with the aggregator (solid curve with '•' mark), it is lower than without the aggregator (curve with '|' mark) at low traffic rates, while the relation is reversed at high traffic rates. This is because, at high MTCD arrival rate factors, the MTCDs consume the resources, so the resources allocated to H2H users decrease, the partitioning between MTCD and H2H users being based on the data buffered by each.
This ultimately leads to an increase in the loss ratio for H2H users. FIGURE 9 presents the loss ratio of the whole system. It shows a significant improvement, with the loss ratio decreasing by approximately 15% when the aggregator is used at high traffic rates; this improvement falls to 6% at low arrival rates. FIGURE 10 shows that the size of data transmitted increases as the arrival rate factor increases. The figure also shows that all MTCDs transmit more bytes when the aggregator is used (solid curves) than the same devices transmit without it. At high arrival rate factors, the smart metering traffic (green curve with 'x' mark) shows a small decrease in data transmitted, because it has the lowest priority and insufficient RBs are available for it. The road monitoring traffic transmits the largest volume of data, while the e-Healthcare traffic transmits the least, owing to their data rates and packet sizes. FIGURE 11 shows the utilization of the system in terms of the number of resource blocks (RBs) used by MTCD and H2H users in both cases (with and without the aggregator). Using the aggregator decreases the number of RBs used by the MTCDs (solid red line) while increasing those of the regular H2H users (blue curve). However, when the arrival rate of MTCD traffic exceeds a certain limit, the RBs used by the MTCDs with the aggregator become greater than those used by the MTCDs without it (at an arrival rate of 50%). Comparing the results of FIGURE 10 with those of FIGURE 11 makes it clear that the MTCDs transmit a larger amount of data when the aggregator is used, while using fewer RBs (solid red line).

B. SECOND SCHEME: MOVING BOUNDARY POINT RESOURCE PARTITIONING (MBPRP)
Although the PFRP scheme improves the performance of the system in general, and of the MTCDs in particular, it does not shield H2H users from the negative effects of increasing MTC traffic: as shown in the previous sections, the improvement for the MTCDs comes at the cost of degrading H2H performance. A second scheme is therefore proposed to provide QoS for the MTCDs while maintaining good performance for H2H users. The MBPRP scheme for resource partitioning between H2H users and M2M devices was described in Section V-B, and its results are presented in the next subsections.

1) UTILIZATION IN THE SECOND SCHEME
FIGURE 12 presents the average utilization of all traffic types in the MBPRP scheme. It shows that the utilization of H2H video traffic, e-Healthcare MTCD traffic, and road monitoring MTCD traffic improves when the aggregator is used. It also shows that this scheme keeps the utilization of H2H users at an approximately fixed level, with little or no effect from the increase in MTCD traffic, at the cost of the utilization of the lowest-priority MTCD traffic (smart metering, solid green curve with '×' mark), which degrades as the arrival rate factor increases. As in the first scheme, the utilization of the MTCD traffic with the highest and second-highest priorities increases as the arrival rate factor increases. This scheme guarantees that the highest-priority MTCD traffic (e-Healthcare, solid red curve) gets the required resources while shielding H2H users from the effect of the increasing MTCD arrival rate.
2) LOSS RATIO IN THE SECOND SCHEME
FIGURE 13 presents the total loss ratio for all traffic types in the second scheme. The loss ratio increases as the arrival rate increases, and using the aggregator (solid lines) decreases the loss ratio for all traffic types except smart metering (solid green lines with 'x' mark), because the smart metering traffic has the lowest priority. Comparing FIGURE 13 with FIGURE 8 makes it clear that the loss ratio for H2H traffic with the aggregator is lower in this scheme: it does not exceed 7%, whereas in the first scheme it exceeds 60%, as shown in FIGURE 8. This shows how the scheme protects H2H traffic from the negative effects of increasing M2M traffic, at the cost of an increased loss ratio for smart metering traffic.

X. CONCLUSION
A QoS-based data aggregation algorithm was presented for MTCD traffic integrated with coexistent H2H users within LTE-A. The goal of the algorithm is to mitigate the effects of MTC traffic on the performance of H2H users while maintaining the QoS of each traffic type. To achieve this, an aggregator with an aggregation delay and an aggregated data size that adapt to each type of traffic was used. Three types of MTCD traffic served with different priorities were considered: e-Healthcare, road monitoring, and smart metering. Two resource allocation schemes were presented: a proportional fairness, data-buffer-aware resource partitioning scheme and a moving boundary point scheme. In the first, the LTE resources were partitioned between the RNs and the H2H users in proportion to the size of the buffered data; in the second, the LTE resources were partitioned and shared in a hybrid manner, reserving some RBs for H2H users to provide them with a guaranteed bit rate (GBR) while also guaranteeing that the high-priority M2M traffic does not enter a starvation state. The results showed a significant improvement in system performance in terms of average utilization, number of resources used, loss ratio, and average delay when the aggregator was used. However, the first scheme is limited in its ability to isolate H2H performance from the negative effects of increasing MTC traffic. This limitation was alleviated in the second scheme, whose results showed that the QoS of the H2H users was maintained while the data rate of the MTCDs increased. Although the proposed schemes provide significant improvements in system performance, new designs for data aggregators should exploit emerging technologies such as SDN, fog computing, and network virtualization to build smart gateway aggregators in which data analysis and resource allocation can be performed with greater flexibility. We encourage researchers to combine our results with these technologies to design more trusted and adaptive data aggregation schemes.

APPENDIX A
See Table 4.
A Computational Approach towards the Microscale Mouse Brain Connectome from the Mesoscale
The wiring diagram of the mouse brain is an indispensable foundation for research in basic and applied neurobiology, and it is also essential as a structural basis for computational simulation of the brain. Different scales of the connectome give us different hints and clues for understanding the functions of the nervous system and how it processes information. However, in contrast to the macroscale and the most recent mesoscale mouse brain connectome studies, no complete whole-brain microscale connectome is available, owing to limits on the scalability and accuracy of automatic recognition techniques. Connectivity data at different scales are comprehensive descriptions of the whole brain at different levels of detail; hence, connectivity results at neighboring scales may help to predict one another. Here we report a computational approach that brings the mesoscale connectome a step closer to the microscale, from the perspective of the distributions of neurons, synapses, and network motifs, by combining mesoscale connectivity data with facts from microscale anatomical experiments. These attempts constitute a step towards a microscale mouse brain connectome, given that detailed microscale connectome results are still far from being produced owing to the limitations of current nanoscale 3-D reconstruction techniques. The generated microscale mouse brain will play a key role in understanding the behavioral and cognitive processes of the mouse brain. In this paper, a conversion method that yields the approximate numbers of neurons and synapses at the microscale is proposed, tested on sub-regions of the Hippocampal Formation (HF), and generalized to the whole brain. As a further step towards understanding the microscale connectome, we propose a microscale motif prediction model to generate insight into the microscale structure of different brain regions from a network motif perspective. Correlation analysis shows that the predicted motif distribution is highly relevant to the real anatomical brain data at the microscale.

Introduction
Identifying the structural architecture of the brain has been one of the most important and challenging tasks in the investigation of the brain and in neuroscience. The connectome of the brain is the structural foundation for, and provides insights towards, a deeper understanding of neural networks and neural functions. It also reveals genetic and evolutionary properties of brain organization across species. Small-world connectivity and characteristic motif distributions of neural networks have been found in the brains of different species, from the drosophila brain to the human brain [Achard et al (2006)]. Experiments have shown that various types of motif support particular functional properties of the network; for example, the three-node feed-forward loop motif plays an important role in information abstraction [Mangan and Alon (2003)].
Despite the fundamental and important role of neuronal connectivity in brain and neuroscience research, our current understanding of it is far from complete. Whole-brain connectivity can be roughly divided into multiple scales, namely the macroscale, the mesoscale, and the microscale. Connectivity at different scales describes different principles of the organization of brain building blocks at different levels. The connectomes at different scales share some mutual characteristics (e.g., the small-world phenomenon and similar motif distributions), although their interrelationships are complex.

The macroscale connectome of the brain describes the connectivity of brain building blocks at a coarse level, usually from one brain region to another. It can be inferred by functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI), which predict the functional connectivity of the brain by detecting changes in blood flow and the restricted diffusion of water, respectively [Friston et al (1998), Assaf and Pasternak (2008)]. The wiring diagram of the macroscale connectome of the whole brain has been built and used in many areas of neurobiological research and application.

The microscale connectome of the brain aims to describe the connectivity of brain building blocks at the neuron and synapse level. Automatic microscopic reconstruction at the nanometer scale (e.g., stimulated emission depletion, STED for short) is a supporting technique for building the microscale connectome of the brain. Nevertheless, owing to the scalability issues of automatic recognition and 3-D reconstruction, there is no microscale whole-brain connectome atlas for mammals.

The mesoscale connectome lies between the macroscale and the microscale: it is finer than the scale of large brain regions and coarser than the scale of neurons and synapses. The connectivity of sub-regions can be described at this level. One of the most representative results at this level is the mesoscale connectome of the mouse brain based on the enhanced green fluorescent protein (EGFP) technique [Oh et al (2014)].

The microscale connectome is essential and unique because it shows the basic wiring principles at the neuron level. The effort to move from the mesoscale to the microscale can be considered a step towards understanding the behavioral and cognitive processes of the mouse brain before real and complete data emerge from synaptic-level neuroanatomical investigations.
the numbers of neurons and synapses in some identified brain building blocks measured by anatomical processing) at the microscale to predict the numbers of neurons and synapses in other regions. In addition, anatomical network motif data from 54 brain regions are collected and used to train a microscale motif prediction model that can predict the motif distribution of a specific region at the neuronal and synaptic level. The newly generated microscale connectome, which contains the numbers of neurons and synapses and the microscale motif distribution, will be verified against real anatomical data from future experiments.

Section 2 introduces the current progress of imaging methods at different scales; a mesoscale connectome atlas produced by the Allen Institute for Brain Science is selected as the basis of our investigation. In Section 3, anatomical data from sub-regions of the Hippocampal Formation (HF) are used to validate the soundness of the proposed conversion method. In Section 4, a model for microscale motif distribution prediction is proposed and validated. Section 5 gives a statistical degree analysis of the predicted microscale atlas. Section 6 gives a brief conclusion.

Related Works
With the advancement of new techniques and equipment, attempts at mesoscale and microscale neuron imaging of the mouse brain have been conducted and have accelerated our understanding of the structure and function of neural circuits.

At the microscale, from the structural perspective, electron microscopy enables only relatively small-scale reconstruction and observation of brain anatomy [Sporns et al (2005), Osten and Margrie (2013)]. From the functional perspective, at this scale, functional calcium imaging methods are used to understand the relationship between cognition and neuron activities, still at a very small scale [Stosiek et al (2006)]. Large-scale (including whole-brain) structural and functional connectivity at the microscale is one of the most challenging investigations in brain and neuroscience research [Kasthuri et al (2015)].
At the mesoscale, several investigations of the mouse brain connectome have already been made. One uses the Golgi silver impregnation method [Rakic (2006)]; an even more efficient approach is the neuroanatomical-tracer method, which has played an important role in measuring the connectivity of the sensory, motor, and other subsystems [Felleman and Essen (1991), Rocklanda and Pandyaa (1979)]. Compared with these traditional, time-consuming methods, a quicker way to map point-to-point connections between two brain building blocks is the method of anterograde and retrograde tracers [Bohland et al (2009)]. This method uses two approaches to generate a three-dimensional mouse brain atlas: the first is light-sheet fluorescence microscopy of brain tissue after chemical processing [Maizel et al (2011)], and the second is the integration of microscopy with tissue sectioning (e.g., line-scan imaging or two-photon microscopy [Ragan et al (2012)]). The main characteristic of the latter method is the mechanical removal of the brain tissue after mosaic imaging of the upper tissue, so that the tissue to be imaged always lies at the top, facing the camera, and high-resolution cameras or lenses can be used to achieve high-standard images. The instruments can be serial two-photon tomography [Ragan et al (2012)], knife-edge scanning microscopy, or fluorescent micro-optical sectioning tomography [Maniadakis and Trahanias (2003), Seress (1988)].

The mesoscale mouse brain connectome atlas from the Allen Institute for Brain Science uses EGFP to measure the projections of axons from 213 mouse brain sub-regions (e.g., V1 and V4 in the visual cortex, CA1 in the hippocampus, and POL in the thalamus) covering the whole brain [Oh et al (2014)]. The atlas is calculated from 469 injection experiments and is one of the state-of-the-art atlases [Ragan et al (2012)]. The atlas contains 213 × 213 values, each of which represents the strength of the connectivity between two sub-regions.

In this paper, we combine the mesoscale atlas with other anatomical experimental results to make a predictive analysis of the neuron, synapse, and motif distributions, bringing us a step closer to a microscale mouse brain atlas.

From Mesoscale to Microscale Connectome
A microscale connectome atlas should identify the number of neurons and synapses in each sub-region and the level of convergence and divergence of each cell in the network. Even though a detailed microscale atlas has not yet been acquired by microscopic reconstruction, with the high-resolution mesoscale mouse brain atlas and sufficiently detailed microscale data for some sub-regions (e.g., the approximate numbers of neurons and synapses for specific sub-regions from anatomical experiments), we can bring the mesoscale connectome a step closer to the microscale connectome. However, since different sub-regions have different cell types and cell densities, our attempt is only a prototype that shows the soundness and importance of the conversion method; as more detailed anatomical results are obtained experimentally, a more accurate microscale atlas can be produced with the proposed conversion method.
The essential idea that bridges the mesoscale and microscale connectomes is that the EGFP method obtains projection weights, which are by nature weights of connections among clusters of neurons belonging to different regions (or to the same region, for interconnections). Hence, the projection weights at the mesoscale can be converted into numbers of synapses at the microscale with the help of accumulating anatomical experimental results. In addition, by using different kinds of EGFP methods, different types of connectivity (i.e., excitatory or inhibitory synapses) can be obtained at the mesoscale.

To allow comparison with the prediction results, we select some sub-regions for which partial detailed microscale information is already available from anatomical experiments. Compared with other regions, most of the complex intrinsic wiring diagram of the hippocampus has been increasingly refined over more than a hundred years; hence, we select HF for verification. We separate the conversion procedure into four major steps.

Providing Partial Microscale Anatomical Information
Following the first step, we collect three main sets of anatomical experimental results, covering the hippocampal formation, the visual cortex, and parts of the thalamus. Taking the hippocampal formation as an example, it has a six-layered architecture comprising five distinct sub-regions: the Cornu Ammonis areas one to three (CA1-CA3), the Subiculum (Sub), and the Dentate Gyrus (DG). A sketch of the mesoscale connections among the different sub-regions of HF and EC is shown in Fig 2 (integrated and refined from [Cutsuridis et al (2010), Treves and Rolls (1992)]). The EC contains two parts (i.e., MEC and LEC) and six layers (i.e., EC-I to EC-VI). The perforant pathway (PP) and the trisynaptic pathway are the two main projections from EC to HF, and the output regions from HF back to EC are CA1 and Sub [Treves and Rolls (1992)]. In addition, CA3 has a strong recurrent network, which distinguishes it from the other sub-regions in HF. The average number of synapses per cell converging on CA3 is 3,750 from EC and 5,500 from CA3 [Cutsuridis et al (2010)]. The average synapse number of a single cell (in EC, DG, or CA3; here we do not distinguish cell types) projecting to CA3 cells is 4,600 and 6,000, respectively [Treves and Rolls (1992), Witter (2010)].

Converting from the Mesoscale Connectome to the Microscale Connectome
Up to now, there has been no detailed atlas of the numbers of neurons and synapses in each sub-region of the whole mouse brain. The basic idea of this step is to provide a concrete method to predict them; to make the method easier to follow, we introduce it with a concrete example. Table 1 presents the mesoscale connectivity strengths in HF and EC; the values in the table are the voxel strengths measured by EGFP in [Oh et al (2014)]. CA3 is one of the sub-regions with sufficient anatomical detail about the number of neurons and recurrent-type synapses, so we select it as the sample region in Equation 1. Based on the average number of synapses in the sample sub-regions, we can calculate the approximate total number of synapses in each region with Equation 1.
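The printed form of Equation 1 is not recoverable from the extracted text. One plausible form, consistent with the definitions that follow and with the CA3 example (where the recurrent weight W_CA3,CA3 anchors a synapses-per-unit-weight factor), is the following; the direction of the summed weights (incoming rather than outgoing) is our assumption:

$$\mathrm{Syn}_{sub}(i) \;\approx\; \Big(\sum_{j} W_{j,i}\Big)\cdot \frac{\mathrm{Neu}_{s}\cdot \mathrm{Syn}_{s}}{W_{s,s}} \qquad (1)$$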
Here, Syn_sub(i) denotes the total number of synapses of the target region i; W_i,j denotes the connectivity weight from region i to region j in the mesoscale mouse brain atlas (e.g., some weights of sub-regions in HF are shown in Table 1; weights of other regions can be found in [Oh et al (2014)]); W_s,j is the connectivity weight from the sample sub-region to other regions (e.g., W_CA3,CA3 is the recurrent value and equals 0.116, as shown in Table 1); Neu_s is the number of neurons in the sample areas (hippocampal formation, visual cortex, or thalamus), taken from anatomical experiments; and Syn_s is the average number of synapses of cells in the sample areas (e.g., 6,000 for CA3, as shown in Fig 3 [Treves and Rolls (1992), Witter (2010)]).

Based on the proposed method and the mesoscale connectome atlas in Table 1, we can then obtain the approximate synapse distribution of the whole mouse brain. Note that this is still a mesoscale distribution of the synapses over the sub-regions. In order to predict, at the microscale, the number of neurons and the number of synapses per neuron in each sub-region of the mouse brain, two methods are tried for estimating the number of neurons, as defined in Equations 2 and 3. Equation 2 corresponds to the first method and is based on the idea that the number of synapses in each sub-region is the sum of the synapses of all kinds of neurons; here, Syn_sub(i) denotes the approximate total number of synapses in the sub-region, and Neu_All denotes the total number of neurons. Equation 3 presents the second method, which obtains the number of neurons from the proportional voxel size of the target sub-region.
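The printed forms of Equations 2 and 3 were likewise lost in extraction. Plausible reconstructions consistent with the descriptions above, where V(i) denotes the voxel volume of region i (our notation), are:

$$\mathrm{Neu}^{(1)}(i) \;\approx\; \mathrm{Neu}_{All}\cdot\frac{\mathrm{Syn}_{sub}(i)}{\sum_{k}\mathrm{Syn}_{sub}(k)} \qquad (2)$$

$$\mathrm{Neu}^{(2)}(i) \;\approx\; \mathrm{Neu}_{All}\cdot\frac{V(i)}{\sum_{k}V(k)} \qquad (3)$$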
Fig 4(a) and Fig 4(b) present the percentages of the numbers of neurons in the 213 sub-regions of the mouse brain obtained by the two methods. The cosine similarity between the two distributions is 0.828, which shows the high consistency of the two methods. Using this approach, we obtain the percentage of neurons in each sub-region at the whole-brain scale.

Verifying the Conversion Results
Comparing the predicted data with realistic data from anatomical experiments is a reasonable way to verify the conversion results.

The verification from the sub-regions in HF
As Table 2 shows, quantitative anatomical data on the number of neurons and the ratio of excitatory to inhibitory synapses at the microscale in HF have been measured [Insausti et al (1998)]. The predicted numbers of neurons and synapses in the sub-regions of HF and EC based on the two methods are shown in Table 3 and Table 4. Compared with the anatomical experimental results in Table 2, the numbers of neurons predicted for each sub-region of HF by the two methods are generally consistent with the anatomical ones. Note that, owing to the technical limitations of EGFP, many of the connectivity strengths within or among sub-regions are not visible (i.e., have the value 0); for these, the average number of synapses per neuron in the specific region cannot be calculated with the proposed method. However, some of the blank results can be refined using domain knowledge: in the mammalian brain, almost all regions have interconnections among the neurons within the region, so for values marked 0 for self-connections, the average number of synapses for the region (e.g., CA1 in Table 3 and Table 4) is assigned the average value of synapses calculated from the whole mouse brain data (namely, 2,800).

Using the same methods, we generated the average number of synapses per neuron in all 213 regions of the mouse brain. The overall predicted synapse number for the whole mouse brain is of the same order of magnitude as that of the real house mouse brain. The predicted numbers of synapses in the different regions support the soundness of the proposed computational approach for converting mesoscale connectivity weights into a microscale synapse distribution.

The Verification from Other Sample Regions
Since the number of neurons in each sub-region of the mouse brain has not yet been measured anatomically, we verify the predictions of the numbers of neurons and synapses by comparing the predicted values in the sample regions with the biological experimental evidence. The similarity is calculated based on Equation 4. The similarity comparisons of the predicted and anatomical numbers of neurons are shown in Fig 5(a) and Fig 5(b). Most of the similarity values are above 50%, showing that the two results are weakly consistent with each other. Because this attempt uses the anatomical numbers of neurons in only three sample regions, we expect that adding more anatomical information to the conversion method will yield higher accuracy.
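The exact form of Equation 4 is not recoverable from the extracted text; since the paper uses cosine similarity elsewhere (the 0.828 figure above), a cosine-based comparison is one plausible candidate. A minimal sketch:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two regional count vectors (e.g., predicted
    vs. anatomical neuron numbers over the sample regions)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```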
Region Specific Microscale Motif Distribution Prediction
Most networks that can be represented as graphs can have their properties described using network motifs [Sporns and Kotter (2004)], and three-node network motifs are commonly used for analyzing complex networks [Mangan and Alon (2003)]. Although detailed microscale network structures are not yet available for each sub-region, investigating the motif distribution at the synaptic level can deepen our understanding of the network structure of the brain at the microscale. Since the motif distributions at different scales share similarities to a certain degree, in this paper we try to establish the link between the mesoscale connectome and the microscale motif distribution.

A three-layered neural network is built to predict motif types at the microscale, as shown in Fig 6. The input layer contains 213 neurons (corresponding to the 213 mouse brain regions at the mesoscale), each receiving one of the 213 connectivity strengths involving the region to be predicted. The hidden layer contains 500 neurons, and the output layer contains 13 neurons corresponding to the 13 types of three-node motifs. The output of the network is the predicted motif distribution for a specific region. The network is trained with data from two sources: (1) the mesoscale connectivity strengths of CA1 and CA3 to the 213 regions of the mouse brain as inputs, with the microscale motif distributions in the mouse obtained by the network structure prediction method as outputs [Zhang et al (2015)]; and (2) the mesoscale connectivity strengths of 46 regions from the cat cortex and the macaque cortex to the 213 corresponding regions as inputs [Sporns et al (2007)], with the network motif distributions of the cat and macaque cortex as outputs (we use data from the same regions of other mammalian brains, since mammalian brains generally share many structural similarities) [Sporns et al (2007)]. On the one hand, there is no ground truth for the microscale connectome and motif distribution of the cat and monkey brains; on the other hand, the mesoscale motif distribution provides partial evidence for the microscale, since mesoscale connections are also established by specific connections among neurons from different regions, and the motifs at the mesoscale and microscale are similar to some extent. In total, the training data cover the motif distributions of 48 regions (CA1 and CA3 in the mouse brain, plus another 46 brain regions in the cat and macaque brains) organized in 112 groups of data (i.e., 20 groups for CA1 and CA3 in the mouse brain and 92 groups from the cat and macaque cortex). If we consider the mesoscale motif distribution from the cat and macaque brains as a possible version of the microscale, then after cross-validation (90% for training and 10% for prediction, repeated 10 times) the accuracy is 91.6%, which indicates that the proposed method works well for predicting the motif distribution.
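The described 213-500-13 network and the 90/10 cross-validation can be sketched with standard tooling. This is an illustrative stand-in, not the authors' code: the placeholder arrays stand for the 112 training groups, and scikit-learn's MLPRegressor is our choice of implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import ShuffleSplit, cross_val_score

# X: 112 groups x 213 mesoscale connection strengths (placeholder random data);
# Y: 112 groups x 13 three-node motif frequencies, normalized to distributions.
rng = np.random.default_rng(0)
X = rng.random((112, 213))
Y = rng.random((112, 13))
Y /= Y.sum(axis=1, keepdims=True)

model = MLPRegressor(hidden_layer_sizes=(500,), max_iter=2000, random_state=0)
cv = ShuffleSplit(n_splits=10, test_size=0.1, random_state=0)  # 90/10, 10 times
print(cross_val_score(model, X, Y, cv=cv))  # per-split scores (R^2 by default)
```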
With the high accuracy of the prediction model, we apply it to the whole mouse brain and generate a predicted motif distribution for each of the 213 regions. As shown in Fig 10, we obtain the predicted distributions of the different motifs (y axis) across the whole set of mouse brain regions (x axis). This result offers hints for the analysis and construction of structural and functional whole mouse brain models. Here we introduce by far the largest microscale cortical neuron connectome of the mouse brain (to the best of our knowledge), from [Lee et al (2016)], to validate the prediction. Owing to the scalability issue of synaptic-level reconstruction, the largest microscale mouse brain cortical neuron connectome reported in the literature so far is one containing 201 neurons and 1,278 synapses from V1 of the mouse brain [Lee et al (2016)]. Although this connectome covers only the interconnections within these neurons, it still partially reflects the structural properties of the most realistic microscale connections. In their motif distribution analysis, [Lee et al (2016)] reported that only 4 of the 13 types of three-node motifs were observed in the connectome (which is rational, since the connectome covers only how the 201 neurons are interconnected), namely motifs No.1, No.2, No.4, and No.5 in Fig 6. The order and frequency of the observed motif distribution is No.4 (1918) > No.2 (430) > No.1 (347) > No.5 (32), while the predicted motif distribution order and ratios for V1 from the proposed model follow the sequence No.4 (0.17) > No.2 (0.072) > No.1 (0.04) > No.5 (0.01). From the ordering perspective, the predicted results are thus consistent with the real anatomical data, which provides an initial validation of the precision of the microscale motif prediction model. We also want to discuss why the predicted values agree with the real data in their ordering while still differing in the motif frequency ratios. The main reason may be that the real anatomical data are interconnected motifs within 201 neurons from a specific part of V1 (somewhat partial and local, owing to current technological bottlenecks), whereas our predicted results describe the motif distribution of the whole V1 area. If more comprehensive microscale connectomes become available in the future, the predicted results and the real anatomical data can be compared in more detail. Although the proposed microscale whole-brain predictive motif distribution model is still in a preliminary phase, it gives us a perspective and a sketch for the analysis and construction of structural and functional whole mouse brain models at the synaptic level.
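The order-level agreement noted above can be quantified with a rank correlation. A small check using the counts and ratios quoted in the text (Spearman's rho equals 1.0 when the orderings match exactly):

```python
from scipy.stats import spearmanr

# Observed motif counts in the 201-neuron V1 connectome [Lee et al (2016)]
observed = {"No.1": 347, "No.2": 430, "No.4": 1918, "No.5": 32}
# Predicted motif ratios for the whole V1 area from the proposed model
predicted = {"No.1": 0.04, "No.2": 0.072, "No.4": 0.17, "No.5": 0.01}

keys = sorted(observed)
rho, _ = spearmanr([observed[k] for k in keys], [predicted[k] for k in keys])
print(rho)  # 1.0: identical ranking of the four motif types
```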
Conclusion
In this paper, a computational approach for constructing an initial microscale mouse brain connectome from the perspective of neuron, synapse, and neuron motif distributions is proposed. The mesoscale connectome data and some microscale anatomical data are combined to take a step towards a microscale connectome through predictions of the distributions of neurons, synapses, and microscale neuron motifs. A degree distribution analysis is conducted from various perspectives to explore the characteristics of the connectome at the microscale. The proposed approach also has the potential to be used for producing microscale connectomes of other mammalian brains.

The structural connectome of the brain is a basis for the functional connectome. The structural connectome is certainly not the only determinant of brain function; nevertheless, it provides the physical basis for information transmission among different building blocks at multiple scales and is hence essential as structural support for cognitive function. We believe the relationship between the structural and functional connectomes will become much clearer as both become more complete. This is why this paper aims to bridge the gap between the mesoscale and microscale structural connectomes of the mouse brain, to support future comparative studies of structural and functional connectomes at multiple scales.

In the future, we plan to incorporate other forthcoming experimental data to refine this study and make the microscale connectome more realistic, feasible, and useful in various disciplines and application domains (e.g., brain simulation and brain-inspired intelligence models [Liu et al (2016)]).

Acknowledgement
This study was funded by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB02060007) and the Beijing Municipal Commission of Science and Technology (Z161100000216124).

Figure 1. The location and connectivity matrix of sub-regions within the hippocampal formation (1), part of the visual cortex (2), and the thalamus (3) in the mouse brain.
Figure 4. The distribution of the percentage of neurons in the 213 mouse brain regions by the two methods.
Figure 5. The similarity comparison of the predicted and anatomical numbers of neurons.
Figure 6. The 13 types of motifs (a) and the three-layered classifier for motif type distribution analysis (b).
Figure 8. The 20 groups of motif distribution in CA1 of the mouse brain.
Figure 10. The predicted microscale motif distribution of the 213 regions in the mouse brain.

Since the microscale connectome focuses on synaptic connections within and among different brain regions, it is natural and essential to investigate the distribution of synaptic connections, as it reveals the structural characteristics of the mouse brain at the microscale. Based on the generated microscale connectome, the distribution analysis of synapses in both long-range projections and short-range projections (synaptic connections within the same region) is shown in Fig 11 to Fig 14. Fig 11 (left) analyzes the in-degree synaptic distribution of long-range projections for the mouse brain (including 213 regions); the in-degree describes the number of incoming connections a region receives.

Figure 11. In-degree distribution of long-range projections among different brain regions.
Figure 12. Out-degree distribution of long-range projections among different brain regions.
Figure 13. Degree distribution of long-range projections among different brain regions.
Figure 14. Synaptic degree distribution of projections within the same brain region.
Table 1. Weights of EGFP-labeled axons between the sub-regions of HF and EC (values extracted from [Oh et al (2014)]).
Table 3. The predicted number of synapses in the hippocampus by method one.
Table 4. The predicted number of synapses in the hippocampus by method two.
Thoracic Endovascular Aortic Repair for Aortoesophageal Fistula after Covered Rupture of an Aortic Homograft
A 63-year-old woman underwent replacement of the aortic root, ascending aorta, and partial arch due to Type A aortic dissection. Shortly thereafter, replacement of the distal aortic arch and descending aorta was performed. Three years later, the patient developed an aortoesophageal fistula (AEF), resulting in re-replacement of the distal aortic arch and proximal descending aorta with a cryopreserved aortic homograft. Six weeks post-discharge, the patient was readmitted due to recurrent AEF. A thoracic endovascular stent graft was implanted to cover the aortic rupture, followed by correction of an esophageal lesion. The patient was monitored closely over time.

Introduction
Although it is a rare clinical condition, aortoesophageal fistula (AEF) poses therapeutic challenges because of the high rates of morbidity and mortality associated with surgical management. Therefore, less invasive approaches that reduce perioperative mortality have been evaluated, with special attention given to thoracic endovascular aortic repair (TEVAR). However, this technique has important limitations in treating AEF, mainly due to a high risk of graft contamination. Here, we describe a case of recurrent AEF treated with TEVAR stenting of a cryopreserved aortic homograft replacement of the aortic arch and a Hemashield prosthesis replacement of the descending aorta.

Case Presentation
A 63-year-old woman underwent emergency replacement of the aortic root with a biological valve conduit (Medtronic Freestyle aortic root, Model 995, size 23 mm, Medtronic Inc., Minneapolis, MN) as well as ascending aorta and partial arch replacement (Hemashield prosthesis, 26 mm) due to Type A aortic dissection. Two months later, the patient presented with a rapidly enlarging false lumen of the remaining aortic arch and proximal descending aorta. Replacement of the distal aortic arch (one-branch Hemashield prosthesis, 30 mm) and descending aorta (Hemashield prosthesis, 22 mm) was performed under circulatory arrest via a left thoracotomy. The postoperative course was uneventful, and the patient was discharged on postoperative day 21. Three years later, the patient presented with recurrent hematemesis and anemia. Esophagogastroduodenoscopy (EGD) identified an esophageal ulcer with evidence of slight ongoing bleeding. Computed tomography (CT) confirmed the presence of an AEF (Figure 1A). An emergency re-replacement of the distal aortic arch and proximal descending aorta was performed with a cryopreserved aortic homograft (CryoLife, Kennesaw, GA; 25 mm). The adjacent esophageal lesion was closed during circulatory arrest with interrupted sutures. An endoluminal esophageal stent (28/10 mm, Leufen Medical GmbH) was inserted at the end of the procedure. The stent was removed 2 months later, and control esophagogastroscopy demonstrated ulcerations with no visible fistula. Six weeks post-discharge, the patient was readmitted due to massive hematemesis. Recurrent AEF was suspected, and the patient underwent emergency esophagogastroscopy. Active bleeding at the primary AEF location was observed, and a Sengstaken-Blakemore tube was inserted. A CT scan was suspicious for a new aortic rupture at the previous homograft site (Figure 1B).
An interdisciplinary team meeting resulted in the decision to place a TEVAR stent over the aortic rupture (32/32/180, Valiant Captivia; Medtronic, Santa Rosa, Calif.), which was performed successfully without incident (Figure 2). After stabilization, the patient underwent mediastinal debridement, esophageal resection, and a gastric pull-up procedure with a cervical anastomosis (end-to-end, double-row suture with 3-0 Vicryl, Ethicon, Somerville, NJ). During the recovery phase, bronchoscopy revealed a 5-mm perforation of the trachea and a large amount of surrounding pus. Mediastinitis (secondary to Enterococcus faecalis and Streptococcus anginosus) was diagnosed and treated with local debridement, vacuum-assisted closure therapy, and broad-spectrum antibiotics. The patient recovered well and was put on long-term antibiotics to prevent recurrent septicemia. A CT angiogram 5 months later confirmed satisfactory position of the implanted stent graft and showed no signs of endoleak or infection; antibiotic therapy was therefore discontinued. Ten months after the last admission, the patient experienced persistent high-grade fever. Blood tests revealed an elevated leukocyte count and a highly elevated C-reactive protein level. CT revealed air around the stent grafts, suggesting infection (Figure 3). After being informed that any further operative treatment (removal of the infected stent grafts and replacement of the descending aorta, or extra-anatomic bypass grafting) was likely to prove fatal, the patient and her family decided to refuse aggressive treatment, and the woman was discharged on analgesia and broad-spectrum antibiotics for palliative care.

Discussion
Patients who undergo surgery for aortic dissection often have residual dissected aortic tissue that may become a source of late complications. Repeat surgery is required in approximately 12-30% of patients, usually due to extension of the dissection, aneurysm formation, or infection [1]. The very rapid progression of the distal arch and proximal descending aorta in our patient was caused by a patent false lumen. CT scanning and intraoperative findings excluded infection of the prosthesis and the presence of a septic false aneurysm. During follow-up, our patient developed an AEF at the site of the anastomosis between the original aortic arch replacement graft and the subsequent distal arch/descending aortic replacement graft. Secondary AEF following surgery is uncommon (4.8%), with 50% of cases occurring after aortic surgery [2]. The mechanism of AEF after conventional surgery involves rupture of the prosthesis, dehiscence of the repair, direct erosion of the graft into the esophagus, or local infection. Our intraoperative findings suggested that the cause of the AEF was dehiscence of the repair due to infection, which caused aortic rupture and secondary penetration into the esophagus. On the other hand, previous replacement of the descending aorta entails occlusion of the esophageal arteries arising directly from the aorta, with impaired tissue healing and possible esophageal ischemia [3]. To treat the first AEF, we used a cryopreserved homograft to replace the distal aortic arch. Recent studies of AEF and aortoenteric fistulae demonstrate the superiority of cryopreserved aortic allografts because they are more resistant to infection [4].
However, homografts are not always immediately available in emergency situations. An alternative for orthotopic vascular reconstruction is the use of self-made xenopericardial tube grafts constructed from a patch [5]. Recently, investigators from the European Registry of Endovascular Aortic Repair Complications presented the results of different treatment strategies for AEF following TEVAR at the 27th European Association for Cardio-Thoracic Surgery Annual Meeting in Vienna and concluded that radical esophagectomy and extensive aortic reconstruction is the only durable approach for this fatal complication. When our patient re-presented with recurrent AEF, our interdisciplinary team decided against a fourth aortic arch operation because of the patient's generally poor condition and the excessive operative risk. We therefore opted to perform TEVAR as a life-saving intervention, followed by esophageal resection and a gastric pull-up procedure in one stage with a cervical anastomosis, together with long-term antibiotic therapy. This treatment strategy achieved immediate control of the aortic bleeding and complete regression of the recurrent AEF, but may have increased the subsequent risk of infection. Despite correction of the esophageal lesion, the efficacy of our therapy was limited by mediastinal infection, which required multiple surgical interventions. Prolonged postoperative antibiotic therapy is advocated as a key component of success, but there is currently no consensus on the appropriate duration of antibiotics in this group of patients [6]. Most commonly, parenteral antibiotics are given for 2 to 8 weeks post-procedure, but whether lifelong oral antibiotics are necessary is debatable [6]. Most recently, Canaud et al. reviewed the outcomes of TEVAR for AEF and reported that prolonged antibiotic treatment (i.e., longer than 4 weeks) was associated with significantly lower aortic mortality [7]. In our opinion, TEVAR for AEF should be used only as a bridge to definitive open aortic surgery or as part of a combined treatment with mediastinal debridement, mediastinal drainage, and/or esophageal resection, particularly in patients in poor general condition. For long-term durability, it is necessary to resect the aorta and esophagus simultaneously to prevent prosthesis re-infection [8]. Based on our experience, stent graft infection can occur many months after the procedure; thus, prolonged antibiotic therapy and lifelong surveillance are mandatory in these patients regardless of symptoms or clinical signs of infection. However, additional clinical reports focusing exclusively on recurrent AEF are required to determine the optimal management strategy for this challenging problem.
Antiproliferative and Proapoptotic Effects of Phenanthrene Derivatives Isolated from Bletilla striata on A549 Lung Cancer Cells
Lung cancer continues to be the world's leading cause of cancer death, and the treatment of non-small cell lung cancer (NSCLC) has attracted much attention. The tubers of Bletilla striata are regarded as "an excellent medicine for lung diseases" and as the first choice for treating several lung diseases. In this study, seventeen phenanthrene derivatives, including two new compounds (1 and 2), were isolated from the tubers of B. striata. Most compounds showed cytotoxicity against A549 cells. An EdU proliferation assay, a cell cycle assay, a wound healing assay, a transwell migration assay, a flow cytometry assay, and a western blot assay were performed to further investigate the effect of compound 1 on A549 cells. The results showed that compound 1 inhibited cell proliferation and migration and promoted apoptosis in A549 cells. The mechanisms may correlate with the regulation of the Akt, MEK/ERK, and Bcl-2/Bax signaling pathways. These results suggest that the phenanthrenes of B. striata may be important and effective substances in the treatment of NSCLC.

Introduction
For decades, lung cancer has been a considerable health issue owing to its high incidence and fatality rates [1,2]. Indeed, lung cancer kills more than one million people worldwide every year, with 80-90% of cases being related to NSCLC [3]. Although our understanding of targeted therapies, immunotherapy, and the genetic alterations of cancers is evolving, the cure rate for NSCLC remains low [4,5]. In recent years, natural products from traditional Chinese medicines have been suggested as potential drugs to treat NSCLC [6,7]. Therefore, the discovery of anti-NSCLC active ingredients from traditional Chinese medicines has become one of the hotspots of modern lung cancer research.

Structure Elucidation
Compound 1 showed IR absorption peaks at 1455 and 1589 cm−1 for aromatic rings and at 3227 cm−1 for hydroxy groups. Its molecular formula was determined as C30H26O6, with 18 degrees of unsaturation, from a (−)-HR-ESI-MS ion peak at 481.1647 [M − H]− (calculated for C30H25O6, 481.1651). The 13C-NMR data of 1 are listed in Table 1.

Cytotoxicity of the Isolates against A549 and BEAS-2B Cells
Our prior research showed that the EtOAc extract of B. striata exhibited significant cytotoxic activity against A549 cells (IC50 = 11.92 ± 0.68 µg/mL), whereas the water and n-BuOH extracts of B. striata exhibited no obvious cytotoxicity (IC50 > 100 µg/mL) [14]. Thus, the cytotoxic effects of the phenanthrenes isolated from the EtOAc extract were investigated in this study. Table 2 presents the cytotoxicity results of the isolates against A549 lung cancer cells; compounds 11 and 15 were not tested because of the limited sample amounts remaining after structure determination. In addition, the cytotoxicity of compound 1 against normal human lung cells (BEAS-2B) was also assessed.
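The molecular-formula assignment above can be cross-checked with a few lines of arithmetic. A minimal sketch (the masses below are standard monoisotopic values; the electron mass of the anion is neglected, matching the reported calculated value):

```python
# Monoisotopic atomic masses (u)
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915}

def dbe(c, h):
    """Degrees of unsaturation (double-bond equivalents) for a CcHhOo
    formula; oxygen does not change the count."""
    return c - h / 2 + 1

def mono_mass(c, h, o):
    return c * MASS["C"] + h * MASS["H"] + o * MASS["O"]

print(dbe(30, 26))                     # 18.0, matching the reported value
print(round(mono_mass(30, 25, 6), 4))  # C30H25O6 ([M - H]-): 481.1651
```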
Although surgery, radiation, and immunotherapy have been used for treating NSCLC, pharmaceutical treatment is of great significance, and innovative drugs are needed. Traditional Chinese medicine (TCM) reflects the profound understanding of the Chinese people regarding life, health, and disease and has a time-honored historical tradition and unique theories and techniques. With the advantages of diverse chemical structures, a wide range of sources, and significant activities, natural products from TCM have been studied for the treatment of NSCLC [29,30]. Therefore, it is meaningful and feasible to find effective natural compounds from TCM to treat NSCLC. The tubers of B. striata, praised as "an effective medicine to treat lung diseases", were the best choice in terms of lung disease treatment according to Shennong's Materia Medica, Essential of Materia Medica and Treasury of Words on Materia Medica. Modern studies have demonstrated that phenanthrene derivatives from B. striata exert significant cytotoxic activities against A549 cells [11,13,31,32]. Therefore, this study explored phenanthrene derivatives from the EtOAc extract of B. striata in terms of cytotoxicity against A549 cells and investigated the preliminary mechanisms. 
As expected, 17 phenanthrene derivatives, including two new compounds (1 and 2), were isolated from the tubers of B. striata. Most of the tested compounds showed cytotoxicity, especially compounds 1, 2, 4, 6, 7, 8, and 13 (IC50 < 10 µM). Moreover, the structure-cytotoxicity relationship was also investigated. In general, the cytotoxic effects of biphenanthrenes (1-4 and 6-8) were much stronger than those of simple phenanthrenes (9, 10, 12, 14, 16, and 17). However, the IC50 value of compound 5 was over 100 µM. A comparison of the results for compound 5 and other biphenanthrenes, especially between compounds 5 and 6, indicated that the introduction of an OMe group at position 8 resulted in a considerable reduction in cytotoxicity. In addition, a comparison of compounds 1 (1,1′-connection) and 4 (1,3′-connection) showed that the manner of the connection between the two dihydrophenanthrene monomers did not significantly affect the cytotoxicity. In the simple phenanthrenes, when compared with compound 9, the introduction of an additional OH (10) or an additional p-hydroxybenzyl (12) at C-1 significantly improved the cytotoxicity. The introduction of another p-hydroxybenzyl at C-6 (14) further notably increased the cytotoxicity. These structure-activity relationships may serve as a reference in the investigation of anti-NSCLC phenanthrenes from B. striata. The novel compound 1, possessing good cytotoxicity, was used to study the preliminary mechanisms. EdU immunofluorescence staining and cell cycle analysis were carried out to detect the antiproliferative effect of compound 1 on A549 cells. The data showed that, at concentrations of 3.13, 6.25, and 12.5 µM, compound 1 significantly inhibited the proliferation of A549 cells and induced cell cycle arrest at the G2/M phase. The checkpoint at the G2/M transition is a crucial regulatory gate during cell-cycle progression, but the cell will die if the cell-cycle checkpoint is lost before the completion of DNA repair [33,34]. In addition, cell migration is an essential step in the metastatic dissemination of cancer cells [35]. Compound 1 exhibited an inhibitory effect with respect to A549 cell migration, as determined by the migration-related assays. Since Akt dominates the growth, cycle, metabolism, and death of cells by regulating various downstream substrates [36], the effect of compound 1 on Akt phosphorylation in A549 cells was investigated. Meanwhile, the MAPKs are important serine/threonine protein kinases that play a crucial role in receiving extracellular signals and transmitting them across the cell membrane [37]. ERK1/2 is an important mitogen-activated factor, which participates in numerous biological processes including both cell proliferation and survival. MEK1/2 is a crucial upstream protein of ERK1/2 [38]. Both of them are important members of the MAPK family, and the MEK/ERK signaling pathway has been proven to be crucial for NSCLC research [39,40]. Thus, the expression ratios of p-MEK/MEK and p-ERK/ERK were evaluated. These results revealed that the Akt and MEK/ERK signaling pathways were involved in the antiproliferative effect of compound 1 on A549 cells. Unlike necrosis, apoptosis is a type of programmed cell death. Apoptosis disorder can cause pathological events. Among them, tumors and autoimmune diseases are representative examples [41]. The flow cytometry results indicated that compound 1 dramatically promoted the apoptosis of A549 cells. 
Thus, the proapoptotic effect of compound 1 was further studied. Bcl-2 and Bax are regarded as important apoptotic regulatory proteins. Bcl-2 promotes cell survival and suppresses cell death, while the effects of Bax are the opposite [42,43]. Our results pointed out that the proapoptotic effect of compound 1 on A549 cells was associated with a decrease in the Bcl-2/Bax ratio. Cytotoxic Activity Assay The purity of the tested compounds was more than 98%. The cytotoxic effects of isolated phenanthrenes on A549 and BEAS-2B cells were examined by MTT experiments as described in our previous report [14]. EdU Proliferation Experiment The cells were digested by trypsin and cultivated in 96-well plates (3.5 × 10^3 cells per well) for 24 h. They were incubated with compound 1 (3.13, 6.25, and 12.5 µM) for 48 h. Next, prepared EdU-labeling solution (10 µM) was used to stain the cells in an incubator at 37 °C for 2 h. The cells were fixed for 15 min with 4% cold paraformaldehyde. After that, Triton X-100 (0.3%) and Click-iT reaction cocktail were successively used to treat the cells. Finally, DAPI staining was applied to counterstain the cells. A Leica DMI3000B inverted fluorescence microscope (Leica, Wetzlar, Germany) was used to capture the fluorescent images. In addition, ImageJ software (version 1.8.0) (National Institutes of Health, Bethesda, MD, USA) was used to count the cells. EdU positive cells (%) = (green EdU-stained cells/blue DAPI-stained cells) × 100. Cell Cycle Analysis According to the manufacturer's instructions, the cells were treated with compound 1 (3.13, 6.25, and 12.5 µM) for 48 h. Then, after fixation with 70% pre-chilled ethanol for 2 h, they were cultured with 500 µL staining solution (RNase A:PI = 1:9) for 1 h at room temperature in the dark. The DNA content was immediately detected with a BD FACSCanto II flow cytometer (BD Biosciences, Franklin Lakes, NJ, USA). Wound Healing Assay Cells were placed in 6-well plates for the wound healing test. The scratches were formed using a sterile 200-µL pipette tip when the cells treated with compound 1 (3.13, 6.25, and 12.5 µM) had grown to 90% confluence. After a 24-h incubation, the images were obtained with the Leica microscope and the scratch area was calculated with ImageJ. Migration rate (%) = [(A0 − A1)/A0] × 100, where A0 represents the scratch area at 0 h, and A1 represents the scratch area at 24 h. Transwell Migration Assay The migratory effect of compound 1 (3.13, 6.25, and 12.5 µM) on A549 cells was further examined using transwell chambers. Briefly, the cells treated with compound 1 were placed into the upper wells, while medium with 10% FBS was added to the bottom wells. After a 24-h incubation, non-migrated cells were removed, and the migrated cells were fixed for 30 min with 4% paraformaldehyde and stained with 0.1% crystal violet. The images and the cell numbers were obtained with the Leica microscope and ImageJ, respectively. Apoptosis by Flow Cytometry Assay The A549 cells were treated with compound 1 (3.13, 6.25, and 12.5 µM), and cell apoptosis was detected by a flow cytometry assay as described in our previous report [14]. Statistical Analysis Data are shown as mean ± standard deviation (SD) with three biological replicates. One-way ANOVA and Tukey's post-hoc test were used to evaluate the significant differences (p < 0.05). Figures were obtained with GraphPad Prism software Version 5.0 (GraphPad Software, Inc., San Diego, CA, USA). 
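The two quantitative read-outs defined above reduce to one-line formulas; the following Python sketch restates them (function names and example counts/areas are hypothetical, not from the paper):

def edu_positive_pct(green_edu_cells, blue_dapi_cells):
    # EdU positive cells (%) = (green EdU-stained / blue DAPI-stained) * 100.
    return green_edu_cells / blue_dapi_cells * 100.0

def migration_rate_pct(a0, a1):
    # Migration rate (%) = [(A0 - A1) / A0] * 100, with A0 and A1 the
    # scratch areas at 0 h and 24 h, respectively.
    return (a0 - a1) / a0 * 100.0

print(edu_positive_pct(132, 415))       # hypothetical counts -> ~31.8%
print(migration_rate_pct(1.00, 0.62))   # hypothetical areas  -> 38.0%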
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27113519/s1. Figures S1-S16: the 1D and 2D NMR, HR-ESI-MS, IR, and UV spectra for the two new compounds; Table S1: the original western blots in three repetitions for Figure 6 in the paper; Table S2: the original western blots in three repetitions for Figure 7 in the paper.
3,084.2
2022-05-30T00:00:00.000
[ "Biology", "Chemistry" ]
End-to-End Rate-Distortion Optimized MD Mode Selection for Multiple Description Video Coding Multiple description (MD) video coding can be used to reduce the detrimental effects caused by transmission over lossy packet networks. A number of approaches have been proposed for MD coding, where each provides a different tradeoff between compression efficiency and error resilience. How effectively each method achieves this tradeoff depends on the network conditions as well as on the characteristics of the video itself. This paper proposes an adaptive MD coding approach which adapts to these conditions through the use of adaptive MD mode selection. The encoder in this system is able to accurately estimate the expected end-to-end distortion, accounting for both compression and packet loss-induced distortions, as well as for the bursty nature of channel losses and the effective use of multiple transmission paths. With this model of the expected end-to-end distortion, the encoder selects between MD coding modes in a rate-distortion (R-D) optimized manner to most effectively trade off compression efficiency for error resilience. We show how this approach adapts to both the local characteristics of the video and to network conditions, and we demonstrate the resulting gains in performance using an H.264-based adaptive MD video coder. INTRODUCTION Streaming video applications often require error-resilient video coding methods that are able to adapt to current network conditions and to tolerate transmission losses. These applications must be able to withstand the potentially harsh conditions present on best-effort networks like the Internet, including variations in available bandwidth, packet losses, and delay. Multiple description (MD) video coding is one approach that can be used to reduce the detrimental effects caused by packet loss on best-effort networks [1][2][3][4][5][6][7]. In a multiple description system, a video sequence is coded into two or more complementary streams in such a way that each stream is independently decodable. The quality of the received video improves with each received description, but the loss of any one of these descriptions does not cause complete failure. If one of the streams is lost or delivered late, the video playback can continue with only a slight reduction in overall quality. For an in-depth review of MD coding for video communications see [8]. There have been a number of proposals for MD video coding, each providing its own tradeoff between compression efficiency and error resilience. Previous MD coding approaches applied a single MD technique to an entire sequence. However, the optimal MD coding method will depend on many factors including the amount of motion in the scene, the amount of spatial detail, desired bitrates, error recovery capabilities of each technique, current network conditions, and so forth. This paper examines the adaptive use of multiple MD coding modes within a single sequence. Specifically, this paper proposes an adaptive MD coder which selects among MD coding modes in an end-to-end rate-distortion (R-D) optimized manner as a function of local video characteristics and network conditions. The addition of the end-to-end R-D optimization is an extension of the adaptive system proposed in [9]. Some preliminary results with this approach were presented in [10]. 
This paper continues in Section 2 with a discussion of the MD coding modes used and the advantages and disadvantages of each. Sections 3 and 4 present an overview of how end-to-end optimized mode selection can be achieved in MD systems. The details of the proposed system are provided in Section 5, and experimental results are given in Section 6. MD CODING MODES A multiple description (MD) coder encodes a media stream into two or more separately decodable streams and transmits these independently over the network. The loss of any one of these streams does not cause complete failure, and the quality of the received video improves with each received description. Therefore, even when one description is lost for a significant length of time, the video playback can continue, at a slight reduction in quality, without waiting for rebuffering or retransmission. Perhaps the simplest example of an MD video coding system is one where the original video sequence is partitioned in time into even and odd frames, which are then independently coded into two separate streams for transmission over the network. This approach generates two descriptions, where each has half the temporal resolution of the original video. In the event that both descriptions are received, the frames from each can be interleaved to reconstruct the full sequence. In the event one stream is lost, the other stream can still be straightforwardly decoded and displayed, resulting in video at half the original frame rate. Of course, this gain in robustness comes at a cost. Temporally subsampling the sequence lowers the temporal correlation, thus reducing coding efficiency and increasing the number of bits necessary to maintain the same level of quality. Without losses, the total bitrate necessary for this MD system to achieve a given distortion is generally higher than the corresponding rate for a single stream encoder to achieve the same distortion. This is a tradeoff between coding efficiency and robustness. However, in an application where we stream video over a lossy packet network, it is not so much a question of whether it is useful to give up some amount of efficiency for an increase in reliability as it is a question of finding the most effective way to achieve this tradeoff. This paper proposes adaptive MD mode selection in which the encoder switches between different coding modes within a sequence in an intelligent manner. To illustrate this idea, the system discussed in this paper uses a combination of four simple MD modes: single description coding (SD), temporal splitting (TS), spatial splitting (SS), and repetition coding (RC), see Figure 1. This section continues by describing these methods and their advantages and disadvantages; see Table 1. 
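The even/odd splitting just described is simple enough to state in a few lines of Python. This sketch (hypothetical helper names, with frames stood in for by integers) shows the partitioning and the two reconstruction cases:

def split_temporal(frames):
    # Description 0 carries the even frames, description 1 the odd frames.
    return frames[0::2], frames[1::2]

def reconstruct(desc0, desc1):
    if desc0 is None or desc1 is None:
        # One description lost: play the other at half the frame rate.
        return desc0 if desc0 is not None else desc1
    out = [f for pair in zip(desc0, desc1) for f in pair]
    if len(desc0) > len(desc1):  # odd-length sequences leave one extra frame
        out.append(desc0[-1])
    return out

frames = list(range(8))
even, odd = split_temporal(frames)
assert reconstruct(even, odd) == frames         # both received: full rate
assert reconstruct(even, None) == [0, 2, 4, 6]  # one lost: half rate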
Single description (SD) coding represents the typical coding approach where each frame is predicted from the previous frame in an attempt to remove as much temporal redundancy as possible. Of all the methods presented here, SD coding has the highest coding efficiency and the lowest resilience to packet losses. On the other extreme, repetition coding (RC) is similar to the SD approach except the data is transmitted once in each description. This obviously leads to poor coding efficiency, but greatly improves the overall error resilience. As long as both descriptions of a frame are not lost simultaneously, there will be no effect on decoded video quality. The remaining two modes provide additional tradeoffs between error resilience and coding efficiency. The temporal splitting (TS) mode effectively partitions the sequence along the time dimension into even and odd frames. Even frames are predicted from even frames, and odd frames from odd frames. Similarly, in spatial splitting (SS), the sequence is partitioned along the spatial direction into even and odd lines. Even lines are predicted from even lines and odd from odd. We chose to examine these particular modes for the following reasons. First, these methods tend to complement each other well, with one method strong in regions where another method is weak, and vice versa. This attribute will be illustrated later in this paper. Secondly, each MD mode makes a different tradeoff between compression efficiency and error resilience. This set of modes spans a wide range of the compression efficiency/error resilience spectrum, from the most efficient single description coding to the most resilient repetition coding. Finally, these approaches are all fairly simple both conceptually and from a complexity standpoint. Conceptually, it is possible to quickly understand where each one of these modes might be most or least effective, and in terms of complexity, the decoder in this system is not much more complicated than the standard video decoder. It is important to note that additional MD modes of interest may be straightforwardly incorporated into the adaptive MD encoding framework and the associated models for determining the optimized MD mode selection. In addition, it is also possible to account for improved MD decoder processing which may lead to reduced distortion from losses (e.g., improved methods of error recovery where a damaged description is repaired by using an undamaged description [1,11]), and thereby affect the end-to-end distortion estimation performed as part of the adaptive MD encoding. OPTIMIZED MD MODE SELECTION Each approach to MD coding trades off some amount of compression efficiency for an increase in error resilience. How efficiently each method achieves this tradeoff depends on the quality of video desired, the current network conditions, and the characteristics of the video itself. Most prior research in MD coding involved the design and analysis of novel MD coding techniques, where a single MD method is applied to the entire sequence; this approach is taken so as to evaluate the performance of each MD method. However, it would be more efficient to adaptively select the best MD method based on the situation at hand. Since the encoder in this system has access to the original source, it is possible to calculate the rate-distortion statistics for each coding mode and select between them in an R-D optimized manner. 
The main question then is how to make the decision between different modes. Lagrangian optimization techniques can be used to minimize distortion subject to a bitrate constraint [12]. However, this approach assumes the encoder has full knowledge of the end-to-end distortion experienced by the decoder. When transmitted over a lossy channel, the end-to-end distortion consists of two terms: (1) known distortion from quantization and (2) unknown distortion from random packet loss. The unknown distortion from losses can only be determined in expectation due to the random nature of losses. Modifying the Lagrangian cost function to account for the total end-to-end distortion gives the following:

$J_i = D_i^{\mathrm{quant}} + E[D_i^{\mathrm{loss}}] + \lambda R_i.$

Here, $R_i$ is the total number of bits necessary to code region i, $D_i^{\mathrm{quant}}$ is the distortion due to quantization, and $D_i^{\mathrm{loss}}$ is a random variable representing the distortion due to packet losses. Thus, the expected distortion experienced by the decoder can be minimized by coding each region with all available modes and choosing the mode which minimizes this Lagrangian cost. Calculating the expected end-to-end distortion is not a straightforward task. The quantization distortion $D_i^{\mathrm{quant}}$ and bitrate $R_i$ are known at the encoder. However, the channel distortion $D_i^{\mathrm{loss}}$ is difficult to calculate due to spatial and temporal error propagation. In [13], the authors show how to estimate expected distortion in a pixel-accurate recursive manner for SD and Bernoulli losses. In the next section, we discuss this approach and the extensions necessary to apply it to the current problem of MD coding over multiple paths with Gilbert (bursty) losses. MODELING EXPECTED DISTORTION IN MULTIPLE DESCRIPTION STREAMS As discussed in Section 3, random packet losses force the encoder to model the network channel and estimate the expected end-to-end distortion. With an accurate model of expected distortion, the encoder can make optimized decisions to improve the quality of the reconstructed video stream at the decoder. A number of approaches have been suggested in the past to estimate end-to-end distortion. The problem was originally considered for optimizing intra/inter decisions in single description streams to combat temporal error propagation. Some early approaches to solving this problem in an R-D optimized framework appear in [14,15]. In [13], the authors suggest a recursive optimal per-pixel estimate (ROPE) for optimal intra/inter mode selection. Here, the expected distortion for any pixel location is calculated recursively as follows. Suppose $f_n^i$ represents the original pixel value at location i in frame n, and $\tilde{f}_n^i$ represents the reconstruction of the same pixel at the decoder. The expected distortion $d_n^i$ at that location can then be written as

$d_n^i = E[(f_n^i - \tilde{f}_n^i)^2] = (f_n^i)^2 - 2 f_n^i\, E[\tilde{f}_n^i] + E[(\tilde{f}_n^i)^2].$

At the encoder, the value $f_n^i$ is known and the value $\tilde{f}_n^i$ is a random variable. So, the expected distortion at each location can be determined by calculating the first and second moments of the random variable $\tilde{f}_n^i$. If we assume the encoder uses full-pixel motion estimation, each correctly received pixel value can be written as $\tilde{f}_n^i = \tilde{f}_{n-1}^j + \hat{e}_n^i$, where $\tilde{f}_{n-1}^j$ represents the pixel value in the previous frame which has been used for motion-compensated prediction and $\hat{e}_n^i$ represents the quantized residual (in the case of intra pixels, the prediction is zero and the residual is just the quantized pixel value). The first moment of each received pixel can then be recursively calculated by the encoder as follows:

$E[\tilde{f}_n^i \mid \text{received}] = E[\tilde{f}_{n-1}^j] + \hat{e}_n^i.$

If we assume the decoder uses frame-copy error concealment, each lost pixel is reconstructed by 
copying the pixel at the same location in the previous frame. Thus, the first moment of each lost pixel is

$E[\tilde{f}_n^i \mid \text{lost}] = E[\tilde{f}_{n-1}^i].$

The total expectation can then be calculated as

$E[\tilde{f}_n^i] = (1 - p)\, E[\tilde{f}_n^i \mid \text{received}] + p\, E[\tilde{f}_n^i \mid \text{lost}], \quad (5)$

where p is the packet loss probability. The calculations necessary for computing the second moment of $\tilde{f}_n^i$ can be derived in a similar recursive fashion. In [16], this ROPE model is extended to a two-stream multiple description system by recognizing the four possible loss scenarios for each frame: both descriptions are received, one or the other description is lost, or both descriptions are lost. For notational convenience, we will refer to these outcomes as 11, 10, 01, and 00, respectively. The conditional expectations of each of these four possible outcomes are recursively calculated and multiplied by the probability of each occurring to calculate the total expectation,

$E[\tilde{f}_n^i] = p(11)\, E[\tilde{f}_n^i \mid 11] + p(10)\, E[\tilde{f}_n^i \mid 10] + p(01)\, E[\tilde{f}_n^i \mid 01] + p(00)\, E[\tilde{f}_n^i \mid 00]. \quad (6)$

Graphically, this can be depicted as shown in Figure 3(a). The first moments of the random variables $\tilde{f}_{n-1}^i$ as calculated in the previous frame are used to calculate the four intermediate expected outcomes, which are then combined together using (6) and stored for future frames. Again, the second moment calculations can be computed in a similar manner. These previous methods have assumed a Bernoulli (independent) packet loss model where the probability that any packet is lost is independent of any other packet. However, the idea can be modified for a channel with bursty packet losses as well. Recent work has identified the importance of burst length in characterizing error resilience schemes, and has shown that examining performance as a function of burst length is an important feature for comparing the relative merits of different error-resilient coding methods [11,17,18]. For this system, we have extended the MD ROPE approach to account for bursty packet loss. Here we use a two-state Gilbert loss model, but the same approach could be used for any multistate loss model including those with fixed burst lengths. We use the Gilbert model to simulate the nature of bursty losses, where packet losses are more likely if the previous packet has been lost. This can be represented by the Markov model shown in Figure 2, assuming p0 < p1. The expected value of any outcome in a multistate packet loss model can be calculated by computing the expectation conditioned on transitioning from one outcome to another multiplied by the probability of making that transition. For the two-state Gilbert model, this idea can be roughly depicted as shown in Figure 3(b). 
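To make the recursion concrete, here is a minimal numpy sketch of the first- and second-moment updates for a single Bernoulli-loss stream with full-pixel motion vectors and frame-copy concealment, together with the Lagrangian mode choice of Section 3. All names (rope_update_bernoulli, mode_stats, etc.) are hypothetical; this is an illustration of the equations above, not the authors' implementation.

import numpy as np

def rope_update_bernoulli(E1_prev, E2_prev, mv, resid, p):
    # E1_prev, E2_prev: per-pixel first/second moments of the decoder
    # reconstruction of frame n-1 (flattened arrays).
    # mv: integer index of the full-pixel motion-compensated predictor.
    # resid: quantized residual, known at the encoder.
    pred1, pred2 = E1_prev[mv], E2_prev[mv]
    # Received (prob. 1-p): f~_n^i = f~_{n-1}^j + e_n^i.
    rec1 = pred1 + resid
    rec2 = pred2 + 2.0 * resid * pred1 + resid ** 2
    # Lost (prob. p): frame-copy concealment, f~_n^i = f~_{n-1}^i.
    E1 = (1.0 - p) * rec1 + p * E1_prev
    E2 = (1.0 - p) * rec2 + p * E2_prev
    return E1, E2

def expected_frame_distortion(f, E1, E2):
    # d_n^i = (f_n^i)^2 - 2 f_n^i E[f~_n^i] + E[(f~_n^i)^2], summed over pixels.
    return float(np.sum(f ** 2 - 2.0 * f * E1 + E2))

def best_md_mode(mode_stats, lam):
    # mode_stats: {mode: (D_quant, E_D_loss, R)}; minimize the Lagrangian
    # cost J = D_quant + E[D_loss] + lambda * R from Section 3.
    return min(mode_stats,
               key=lambda m: mode_stats[m][0] + mode_stats[m][1]
                             + lam * mode_stats[m][2])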
For example, assume $T_A^B$ represents the event of transitioning from outcome A at time n − 1 to outcome B at time n, and $P(T_A^B)$ represents the probability of making this transition. Then the expected value of outcome 11 can be computed as shown in (7):

$E[\tilde{f}_n^i \mid 11] = \sum_{A \in \{11,10,01,00\}} P(T_A^{11})\, E[\tilde{f}_n^i \mid T_A^{11}]. \quad (7)$

The remaining three outcomes can be computed in a similar manner. Due to the Gilbert model, the probability of transitioning from any outcome at time n − 1 to any other outcome at time n changes depending on which outcome is currently being considered. For instance, when computing the expected value of outcome 00, the result when both streams are lost, the probability that the previous outcome was 10, 01, or 00 is much higher than when computing the expected value of outcome 11. Since the transitional probabilities vary from outcome to outcome, it is not possible to combine the four expected outcomes into one value as can be done in the Bernoulli case. The four values must be stored separately for future use as shown in Figure 3(b). Once again, the second moment values can be computed using a similar approach. The above discussion assumed full-pixel motion vectors and frame-copy error concealment, but it is possible to extend this approach to subpixel motion vector accuracy and more complicated error concealment schemes. As discussed in [13], the main difficulty with this arises when computing the second moment of pixel values which depend on a linear combination of previous pixels. The second moment depends on the correlations between each of these previous pixels and is difficult to compute in a recursive manner. We have modified the above approach in order to apply it to the H.264 video coding standard with quarter-pixel motion vector accuracy and more sophisticated error concealment methods by using the techniques proposed in [19] for estimation of the cross-correlation terms E[XY]. Figure 4 demonstrates the performance of the above approach in tracking the actual distortion experienced at the decoder. Here we have coded the Foreman and Carphone test sequences at approximately 0.4 bits per pixel (bpp) with the H.264 video codec using the SD approach mentioned in Section 2. The channel has been modeled by a two-path channel, where the paths are symmetric with Gilbert losses at an average packet loss rate of 5% and expected burst length of 3 packets. The expected distortion as calculated at the encoder using the above model has been plotted relative to the actual distortion experienced by the decoder. This actual distortion was calculated by using 1200 different packet loss traces and averaging the resulting squared-error distortion. As shown in both of these sequences, the proposed model is able to track the end-to-end expected distortion quite closely. Also shown in this figure for reference is the quantization-only distortion (with no packet losses). 
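Under two independent Gilbert paths, the outcome-to-outcome transition probabilities $P(T_A^B)$ factor into per-path transitions. A few lines of numpy make this explicit (a sketch; the symmetric, independent-path assumption matches the experiments below, and the outcome ordering 11, 10, 01, 00 follows the text, with '1' meaning received):

import numpy as np

def gilbert_path_matrix(p0, p1):
    # Rows: previous packet (received, lost); columns: next packet state.
    return np.array([[1.0 - p0, p0],
                     [1.0 - p1, p1]])

def joint_outcome_matrix(p0, p1):
    # 4x4 matrix over joint outcomes (11, 10, 01, 00): entry [A, B] is
    # P(T_A^B) for two independent, symmetric Gilbert paths.
    P = gilbert_path_matrix(p0, p1)
    return np.kron(P, P)

T = joint_outcome_matrix(p0=0.0175, p1=2.0 / 3.0)  # ~5% loss, mean burst 3
print(T[0, 0])  # P(stay in 11): both paths deliver again
print(T[3, 3])  # P(stay in 00): both paths remain lost (= p1**2)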
MD SYSTEM DESIGN AND IMPLEMENTATION The system described in this paper has been implemented based on the H.264 video coding standard using quarter-pixel motion vector accuracy and all available intra- and inter-prediction modes [20]. We have used reference software version 8.6 for these experiments, with modifications to support adaptive mode selection. Due to the in-loop deblocking filter used in H.264, the current macroblock will depend on neighboring macroblocks within the frame, including blocks which have yet to be coded. This deblocking filter has been turned off in our experiments to remove this causality issue and simplify the problem. The adaptive mode selection is performed on a macroblock basis using the Lagrangian techniques discussed in Section 3 with the expected distortion model from Section 4. Note that this optimization is performed simultaneously for both traditional coding decisions (e.g., inter versus intra coding) as well as for selecting one of the possible MD modes. As mentioned in Section 2, the current system uses a combination of four possible MD modes: single description coding (SD), temporal splitting (TS), spatial splitting (SS), and repetition coding (RC). Note that when coded in a nonadaptive fashion, each method (SD, TS, SS, RC) is still performed in an R-D optimized manner as mentioned above. All of the remaining coding decisions, including inter versus intra coding, are made to minimize the end-to-end distortion. For instance, the RC mode is not simply a straightforward replica of the SD mode. The system recognizes the improved reliability of the RC mode and elects to use far less intra coding, allowing more intelligent allocation of the available bits. Also, it was necessary to modify the H.264 codec to support macroblock-level adaptive interlaced coding in order to accommodate the spatial splitting mode. The temporal splitting mode, however, was implemented using the standard-compliant reference picture selection available in H.264. The packetization of data differs slightly for each mode (see Figure 5). In both the SD and TS approaches, all data for a frame is placed into a single packet. The even frames are then sent along one stream and the odd frames along the other. In the SS and RC approaches, by contrast, data from a single frame is coded into packets placed into both streams. For SS, even lines are sent in one stream and odd lines in the other, while for RC all data is repeated in both streams. Therefore, for SD and TS each frame is coded into one large packet which is sent in alternating streams, while for SS and RC each frame is coded into two smaller packets and one small packet is sent in each stream. Since the adaptive approach (ADAPT) is a combination of each of these four methods, there is typically one slightly larger packet and one smaller packet, and these alternate streams between frames. If a frame is lost in either the TS or SD method, no data exists in the opposite stream at the same time instant, so the missing data is estimated by directly copying from the previous frame. Note that here we copy from the most recent frame in either description, not the previous frame in the same description. In the SS method, if only one description is lost, the decoder estimates the missing lines in the frame using linear interpolation, and if both are lost, it estimates the missing frame by copying the previous frame. Similarly for RC, if only one description is lost, the decoder can use the data in the opposite stream, while if both are lost, it copies the previous frame. 
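A compact way to state the packetization rule of Figure 5 (a sketch; stream indices and payload labels are illustrative only):

def packetize(frame_idx, mode):
    """Return (stream, payload) tuples for one frame under each MD mode."""
    if mode in ("SD", "TS"):
        # One large packet per frame, alternating streams by frame parity.
        return [(frame_idx % 2, "whole frame")]
    if mode == "SS":
        # Two smaller packets: even lines on stream 0, odd lines on stream 1.
        return [(0, "even lines"), (1, "odd lines")]
    if mode == "RC":
        # All coded data repeated in both streams.
        return [(0, "whole frame"), (1, "whole frame")]
    raise ValueError(mode)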
EXPERIMENTAL RESULTS The following results have been obtained using our modified H.264 JM 8.6 codec (described above) with the Foreman and Carphone video test sequences. Both sequences are 30 frames per second at QCIF resolution. The Foreman sequence has 400 frames and the Carphone sequence has 382 frames. To measure the actual distortion experienced at the decoder, we have simulated a Gilbert packet loss model with packet loss rates and expected burst lengths as specified in each section below. For each of the experiments, we have run the simulation with 300 different packet loss traces and averaged the resulting squared-error distortion. The same packet loss traces were used throughout a single experiment to allow for meaningful comparisons across the different MD coding methods. Each path in the system is assumed to carry 30 packets per second, where the packet losses on each path are modeled as a Gilbert process. For wired networks, the probability of packet loss is generally independent of packet size, so the variation in sizes should not generally affect the results or the fairness of this comparison. When the two paths are balanced or symmetric, the optimization automatically sends half the total bitrate across each path. For unbalanced paths, the adaptive system results in a slight redistribution of bandwidth, as is discussed later. In each of these experiments, the encoder is run in one of two different modes: constant bitrate encoding (CBR) or variable bitrate encoding (VBR). In the CBR mode, the quantizer and associated lambda value are adjusted on a macroblock basis in an attempt to keep the number of bits used in each frame approximately constant. Keeping the bitrate constant allows a number of useful comparisons between methods on a frame-by-frame basis, such as those presented in Figure 6. Unfortunately, the changes in quantizer level must be communicated along both streams in the adaptive approach, which leads to some significant overhead. While this signaling information is included in the bitstream, the amount of signaling overhead is not currently incorporated in the R-D optimization decision process, hence leading to potentially suboptimal decisions with the adaptive approach. We mention this since, if all of the overhead were accounted for in the R-D optimized rate control, then the performance of the adaptive method would be even slightly better than shown in the current results. In the VBR mode, the quantizer level is held fixed to provide constant quality. In this case, there is no quantizer overhead and this approach yields results closer to the optimal performance. Since the rates of each mode may vary when in VBR mode (where the quantizer is held fixed), it is not possible to make a fair comparison between different modes at a given bitrate. Therefore, in experiments where we try to make fair comparisons among different approaches at the same bitrate per frame, we operate in CBR mode, for example, Figure 6, and we use VBR mode to compute rate-distortion curves, like those shown in Figure 9. 
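For reproducibility, the channel settings quoted here (average loss rate and expected burst length) can be mapped to Gilbert transition probabilities. The sketch below assumes the standard parameterization, in which the stationary loss probability is p0/(p0 + 1 − p1) and the mean burst length is 1/(1 − p1); whether the authors used exactly this convention is an assumption:

import random

def gilbert_params(avg_loss, mean_burst):
    # mean burst length = 1/(1 - p1); stationary loss = p0 / (p0 + 1 - p1).
    p1 = 1.0 - 1.0 / mean_burst
    p0 = avg_loss * (1.0 - p1) / (1.0 - avg_loss)
    return p0, p1

def simulate_trace(n_packets, p0, p1, seed=0):
    rng, lost, trace = random.Random(seed), False, []
    for _ in range(n_packets):
        lost = rng.random() < (p1 if lost else p0)
        trace.append(lost)
    return trace

p0, p1 = gilbert_params(avg_loss=0.05, mean_burst=3.0)
trace = simulate_trace(100_000, p0, p1)
print(round(p0, 4), round(p1, 4), sum(trace) / len(trace))  # ~0.0175, 0.6667, ~0.05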
MD coding adapted to local video characteristics We first evaluate the system's ability to adapt to the characteristics of the video source. The channel in this experiment was simulated with two balanced paths, each having a 5% average packet loss rate and an expected burst length of 3 packets. The video was coded in CBR mode at approximately 0.4 bits per pixel (bpp). Figure 6 demonstrates the resulting distortion in each frame averaged over the 300 packet loss traces for the adaptive MD method and each of its nonadaptive MD counterparts. The Foreman sequence contains a significant amount of motion from frames 250 to 350 and is fairly stationary from frame 350 to 399. Notice how the SS/RC methods work better during periods of significant motion while the SD/TS methods work better as the video becomes stationary. The adaptive method intelligently switches between the two, maintaining at least the best performance of any nonadaptive approach. Since the adaptive approach adapts on a macroblock level, it is often able to do even better than the best nonadaptive case by selecting different MD modes within a frame as well. Similar results can be seen with the Carphone sequence. The best performing nonadaptive approach varies from frame to frame depending on the characteristics of the video. The adaptive approach generally provides the best performance of each of these. Also shown in Figure 6 are the results from a typical video coding approach which we will refer to as standard video coding (STD). Here, R-D optimization is only performed with respect to quantization distortion, not the end-to-end R-D optimization used in the other approaches. Instead of making inter/intra coding decisions in an end-to-end R-D optimized manner as performed by SD, it periodically intra-updates one line of macroblocks in every other frame to combat error propagation (this update rate was chosen since the optimal intra refresh rate [21] is often approximately 1/p, where p is the packet loss rate). The adaptive MD approach is able to outperform optimized SD coding by up to 2 dB for the Foreman sequence, depending on the amount of motion present at the time. Note that by making intelligent decisions through end-to-end R-D optimization, the SD method examined here is able to outperform the conventional STD method by as much as 4 or 5 dB with the Foreman sequence. The adaptive MD approach outperforms optimized SD coding by up to 1 dB with the Carphone sequence, and optimized SD coding outperforms the conventional STD approach by up to approximately 3 dB. In Figure 7, we illustrate how the mode selection varies as a function of the characteristics of the video source. Specifically, we show the percentage of macroblocks using each MD mode in each frame of the Foreman sequence. From this distribution of MD modes, one can roughly segment the Foreman sequence into three distinct regions: almost exclusively SD/TS in the last 50 frames, mostly SS/RC from frames 250-350, and a combination of the two during the first half. This matches up with the characteristics of the video, which contains some amount of motion at the beginning, a fast camera scan in the middle, and is nearly stationary at the end. 
MD coding adapted to network conditions In our second experiment, we examine how the system adapts to the conditions of the network. The channel in this experiment was simulated with two balanced paths, each with an expected burst length of 3 packets. The video was coded in CBR mode at approximately 0.4 bits per pixel (bpp) and the average packet loss rate was varied from 0 to 10%. Figure 8 demonstrates the resulting distortion in the sequence for the adaptive MD method and each of its nonadaptive MD counterparts. These results were computed by first calculating the mean squared-error distortion by averaging across all the frames in the sequence and across the 300 packet loss traces, and then computing the PSNR. Notice how the adaptive approach achieves a performance similar to the SD approach when no losses occur, but its performance does not fall off as quickly as the average packet loss rate is increased. Near the 10% loss rate, the adaptive method adjusts for the unreliable channel and has a performance closer to the RC mode. Note that the intra update rate for the STD method was adjusted in the experiment to be as close as possible to 1/p, where p is the packet loss rate, as an approximation of the optimal intra update frequency. Since this update rate could only be adjusted in an integer manner, the STD curves above tend to have some jagged fluctuations and in some cases the curves are not even monotonically decreasing. As an example, an update rate of 1/p would imply that one should update one line of macroblocks every 2.22 frames at 5% loss and every 1.85 frames at 6% loss. These two cases have both been rounded to an update of one line of macroblocks every 2 frames, resulting in the slightly irregular curves. Table 2 shows the distribution of MD modes in the adaptive approach at 0%, 5%, and 10% average packet loss rates. As the loss rate increases, the system responds by switching from lower redundancy methods (SD) to higher redundancy methods (RC) in an attempt to provide more protection against losses. It is interesting to point out that even at 0% loss the system does not choose 100% SD coding. The adaptive approach recognizes that occasionally it can be more efficient to predict from two frames ago than from the prior frame, so it chooses TS coding. Occasionally, it can be more efficient to code the even and odd lines of a macroblock separately, so it chooses SS coding. The fact that it selects any RC at 0% loss rate is a little counterintuitive, but this results since coding a macroblock using RC changes the prediction dependencies between macroblocks. The H.264 codec contains many intra-frame predictions, including motion vector prediction and intra-prediction. In order for the RC mode to be correctly decoded even when one stream is lost, the adaptive system must not allow RC blocks to be predicted in any manner from non-RC blocks. If RC blocks had been predicted from SD blocks, for example, the loss of one stream would affect the SD blocks, which would consequently alter the RC data as well. Occasionally, prediction methods like motion vector prediction may not help and can actually reduce the coding efficiency for certain blocks. If this is extreme enough, it can actually be more efficient to use RC, where the prediction would not be used, even though the data is then unnecessarily repeated in both descriptions. 
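The 2.22- and 1.85-frame figures above are easy to reproduce: QCIF frames are 144 lines high, i.e. nine 16-pixel macroblock lines, so an intra refresh rate of 1/p corresponds to one line update every (1/p)/9 frames. The nine-line count is inferred from the QCIF geometry, not stated by the authors:

def line_update_interval(p, mb_lines=9):
    # One full-frame refresh every 1/p frames, spread over mb_lines updates.
    return (1.0 / p) / mb_lines

print(round(line_update_interval(0.05), 2))  # 2.22 frames between updates at 5% loss
print(round(line_update_interval(0.06), 2))  # 1.85 frames at 6% loss
# Both round to one line of macroblocks every 2 frames, as noted above.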
End-to-end R-D performance Figure 9 shows the end-to-end R-D performance curves of each method. This experiment was run in VBR mode with fixed quantization levels. To generate each point on these curves, the resulting distortion was averaged across all 300 packet loss simulations, as well as across all frames of the sequence. The same calculation was then conducted at various quantizer levels to generate each R-D curve. By switching between MD methods, ADAPT is able to outperform optimized SD coding by up to 1 dB for the Foreman sequence and about 0.5 dB for the Carphone sequence. The ADAPT method is able to outperform the STD coding approach by as much as 4.5 dB with the Foreman sequence and up to 3 dB with the Carphone sequence. ADAPT is able to outperform TS, which more or less performs the second best overall, by as much as 0.5 dB. One interesting side result here is how well RC performs in these experiments. Keep in mind that this is an R-D optimized RC approach, not simply the half-bitrate SD method repeated twice. The amount of intra coding used in RC is significantly reduced relative to SD coding, as the encoder recognizes the increased resilience of the RC method and chooses a more efficient allocation of bits. Balanced versus unbalanced paths In our final experiment, we analyze the performance of the adaptive method when used with unbalanced paths, where one path is more reliable than the other. The channel consisted of one path with a 3% average packet loss rate and another with 7%, both with expected burst lengths of 3 packets. The video in this experiment was coded at approximately 0.4 bpp in CBR mode. Table 3 shows the distribution of MD modes in even frames of the sequence versus odd frames. The even frames are those where the larger packet (see Figure 5) is sent along the more reliable path and the smaller packet is sent along the less reliable path. The opposite is true for the odd frames. It is also interesting to compare the results from Table 3 with those from Table 2 at 5% balanced loss. The average of the even and odd frames from Table 3 matches closely with the values from the balanced case in Table 2. As shown in Table 3, the system uses more SS and RC in the less reliable odd frames. These more redundant methods allow the system to provide additional protection for those frames which are more likely to be lost. By doing so, the adaptive system is effectively moving data from the less reliable path into the more reliable path. Table 4 shows the bitrate sent along each path in the balanced versus unbalanced cases. In this situation, the system shifts between 5% and 6% of its total rate into the more reliable stream to compensate for conditions on the network. Since the nonadaptive methods are forced to send approximately half their total rate along each path, it is difficult to make a fair comparison across methods in this unbalanced situation. We are considering ways to compensate for this. However, it is quite interesting that the end-to-end R-D optimization is able to adjust to this situation in such a manner. 
CONCLUSIONS This paper proposed end-to-end R-D optimized adaptive MD mode selection for multiple description coding. This approach makes use of multiple MD coding modes within a given sequence, making optimized decisions using a model of expected end-to-end distortion. The extended ROPE model presented here is able to accurately predict the distortion experienced at the decoder, taking into account both bursty packet losses and the use of multiple paths. This allows the encoder in this system to make optimized mode selections using Lagrangian optimization techniques to minimize the expected end-to-end distortion. We have shown how one such system based on H.264 is able to adapt to local characteristics of the video and to network conditions on multiple paths, and have shown the potential for this adaptive approach, which selects among a small number of simple complementary MD modes, to significantly improve video quality. The results presented above demonstrate how this system accounts for the characteristics of the video source, for example, using more redundant modes in regions particularly susceptible to losses, and how it adapts to conditions on the network, for example, switching from more efficient methods to more resilient methods as the loss rate increases. The results with this approach appear quite promising, and we believe that adaptive MD mode selection can be a useful tool for reliably delivering video over lossy packet networks. Figure 1: Examined MD coding methods: (a) single description coding: each frame is predicted from the previous frame in a standard manner to maximize compression efficiency; (b) temporal splitting: even frames are predicted from even frames and odd from odd; (c) spatial splitting: even lines are predicted from even lines and odd from odd; (d) repetition coding: all coded data repeated in both streams. Figure 2: Gilbert packet loss model. Assuming p0 < p1, the probability of each packet being lost increases if the previous packet was lost. This causes bursty losses in the resulting stream. Figure 3: Conceptual computation of first moment values in the MD ROPE approach: (a) Bernoulli case: the moment values from the previous frame are used to compute the expected values in each of the four possible outcomes, which are then combined to find the moment values for the current frame; (b) Gilbert losses: due to the Gilbert model, the probability of transitioning from any one outcome at time n − 1 to any other outcome at time n changes depending on which outcome is currently being considered. Thus, the four expected outcomes cannot be combined into one single value as was done in the Bernoulli case. Each of these four values must be stored separately for future calculations. Figure 4: Comparison between actual and expected end-to-end PSNR: (a) Foreman sequence; (b) Carphone sequence. This figure demonstrates the ability of this model to track the actual end-to-end distortion, where the expected and actual distortion curves are roughly on top of each other. Also shown in this figure is the quantization-only distortion, which shows the distortion from compression and without any packet loss. Figure 5: Packetization of data in MD modes: (a) SD and TS: data sent along one path, alternating between frames; (b) SS and RC: data spread across both streams; (c) ADAPT: combination of the two, resulting in one slightly larger packet and one slightly smaller. 
Figure 6: Average distortion in each frame for ADAPT versus each nonadaptive approach. Coded at 0.4 bpp with balanced paths, 5% average packet loss rate, and expected burst length of 3: (a) Foreman sequence; (b) Carphone sequence. Figure 7: Distribution of selected MD modes used in the adaptive method for each frame of the Foreman sequence, illustrating how mode selection adapts to the video characteristics: 5% average packet loss rate, expected burst length 3. Figure 8: PSNR versus average packet loss rate: (a) Foreman sequence; (b) Carphone sequence. Video coded at approximately 0.4 bpp. The average packet loss rate for this experiment was varied from 0-10%, and the expected burst length was held constant at 3 packets. Table 1: List of MD coding modes along with their relative advantages and disadvantages. Table 2: Comparing the distribution of MD modes in the adaptive approach at 0%, 5%, and 10% average packet loss rates. (a) Foreman sequence. (b) Carphone sequence. Table 3: Percentage of macroblocks using each MD mode in the adaptive approach when sending over unbalanced paths. Table 4: Percentage of total bandwidth in each stream for balanced and unbalanced paths.
8,853.6
2006-01-01T00:00:00.000
[ "Computer Science" ]
Classical communication enhanced quantum state verification Quantum state verification provides an efficient approach to characterize the reliability of quantum devices for generating certain target states. The figure of merit of a specific strategy is the estimated infidelity ϵ of the tested state to the target state, given a certain number of performed measurements n. Entangled measurements constitute the globally optimal strategy and achieve the scaling that ϵ is inversely proportional to n. Recent advances show that it is possible to achieve the same scaling simply with non-adaptive local measurements; however, the performance is still worse than the globally optimal bound by a constant factor. In this work, by introducing classical communication, we experimentally implement an adaptive quantum state verification. The constant factor is minimized from ~2.5 to 1.5 in this experiment, which means that only ~60% of the measurements are required to achieve a certain value of ϵ compared to the optimal non-adaptive local strategy. Our results indicate that classical communication significantly enhances the performance of quantum state verification, and leads to an efficiency that further approaches the globally optimal bound. INTRODUCTION Quantum information science aims to enhance traditional information techniques by introducing the advantage of 'quantumness'. To date, the major subfields in quantum information include quantum computation 1, quantum cryptography 2 and quantum metrology 3,4, which are respectively in pursuit of more efficient computation, more secure communication, and more precise measurement. To achieve these innovations, one needs to manufacture quantum devices and verify that these devices indeed operate as expected. Various techniques have been developed for the task of inspecting the quantum states generated from these devices. Quantum state tomography (QST) 5 provides full information about an unknown state by reconstructing the density matrix and constitutes a popular point estimation method. However, the conventional tomographic reconstruction of a state is an exponentially time-consuming and computationally difficult process 6. In order to reduce the measurement complexity required to certify quantum states, substantial efforts have been made to formalize more efficient methods. These improved methods normally require prior information or access to partial knowledge about the states. On the one hand, it has been found that with prior information about the category of the tested states, compressed sensing 7,8 and matrix product state tomography 9 can be used to simplify the measurement of quantum states. On the other hand, entanglement witnesses can certify the presence of entanglement with far fewer measurements 10,11; in an extreme case, it has been shown that local measurements on a few copies are sufficient to certify the appearance of entanglement for multipartite entangled systems 12,13. Furthermore, when the applied measurements are correlated through classical communication, quantum tomography can be implemented in a significantly more efficient way [14][15][16]. In quantum information processing, the quantum device is generally designed to generate a specific target state. In this case, the user only needs to confirm that the actual state is sufficiently close to the target state, in the sense that full knowledge about the exact form of the state is excessive for this requirement. Quantum state verification (QSV) provides an efficient solution applicable to this scenario. 
As mentioned above, tomography aims to address the following question: What is the state? QSV addresses a different question: Is the state identical or close to the target state? From a practical point of view, answering this question is sufficient for many quantum information applications. By performing a series of measurements on the output copies of the state, QSV reaches a conclusion like 'the device outputs copies of a state that has at least 1 − ϵ fidelity with the target, with 1 − δ confidence'. In order to verify a specific quantum state, different kinds of strategies can be constructed, and thus it is profitable for the user to seek an optimal strategy. Rigorously, this optimization can be achieved by minimizing the number of measurements n for given values of ϵ and δ. Similar to the realm of quantum metrology 17,18, an optimal QSV strategy also strives for a 1/n scaling of ϵ, with a minimal constant prefactor. For QSV, if the target state is a pure state, the best strategy is the projection onto the target state and its complementary space, with which the 1/n scaling is reached; we call this strategy the globally optimal QSV strategy. Unfortunately, if the target is an entangled state, entangled measurements are demanded, while they are rare resources and difficult to obtain 19. Recently, several works have shown that the 1/n scaling can be achieved with a non-adaptive local (LO) strategy [20][21][22]; LO here means that the applied measurement operators are separable, as opposed to the entangled ones used in the globally optimal strategy. However, this non-adaptive LO strategy is still worse than the globally optimal strategy by a constant factor, which represents the number of additional measurements required to compete with the globally optimal strategy. In this work, we demonstrate adaptive QSV using a photonic apparatus with active bi-directional feed-forward of classical communication between entangled photon pairs, based on recent theoretical works [23][24][25]. The achieved efficiency not only attains the 1/n scaling but also further minimizes the constant prefactor. Both bi- and uni-directional classical communications are utilized in our experiment, and the results show that these adaptive strategies significantly outperform the non-adaptive LO strategy. Furthermore, the bi-directional strategy achieves higher efficiency than the uni-directional strategy, and the number of required measurements is reduced by ~40% compared to the non-adaptive LO strategy. Our results indicate that classical communication is a beneficial resource in QSV, which enhances the performance to a level comparable with the globally optimal strategy. Theoretical framework In a QSV task, the verifier is assigned to certify that his on-hand quantum device does produce a series of quantum states (σ1, σ2, σ3, …, σn) satisfying the following inequality:

$\langle\Psi|\sigma_i|\Psi\rangle > 1 - \epsilon \quad (i = 1, 2, \ldots, n), \quad (1)$

where $|\Psi\rangle$ is the target state that the device is supposed to produce. Equation (1) assumes a different scenario from that of QST, for which all σi are required to be independent and identically distributed. Typically, with probability p_l (l = 1, 2, …, m), the verifier randomly performs a two-outcome local measurement M_l, which is accepted with certainty when performed on the target state. 
When all the measurement outcomes are accepted, the verifier can reach a statistical inference that the state from the tested device has a minimum fidelity 1 − ϵ to the target state, with a statistical confidence level of 1 − δ. For a specific strategy Ω = Σl pl Ml, the minimum number of measurements n required to achieve certain values of ϵ and δ is then given by 23

n ≥ ln(1/δ) / ln{1/[1 − (1 − λ2(Ω))ϵ]} ≈ [1/((1 − λ2(Ω))ϵ)] ln(1/δ),

where λ2(Ω) is the second largest eigenvalue of Ω. This result indicates that it is possible to achieve the 1/n scaling of ϵ in the QSV of pure entangled states. Furthermore, the verifier can optimize the strategy by minimizing the second largest eigenvalue λ2(Ω), as well as the constant factor 1/(1 − λ2(Ω)). For LO strategies with non-adaptive local measurements, the optimal strategy to verify |Ψ(θ)⟩ = cos θ|HV⟩ − sin θ|VH⟩ is identified with the minimum λ2(Ω) = (1 + sin θ cos θ)/(2 + sin θ cos θ), i.e. with constant factor 2 + sin θ cos θ 20 . The globally optimal strategy can be realized by projecting σi onto the target state |Ψ⟩ and its orthogonal complement Ψ⊥, under which λ2(Ω) = 0, and thus the globally optimal bound is calculated as n ≈ (1/ϵ) ln(1/δ). For QSV of entangled states, entangled measurements are required to implement the globally optimal strategy, and these are sophisticated to perform [26][27][28][29] . Therefore, local measurements are preferred from a practical point of view. This contradiction naturally raises the question of how to further narrow the gap between locally and globally optimal strategies with currently accessible techniques. Recently, a theoretical work generalized the non-adaptive LO strategy to adaptive versions by introducing classical communication between the two parties sharing entanglement (see ref. 23 for details). In the uni-directional LOCC (Uni-LOCC) strategy Ω→, three combined measurement settings are applied with prior probabilities {1/(2 + 2sin²θ), 1/(2 + 2sin²θ), sin²θ/(1 + sin²θ)} (θ ∈ (45∘, 90∘)), and the corresponding strategy can be written as given in ref. 23 . A bi-directional LOCC (Bi-LOCC) strategy can be implemented by randomly switching the roles of Alice and Bob, which can be denoted as Ω↔ = |Ψ(θ)⟩⟨Ψ(θ)| + (1/3)(I − |Ψ(θ)⟩⟨Ψ(θ)|). Although both of these strategies utilize one-step adaptive measurement, the Bi-LOCC strategy outperforms the Uni-LOCC one when θ ≠ 45∘. When verifying entangled states with local measurements, the adaptive strategies Ω→ and Ω↔ achieve higher efficiency compared to the non-adaptive LO strategy 23,25 . The efficiencies of the LO, Uni-LOCC and Bi-LOCC strategies depend on their respective constant factors 1/(1 − λ2(Ω)), which are 2 + sin θ cos θ, 1 + sin²θ and 3/2, respectively. Although the performance of all these strategies coincides for two-qubit maximally entangled states (θ = 45∘), the adaptive strategies are still preferred in most practical scenarios, where the realistic states always differ from the maximally entangled ones and are actually closer to target states with θ ≠ 45∘.
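Under the sample-complexity bound quoted above, the advantage of each strategy reduces to its constant factor 1/(1 − λ2(Ω)). The short sketch below (a restatement of that bound, not code from this work) compares the required n for the LO, Uni-LOCC, Bi-LOCC and globally optimal strategies at θ = 60°; the chosen ϵ and δ are illustrative.

```python
import numpy as np

def n_required(eps, delta, const_factor):
    """Measurements needed for infidelity eps at confidence 1 - delta.
    const_factor = 1/(1 - lambda_2(Omega)); bound: n >= ln(1/delta)/ln(1/(1 - eps/const))."""
    nu = 1.0 / const_factor                      # spectral gap 1 - lambda_2(Omega)
    return int(np.ceil(np.log(1/delta) / np.log(1/(1 - nu*eps))))

theta = np.deg2rad(60.0)
factors = {"LO": 2 + np.sin(theta)*np.cos(theta),   # 2 + sin(t)cos(t)
           "Uni-LOCC": 1 + np.sin(theta)**2,        # 1 + sin^2(t)
           "Bi-LOCC": 1.5,                          # 3/2
           "globally optimal": 1.0}                 # lambda_2 = 0
for name, c in factors.items():
    print(name, n_required(eps=0.01, delta=0.05, const_factor=c))
```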
Experimental implementation and results
In the above QSV proposals, a valid statement about the tested states is based on the fact that all the outcomes are accepted, while a single appearance of rejection will cease the verification without a quantified conclusion. In practice, the states generated by quantum devices are unavoidably non-ideal, with a limited fidelity to the target state; thus, there is always a certain probability of rejection in each measurement. Even if the probability of a single rejection is small, it is natural to observe rejection events in an experiment involving a sequence of measurements. As a result, the original proposals are likely to mistakenly characterize qualified quantum devices as unqualified, which is inadequate for experimental implementation.

By considering the proportion of accepted outcomes, a modified strategy is thus developed here, which is robust to a certain proportion of rejection events. Quantitatively, we have the corollary that if ⟨Ψ|σi|Ψ⟩ ≤ 1 − ϵ for all the measured states, the probability for each outcome to be accepted is smaller than 1 − (1 − λ2(Ω))·ϵ. As a result, in the case that the verifier observes an accepted probability p ≥ 1 − (1 − λ2(Ω))·ϵ, it should be concluded that the actual state satisfies Eq. (1) with a confidence level of 1 − δ, where ϵ and δ are calculated from the inequality 12

δ ≤ e^{−n·D(m/n ∥ 1 − (1 − λ2(Ω))ϵ)},

where D(x ∥ y) = x ln(x/y) + (1 − x) ln[(1 − x)/(1 − y)] is the binary Kullback–Leibler divergence and m results are accepted when n measurements are performed. As a result of this modification, in the case that the final accepted probability p ≥ 1 − (1 − λ2(Ω))·ϵ, the verification can eventually reach a conclusion quantifying the distance between the actual and target states. Benefiting from this modification, QSV can be applied to realistic non-ideal states, which allows us to experimentally verify two-qubit entangled states using the above adaptive proposals.

Fig. 1 Diagram of adaptive QSV with LOCC. The figure represents the general procedure of an adaptive QSV to verify whether an unknown source generates a pure target state. The generated states σ1, σ2, σ3, …, σn are first projected by Alice with a randomly selected measurement setting Πi, applied with prior probability pi. The measurement on Bob's side depends on the outcome of Alice's measurement, i.e., if the outcome of Πi is 0 (1), Bob performs measurement Πi0 (Πi1) accordingly. Bob's outcomes 1 and 0 are coarse-grained as accepted (√) and rejected (×) events, respectively. Similarly, Alice can perform measurements according to Bob's outcome. A bi-directional strategy can be applied by performing these two uni-directional strategies randomly. Through a statistical analysis of the sequence of accepted and rejected events, the verifier can ascertain the largest possible distance between the actual and target states up to some finite statistical confidence.

With the setup shown in Fig. 2, we can perform adaptive QSV. The setup consists of an entangled photon-pair source (see 'Methods' for details), two mechanical optical switchers (MOSs) and two high-speed triggered polarization analyzers (TPAs). For adaptive QSV, Alice can guide her photon towards the MOS and perform a randomly selected projective measurement with the TPA. Afterward, through uni-directional classical communication, Alice's outcome is sent to Bob to control the measurement performed on the paired photon, which is delayed at Bob's MOS. An opposite adaptive process can also be realized by switching the roles of Alice and Bob; thus, the symmetric adaptive QSV can be executed by randomly selecting the two communication directions with equal probabilities. Technically, this random adaptive operation is realized by controlling the MOS with a quantum random number generator (QRNG), which outputs a binary signal (0 or 1) to decide which MOS transmits the photon directly while the other MOS delays the passing photon. For both Uni- and Bi-LOCC strategies, we use a QRNG to randomly decide the applied setting among M1, M2, M3; therefore, the settings are unknown to the incident photon pairs prior to the measurement. In order to confirm the power of classical communication in QSV, three strategies (LO, Uni-LOCC and Bi-LOCC) are utilized to verify a partially entangled state |Ψ(60∘)⟩, and the results are shown in Fig. 3.

Fig. 2 Experimental setup. The setup includes an entanglement source, two sets of MOS and two sets of TPA. The entanglement source is mainly a Sagnac interferometer (SI) in a triangle configuration, and the generated photon pairs are distributed to two separate parties, namely Alice and Bob. The MOS consists of two D-shaped mirrors mounted on a motorized rotation stage (MRS), and a 100-m single-mode fiber (SMF) for delaying. Each TPA is composed of one electro-optical modulator (EOM) and one standard polarization analyzer, which consists of one half-wave plate (HWP) and one quarter-wave plate (QWP) mounted on MRSs, and a following polarizing beam splitter (PBS) with two single-photon detectors (SPDs) at its two exits. For the Uni-LOCC protocol, the MOS on Alice's side is rotated to be open for the passing photon, which is directly measured by the TPA with a randomly selected measurement setting. The other photon, on Bob's side, is reflected into the SMF at Bob's MOS and then directed into the TPA after exiting the SMF. Adaptive processing is realized by using the outcome of Alice's measurement to control the measurement setting of Bob's TPA. Technically, Alice's photon triggers the corresponding SPD, and the produced pulse triggers an arbitrary function generator (AFG) to output a recognizable signal for the EOM. After this signal is amplified to an adequate amplitude by a radio-frequency amplifier (RFA), the EOM can be driven to perform the required operation on the passing photon; thus, Bob's measurement is adaptive to the outcome of Alice's measurement. Coincidences are recorded and analyzed by an ID800 (ID Quantique). The Bi-LOCC protocol can be implemented by randomly switching the roles of Alice and Bob. The randomness of the measurement setting and communication direction is realized by controlling the TPA and MOS with the QRNG, respectively. Di dichroic, MMF multi-mode fiber, IF interference filter, SMF single-mode fiber, PCP phase compensation plate, L lens.
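Before turning to the data in Fig. 3, the accept-proportion analysis above can be made concrete: given m accepted outcomes out of n rounds, the confidence 1 − δ follows from the Chernoff-type inequality with the binary Kullback–Leibler divergence. This is a hedged sketch of that bookkeeping, not the authors' analysis code; the threshold uses the constant factor 1/(1 − λ2(Ω)) (3/2 for Bi-LOCC), and the example numbers are made up.

```python
import numpy as np

def kl(x, y):
    """Binary Kullback-Leibler divergence D(x || y)."""
    return x*np.log(x/y) + (1-x)*np.log((1-x)/(1-y))

def confidence(m, n, eps, const_factor):
    """Confidence 1 - delta that <Psi|sigma|Psi> > 1 - eps, given m accepts out of n.
    Informative only when the observed rate m/n exceeds the threshold
    1 - (1 - lambda_2(Omega)) * eps."""
    thr = 1 - eps / const_factor
    p = m / n
    if p <= thr:
        return 0.0            # the bound gives no conclusion in this regime
    return 1 - np.exp(-n * kl(p, thr))

print(confidence(m=4950, n=5000, eps=0.05, const_factor=1.5))  # Bi-LOCC example
```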
In Fig. 3a, the results of 50 trials are averaged; they approximately coincide with the theoretical lines for the first few measurements and deviate from the predicted linearity afterward. This deviation mainly results from the difference between the verified states and the ideal target state, which leads to rejection outcomes in QSV. In other words, only if the verified states are perfectly identical to the target state can a persistent 1/n scaling be observed in a practical QSV. Since the occurrence rates of rejections are in principle equal for the different strategies, a distinct gap in the estimated fidelity can be seen between the adaptive and non-adaptive strategies, as predicted by the theory. These results indicate the power of classical communication in boosting the performance of QSV. However, the practical scaling is determined not only by the optimality of the strategy but also by the quality of the actual state. In this sense, we can only access the intrinsic performance of a strategy by testing an ideal state. Although it is impossible to generate an ideal state in an experiment, we can circumvent this difficulty by studying the first few measurements, for which the occurrence of rejections is fairly rare. In Fig. 3b, the first 25 measurements of single trials in which all the outcomes were accepted are plotted, accompanied by the averaged results of Fig. 3a over the same range. The efficiency can be characterized by the slopes of linear fits to these data points.
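As an illustration of how such a slope is extracted, the snippet below fits a line to synthetic 1/ϵ-versus-n points; the slope 0.22 and the noise level are hypothetical stand-ins chosen only to echo the Bi-LOCC value quoted next, not the measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(1, 26)                               # first 25 measurements
inv_eps = 0.22 * n + rng.normal(0, 0.1, n.size)    # hypothetical 1/eps readings

slope, intercept = np.polyfit(n, inv_eps, 1)       # efficiency = fitted slope
print(f"fitted slope (efficiency): {slope:.3f}")
```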
For the LO, Uni-LOCC and Bi-LOCC strategies, the fitted slope values of the averaged points are 0.13, 0.17 and 0.187, respectively. After eliminating the effect of state deviations by considering all-accepted single trials, the fitted slope values are 0.135, 0.188 and 0.22 for the LO, Uni-LOCC and Bi-LOCC strategies, respectively, and these values match the theoretical predictions for ideal states. As a result, the efficiency of the Bi-LOCC strategy is 1.63 times that of LO and 1.17 times that of Uni-LOCC. In other words, by introducing bi-directional classical communication, only ~60% of the measurements of the non-adaptive LO scenario are required to verify the states to a certain level of fidelity. The performance gap between the optimal local strategy and the globally optimal strategy is further narrowed; concretely, the constant factor 1/(1 − λ2(Ω)) is reduced to approximately 1.5. A further study of the performance gap between the Uni- and Bi-LOCC strategies was made by verifying another two entangled states, |Ψ(70∘)⟩ and |Ψ(80∘)⟩; the averaged results of 50 trials are shown in Fig. 4. Both results show that Bi-LOCC significantly outperforms Uni-LOCC, with differences in the estimated fidelities of 2.1% and 1.3% for |Ψ(70∘)⟩ and |Ψ(80∘)⟩, respectively. Classical communication thus enhances QSV more effectively when the information is transferred bi-directionally rather than in an ordinary uni-directional configuration.

DISCUSSION
One main motivation to explore quantum resources, such as entangled states and measurements, is their potential power to surpass classical approaches. On the other hand, the fact that quantum resources are generally complicated to produce and control inspires another interesting question: how can classical resources be used exhaustively to approach the bound set by quantum resources? In the task of verifying an entangled state, the utilization of entangled measurements constitutes a globally optimal strategy that achieves the best possible efficiency. Surprisingly, one can also construct strategies merely with local measurements and achieve the same scaling. In this experiment, we show that by introducing classical communication into QSV, the performance with local measurements can be further enhanced to approach the globally optimal bound. As a result, to verify the states to a certain level of fidelity, the number of required measurements is only 60% of that for the non-adaptive local strategy. Meanwhile, the gap between the locally and globally optimal bounds is distinctly reduced, with the constant factor of the 1/n scaling minimized to 1.5. Furthermore, QSV has recently been generalized to the adversarial scenario, where arbitrarily correlated or entangled state preparation is allowed 30,31 .

Fig. 4 Verification of |Ψ(70∘)⟩ and |Ψ(80∘)⟩. Two adaptive strategies, Uni-LOCC and Bi-LOCC, are utilized to study the performance gap between them. In both a and b, δ is set to 0.05, and the value of 1/ϵ is plotted against the number of measurements n. The averaged results of 50 trials for |Ψ(70∘)⟩ and |Ψ(80∘)⟩ are shown together with the theoretical lines for ideal states. For both verified states, a distinct gap in the estimated fidelity can be seen between the Uni- and Bi-LOCC strategies. The standard errors of the averaged data points can be approximately expressed as ε = sn, with s = 0.037 and 0.035 for the Uni- and Bi-LOCC strategies in a, and s = 0.045 and 0.057 for the Uni- and Bi-LOCC strategies in b.
METHODS
Generation of entangled photon pairs
In the first part of the setup, tunable two-qubit entangled states are prepared by pumping a nonlinear crystal placed in a phase-stable Sagnac interferometer (SI). Concretely, a 405.4 nm single-mode laser is used to pump a 5-mm-long bulk type-II periodically poled potassium titanyl phosphate (PPKTP) crystal placed in the phase-stable SI to produce polarization-entangled photon pairs at 810.8 nm. A polarizing beam splitter (PBS) followed by an HWP and a PCP is used to control the polarization mode of the pump beam. The lenses before and after the SI are used to focus the pump light and collimate the entangled photons, respectively. The interferometer is composed of two highly reflective, polarization-maintaining mirrors, a Di-HWP and a Di-PBS; 'Di' here means that the component works at both 405.4 and 810.8 nm. The Di-HWP flips the polarization of passing photons, such that the type-II PPKTP can be pumped by the same horizontally polarized light from both the clockwise and counterclockwise directions. A Di-IF and an LPF (long-pass filter) are used to remove the pump light. A BPF (band-pass filter) and SMFs are used for spectral and spatial filtering, which significantly increases the fidelity of the entangled states. The whole setup, in particular the PPKTP, is sensitive to temperature fluctuations; placing the PPKTP on a temperature controller (±0.002 °C stability) and sealing the SI with an acrylic box help improve temperature stability. Polarization-entangled photon pairs are generated in the state |Ψ(θ)⟩ = cos θ|HV⟩ − sin θ|VH⟩ (H and V denote the horizontally and vertically polarized components, respectively), where θ is controlled by the pump polarization.

Measurement setting for adaptive QSV
For the QSV of two-qubit pure entangled states, Alice's measurements Πi (i = 1, 2, 3) are selected to be Pauli X, Y and Z measurements. When the outcome of X, Y or Z is 1 (0), Bob performs Π11 = |υ+⟩⟨υ+| (Π10 = |υ−⟩⟨υ−|), Π21 = |ω+⟩⟨ω+| (Π20 = |ω−⟩⟨ω−|) and Π31 (Π30), respectively, where the vectors are defined as |υ±⟩ = sin θ|H⟩ ∓ cos θ|V⟩ and |ω±⟩ = sin θ|H⟩ ± i cos θ|V⟩. These adaptive measurement settings constitute the optimal Uni-LOCC strategy, which has the form 23

M1 = |+⟩⟨+| ⊗ |υ+⟩⟨υ+| + |−⟩⟨−| ⊗ |υ−⟩⟨υ−|,
M2 = |R⟩⟨R| ⊗ |ω−⟩⟨ω−| + |L⟩⟨L| ⊗ |ω+⟩⟨ω+|,

where |+⟩ = (1/√2)(|H⟩ + |V⟩) and |−⟩ = (1/√2)(|H⟩ − |V⟩) denote the eigenstates of the Pauli X operator, and |R⟩ = (1/√2)(|H⟩ + i|V⟩) and |L⟩ = (1/√2)(|H⟩ − i|V⟩) denote the eigenstates of the Pauli Y operator. In each of these three combined local measurement settings, the choice of Bob's measurement setting is determined by the outcome of Alice's measurement, which is achieved by controlling the local operation of Bob's EOM according to Alice's outcome.
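A quick numerical check of these settings is possible: build M1 and M2 from the projectors defined above and verify that the target state is accepted with certainty, ⟨Ψ(θ)|Mi|Ψ(θ)⟩ = 1. The sketch below follows the stated definitions of |υ±⟩, |ω±⟩ and the Pauli eigenstates; it is an independent sanity check, not the authors' code.

```python
import numpy as np

def proj(v):
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

t = np.deg2rad(60.0)
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = np.cos(t) * np.kron(H, V) - np.sin(t) * np.kron(V, H)   # |Psi(theta)>

plus, minus = (H + V) / np.sqrt(2), (H - V) / np.sqrt(2)      # Pauli X eigenstates
R, L = (H + 1j * V) / np.sqrt(2), (H - 1j * V) / np.sqrt(2)   # Pauli Y eigenstates
u_plus, u_minus = np.sin(t) * H - np.cos(t) * V, np.sin(t) * H + np.cos(t) * V
w_plus, w_minus = np.sin(t) * H + 1j * np.cos(t) * V, np.sin(t) * H - 1j * np.cos(t) * V

M1 = np.kron(proj(plus), proj(u_plus)) + np.kron(proj(minus), proj(u_minus))
M2 = np.kron(proj(R), proj(w_minus)) + np.kron(proj(L), proj(w_plus))

for name, M in (("M1", M1), ("M2", M2)):
    print(name, np.real(psi.conj() @ M @ psi))   # both should print 1.0
```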
Ballistic Coefficient Calculation Based on Optical Angle Measurements of Space Debris

Atmospheric drag is an important factor affecting orbit determination and prediction of low-orbit space debris. To obtain accurate ballistic coefficients of space debris, we propose a calculation method based on measured optical angles. Angle measurements of space debris with a perigee height below 1400 km, acquired from a photoelectric array, were used for orbit determination. Perturbation equations of atmospheric drag were used to calculate the semi-major-axis variation. The ballistic coefficients of space debris were estimated and compared with those published by the North American Aerospace Defense Command in terms of orbit prediction error. The 48 h orbit prediction error of the ballistic coefficients obtained from the proposed method is reduced by 18.65% compared with the published error. Hence, our method seems suitable for calculating space debris ballistic coefficients and supporting related practical applications.

Introduction
Most space applications and studies have increasingly raised the requirements for orbit calculation accuracy. As typical non-conservative forces, atmospheric drag perturbations have a long-term and significant impact on the orbits of space targets [1][2][3]. For accurately calculating atmospheric drag, the ballistic coefficients of space debris are important factors, and the acquisition of accurate ballistic coefficients allows us to determine and predict the orbits of low-orbit space debris [4]. The ballistic coefficient of a generic body can be expressed as [5]

B = C_D A / m, (1)

where C_D is the atmospheric drag coefficient of the target, A is the cross-sectional area with respect to the velocity direction of the target relative to the atmosphere at a given time, and m is the mass of the target. The ballistic coefficients of different space objects vary over time. The average semi-major-axis change caused by atmospheric drag can be calculated from the mean elements of space debris, and the ballistic coefficient can then be obtained by using the drag perturbation equation of the semi-major axis [5][6][7][8]. This method uses the mean elements in the orbital data of approximately 17,000 targets' two-line elements (TLEs), updated daily by the North American Aerospace Defense Command (NORAD) through its space tracking website. The calculation of a space target's orbit from a TLE orbit report requires the SGP4 model developed by NORAD. The SGP4 model was developed by Ken Cranford in 1970 and is used for near-Earth satellites [9]. This model is a simplification of Lane and Cranford's (1969) extensive analytical theory, which takes into account the effects of perturbations such as Earth's non-spherical gravity, solar and lunar gravity, solar radiation pressure, and atmospheric drag. SGP4 (Simplified General Perturbations) is a simplified conventional perturbation model that can be applied to near-Earth objects with orbital periods of less than 225 min [10,11]. The method then calculates the atmospheric drag perturbation on the semi-major axis of the space debris and estimates the ballistic coefficient. This approach provides estimated ballistic coefficients of multiple low-orbit space targets for quality verification of TLE data. In 2020, Wei et al. [12] proposed an iterative calculation method based on multiple sets of TLEs from public data of space targets to reduce the influence of abnormal TLEs on the calculation results and obtain the ballistic coefficients of space debris.
In 2022, Kuai et al. [13] collected a training set for predicting ballistic coefficients by using space debris TLEs, the simplified general perturbation model (SGP4), and publicly available object falling times as measured data samples. They used iterative correction of the ballistic coefficients to construct a long short-term memory neural network to predict the ballistic coefficients. Considering the previous research, we propose a method for calculating the ballistic coefficients of space debris using optical angle measurements. Data acquisition for this method is more convenient than for radar measurements. The basic principle of the method is as follows: (1) First, NORAD TLE space debris data are used to identify the optical angle measurements of massive space debris obtained from the photoelectric telescope array at the Changchun Observatory. (2) Then, the recognition results are combined with the corresponding observations to determine the orbit. The orbit determination results are used to infer the mean elements of the space debris at a specific time. (3) Finally, the average ballistic coefficient of the corresponding space debris arc segment is calculated using the change in the semi-major axis of the corresponding elements and an atmospheric model. To validate the calculations of our method, we used extrapolated ephemeris calculations for comparison.

Mean Elements of Space Debris
Owing to the inability of space debris telescope arrays to continuously track observation targets, publicly available two-line element data should be used for space debris orbit recognition before orbit calculation. Hence, joint identification and processing of the massive unknown-target data generated by space debris telescope arrays should be performed. The calculation of the mean elements of space debris based on optical angle measurements is described in Figure 1. We apply the gradient descent method to calculate the mean elements [14][15][16]. In addition, we use the SGP4 model as follows:

(y1, y2, y3, y4, y5, y6) = f(x1, x2, x3, x4, x5, x6), (2)

where y is the orbital parameter PosVel, and x represents the orbital elements described by the semi-major axis (a), eccentricity (e), inclination (i), longitude of the ascending node (Ω), argument of periapsis (ω), and mean anomaly at epoch (M). In Equation (2), f denotes the SGP4 model, comprising dozens of equations. The SGP4 model can provide the orbital parameters at a specific time from the mean elements [17][18][19][20]. The orbit parameters PosVel_i of the orbit determination result are used to back-calculate the instantaneous orbital elements x_i; the instantaneous orbital elements x_i are then propagated to obtain y_i at different time instants t. We also calculate the root mean square error (RMSE) as follows:

RMSE = sqrt[ (1/N) Σ_i (y_i − ŷ_i)² ], (3)

where ŷ_i denotes the observed orbital parameters. The partial derivative of the RMSE with respect to each orbital element is obtained numerically as follows:

g_i = ∂RMSE/∂x_i ≈ [RMSE(x_i + Δx_i) − RMSE(x_i)] / Δx_i. (4)

We perform the fitting and calculation using the Adam gradient method. Unlike the conventional gradient descent method, the Adam gradient can automatically adjust the step size at each iteration as follows [21][22][23][24][25]:

m_t = β1 m_{t−1} + (1 − β1) g_t, (5)
v_t = β2 v_{t−1} + (1 − β2) g_t², (6)
m̂_t = m_t / (1 − β1^t), v̂_t = v_t / (1 − β2^t), (7)
x_{t+1} = x_t − α m̂_t / (sqrt(v̂_t) + ε), (8)

where β1 = 0.9, β2 = 0.999, ε = 1 × 10⁻⁸, v_0 = 0, and m_0 = 0; x_i is an orbital element, g_i is the partial derivative of the RMSE with respect to element x_i, and t is the number of iterations.
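A compact sketch of this fitting loop is given below: Adam (Eqs. (5)-(8)) with forward-difference gradients (Eq. (4)) minimizing an RMSE-type loss. The loss here is a toy stand-in for the PosVel residual of Eq. (3), and the element values are hypothetical; the real pipeline would evaluate the SGP4 model inside the loss.

```python
import numpy as np

def adam_fit(x0, loss, lr=1e-3, steps=2000, h=1e-6,
             beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimize loss(x) with Adam, using forward-difference numerical gradients."""
    x = np.asarray(x0, float).copy()
    m, v = np.zeros_like(x), np.zeros_like(x)
    for t in range(1, steps + 1):
        # Eq. (4): one-sided finite differences along each element
        g = np.array([(loss(x + h*e) - loss(x)) / h for e in np.eye(x.size)])
        m = beta1*m + (1-beta1)*g                  # Eq. (5)
        v = beta2*v + (1-beta2)*g**2               # Eq. (6)
        mhat, vhat = m/(1-beta1**t), v/(1-beta2**t)  # Eq. (7)
        x -= lr * mhat / (np.sqrt(vhat) + eps)     # Eq. (8)
    return x

# Toy check with a quadratic stand-in for the RMSE of Eq. (3):
target = np.array([7000.0, 0.001, 51.6, 120.0, 80.0, 10.0])  # hypothetical elements
rmse = lambda x: np.sqrt(np.mean((x - target)**2))
print(adam_fit(target + 1.0, rmse, lr=0.05, steps=3000))     # converges near target
```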
Ballistic Coefficient Calculation from Mean Elements
According to Picone et al. [26,27], the semi-major-axis decay caused by drag can be used to calculate the ballistic coefficient. The rate of change of the semi-major axis is

da/dt = (2a²/μ) v · v̇_D, (9)

where a is the semi-major axis, μ is the product of the gravitational constant and the Earth's mass, and v̇_D is the vector of the drag acceleration on the space debris, given by

v̇_D = −(1/2) B ρ |v − V| (v − V), (10)

where ρ is the atmospheric density, v is the debris velocity vector, and V is the local atmospheric (wind) velocity. By substituting Equation (10) into Equation (9) and integrating from time t1 to t2, we obtain

Δa(t1, t2) = −(B/μ) ∫_{t1}^{t2} a² ρ |v − V| [(v − V) · v] dt, (11)

where Δa(t1, t2) is the average semi-major-axis change caused by atmospheric drag from time t1 to t2. The corresponding numerical integration can be expressed as

Δa(t1, t2) ≈ −(B/μ) Σ_k a_k² ρ_k |v_k − V_k| [(v_k − V_k) · v_k] Δt_k. (12)

Therefore, the average ballistic coefficient from time t1 to t2 is given by

B̄ = −μ (a_{t2} − a_{t1}) / ∫_{t1}^{t2} a² ρ |v − V| [(v − V) · v] dt, (13)

where a_{t2} and a_{t1} can be determined as described in Section 2.1. The position and velocity of the target space debris at time t can be determined and calculated based on the orbit determination result from the optical observations. The atmospheric density at the target location at time t can be obtained by averaging multiple atmospheric density models. In this study, we obtained the atmospheric density by averaging the results of three atmospheric density models: NRLMSISE00, DTM2000, and JB2006. The parameters ê_{v−V} and v − V were calculated from the orbit determination results and the atmospheric wind-field model; we adopted the HWM96 wind-field model. The instantaneous semi-major axis was calculated from the velocity and position parameters of the target space debris at each time as follows [28][29][30]:

a = (2/r − v²/μ)⁻¹, (14)

where r is the geocentric distance of the space debris and v is the debris velocity. Overall, the ballistic coefficients of low-orbit space debris can be calculated using optical angle measurements from a photoelectric telescope array, as described in Figure 2.
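The two computational pieces of this section, the vis-viva inversion of Eq. (14) and the ratio of Eq. (13) evaluated with a trapezoidal rule as in Eq. (12), can be sketched as follows. Array shapes and units are illustrative assumptions (km, s, and a density already expressed in consistent units); real code must source ρ and V from the atmospheric and wind models named above.

```python
import numpy as np

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def semi_major_axis(r_vec, v_vec):
    """Vis-viva inversion, Eq. (14): a = (2/r - v^2/mu)^-1."""
    r, v = np.linalg.norm(r_vec), np.linalg.norm(v_vec)
    return 1.0 / (2.0 / r - v * v / MU)

def mean_ballistic_coefficient(t, a, rho, v, V):
    """Eq. (13) with a trapezoidal quadrature (Eq. (12)).
    t: (N,) times [s]; a: (N,) semi-major axes [km]; rho: (N,) densities;
    v, V: (N, 3) debris and wind velocities [km/s], all in consistent units."""
    vrel = v - V
    f = a**2 * rho * np.linalg.norm(vrel, axis=1) * np.einsum("ij,ij->i", vrel, v)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoid rule
    return -MU * (a[-1] - a[0]) / integral
```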
Verification of Mean Element Calculations
From optical observations acquired over 15-19 May 2021, we obtained the mean elements for NORAD number 43,476. A comparison of the obtained prediction error and the public TLE data prediction error is shown in Figure 3. The squares in the solid line represent the prediction errors of the public NORAD TLE data, and the circles in the dashed line represent the extrapolation errors of the mean elements obtained with the proposed method. The target with NORAD number 43,476 in Figure 3 is a satellite tracked by laser ranging. The orbit calculation was conducted using the laser ranging data to obtain the standard orbit. The NORAD TLE data were used in the SGP4 model to obtain the position parameters of the target, and the difference between these parameters and those of the standard orbit was determined to obtain the inversion error of the TLE data at each time. The mean elements obtained from the optical observations were also used in the SGP4 model to obtain the position parameters of the target over a specific period, and the inversion error at each time in that period was obtained through comparison with the standard orbit. The public NORAD TLE data used for comparison are listed in Table 1, and the experimental calculation results in TLE format are listed in Table 2. The error curve obtained using the proposed method is shown in Figure 3. At the origin of the x-axis, which indicates the epoch of the mean elements, the position error of the public data was 1046.24 m, while that of the experimental calculation was 584.51 m, representing an error reduction of 40.08%. Hence, the mean elements obtained using the proposed method were more accurate than the published values at the target time. Considering previous methods for calculating ballistic coefficients using public NORAD TLE data [2], the accuracy of the mean elements obtained by combining optical angle measurements with gradient descent for orbit determination is suitable for calculating the ballistic coefficients.
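The TLE-to-position step used throughout this comparison can be reproduced with the open-source sgp4 Python package, which implements the same SGP4 model; the sketch below is a minimal usage example under that assumption. The two TLE lines are placeholder sample values and should be replaced with a current record from a catalog.

```python
from sgp4.api import Satrec, jday

# Sample TLE (placeholder record; substitute real lines from a catalog).
l1 = "1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9991"
l2 = "2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482"

sat = Satrec.twoline2rv(l1, l2)
jd, fr = jday(2019, 12, 9, 12, 0, 0)   # evaluation epoch, UTC
err, r, v = sat.sgp4(jd, fr)           # position r [km], velocity v [km/s] (TEME frame)
print(err, r, v)                       # err == 0 means propagation succeeded
```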
Verification of Ballistic Coefficient Calculations
To verify the effectiveness of the proposed method for ballistic coefficient calculation, the orbit prediction method was used for comparison. The following procedure was adopted: (1) The publicly available TLE data over the study period were obtained for verification. (2) The public SGP4 model was used to calculate the two-line elements, and a period was set to predict the ephemeris from the public data. (3) The ballistic coefficient of the public TLEs during the experiment period was replaced by the ballistic coefficient of the space debris obtained from the experimental calculation. The other parameters contained in the TLE data, such as the six orbital mean elements, remained unchanged, thus enabling us to obtain new TLE data to verify the calculation results. (4) The public SGP4 model was used to calculate the TLEs of the experimental calculation results, and a period was set to predict the ephemeris from the experimental calculations. (5) The statistical orbit determination result of the space debris obtained from the optical angle measurements was used as the true value of the orbit parameters. The RMSE was calculated using the predicted ephemeris and the true orbit value to evaluate the quality of the experimental results. The RMSE is given by

RMSE = sqrt[ (1/N) Σ_i |C_i − O_i|² ], (15)

where C is the position parameter at the extrapolated time and O is the position parameter of the orbit determination result. The detection accuracy of the equipment mentioned in Section 2 is 7.65 (root mean square error). The satellite laser ranging data publicly released by the ILRS were used for the orbit determination calculation, and the orbit determination results were used as the standard orbit. The accuracy evaluation of the orbit determination results of the optical measurements against the standard orbits is shown in Figure 4. In Figures 5-8, the squares in the solid lines represent the ephemeris error predicted from the target ballistic coefficient of the public NORAD TLE data, while the circles in the dashed lines represent the ephemeris error predicted from the ballistic coefficient of the TLE data calculated using the proposed method. An interval of 50 min was considered between data points. The RMSEs of different space debris between the published ballistic coefficients and the experimental results are listed in Table 3. The table shows the results for the same space debris over 24 h of prediction. Figures 9 and 10 show the error of orbit prediction over 48 h using the public TLE data and the experimental results, respectively.
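The RMSE of Eq. (15) used in step (5) is a one-liner; the sketch below assumes the predicted ephemeris C and the reference orbit O are given as (N, 3) position arrays in the same frame and units.

```python
import numpy as np

def ephemeris_rmse(C, O):
    """Eq. (15): RMSE between predicted (C) and reference (O) positions."""
    d = np.linalg.norm(np.asarray(C) - np.asarray(O), axis=1)
    return np.sqrt(np.mean(d ** 2))
```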
Discussion
As shown in Figure 3 and Section 3.1, when using the proposed method described in Section 2.1, the mean elements obtained were more accurate than the published values at the target time, representing an error reduction of 40.08%. As shown in Table 3 and Figures 5-8, when using the ballistic coefficients calculated with the proposed experimental method for orbit prediction, the 24 h prediction RMSE was smaller than that obtained using the published ballistic coefficients. Although the reduction was not large, it still shows that this method can achieve an accuracy no less than that of the publicly available data. As shown in Figures 9 and 10, when using the ballistic coefficients calculated with the proposed experimental method for orbit prediction, the 48 h prediction RMSE was smaller than that obtained using the published ballistic coefficients. These results indicate the effectiveness of the proposed method for ballistic coefficient calculation.

Conclusions
To improve the accuracy of calculating the ballistic coefficients of space debris, we propose a calculation method based on optical angle measurements. The proposed method uses optical angle measurements to determine the position and velocity parameters of space debris during a specific period through precise orbit determination and provides the ballistic coefficients of the space debris based on the calculated parameters. According to the verification of the ballistic coefficient calculation results for NORAD space debris 25,876, 39,093, 40,342, and 44,722, the calculated ballistic coefficients of space debris provide a reduction of 18.65% in the predicted ephemeris RMSE when compared with the published values for 15 May 2021. After verification, the ballistic coefficients obtained from our method were found to be more accurate than the publicly available ones, demonstrating the effectiveness and applicability of the proposed method. The next research direction is to investigate the relationship between the accuracy of the ballistic coefficient calculation results and the accuracy of the optical observation system, in order to further improve the calculation accuracy.
PORCINE EPIDEMIC DIARRHOEA VIRUS WITH A RECOMBINANT S GENE DETECTED IN HUNGARY, 2016

Porcine epidemic diarrhoea virus (PEDV) can cause a severe enteric disease affecting pigs of all ages. In January 2016, diarrhoea with occasional vomiting was observed in a small pig farm in Hungary. All animals became affected, while mortality (of up to 30%) was only seen in piglets. Samples from different age groups and the carcass of a piglet were examined by various methods including pathology, bacteriology and molecular biology. PEDV was confirmed by PCR and its whole genome sequence was determined. The sequence PEDV HUN/5031/2016 showed high identity with recently reported European viruses. Differences were found mostly in the S gene, where recombination was detected with a newly identified and already recombinant swine enteric coronavirus (SeCoV) from Italy. The present report describes the first porcine epidemic diarrhoea outbreak in Hungary after many years and gives an insight into the genetics of the Hungarian PEDV.

Porcine epidemic diarrhoea virus (PEDV) belongs to the family Coronaviridae. Coronaviruses possess a positive-sense, single-stranded RNA genome of 26.4 to 31.7 kb, which makes them the largest enveloped RNA viruses (Woo et al., 2010). The genome of PEDV is approximately 28 kb long and consists of open reading frame (ORF) 1ab, which encodes non-structural proteins. Only one third of the genome encodes structural proteins (Kocherhans et al., 2001), namely spike (S), envelope (E), membrane (M) and nucleocapsid (N). In addition, there is another ORF, named ORF3, between the S and E genes, encoding an ion channel, which possibly regulates virus production (Wang et al., 2012). Among these, the main research interest is focused on the S gene and its glycoprotein product, which in the presence of proteolytic trypsin is cleaved into two functionally distinct subunits, S1 and S2, responsible for receptor binding and fusion mechanisms, respectively. This makes the spike protein the primary target for neutralising antibodies (Wicht et al., 2014).
PEDV was first described as the causative agent of porcine epidemic diarrhoea (PED) in the 1970s in England and Belgium (Wood, 1977; Pensaert and De Bouck, 1978). Thereafter the virus spread throughout Europe; however, the disease became infrequent in the last two decades, and epidemic outbreaks were reported only sporadically (Martelli et al., 2008). In contrast, PED still causes significant economic losses in Asian countries, especially since 2010, when new, highly pathogenic variants of PEDV were found (Li et al., 2012). Genetic analyses suggest that in the spring of 2013 PEDVs of Chinese origin emerged in the United States of America (USA), where prior to that PED had been considered an exotic disease (Huang et al., 2013). After its introduction, PED spread rapidly within the USA and through other parts of the American continent (EFSA AHAW Panel, 2014). However, not only these highly pathogenic viruses emerged in the USA; new variants were also found, which were introduced into North America presumably at the same time as the previously identified ones. These variants do not cause severe clinical signs and have certain insertions and deletions in their S gene, which coined the designation S INDEL strains (Vlasova et al., 2014). The Asian and American outbreaks again attracted attention to PED in Europe; however, it is still notifiable only in a few countries (EFSA, 2016). In France, an outbreak caused by an S INDEL strain was reported in 2015 by Grasland and Bigault. Similar cases affecting mostly fattening pigs with mild diarrhoea were reported from Belgium (Theuns et al., 2015), Germany (Stadler et al., 2015), Austria (Steinrigl et al., 2015), Portugal (Mesquita et al., 2015), Slovenia (Toplak et al., 2016) and Italy (Bertasio et al., 2016), while in Ukraine a highly pathogenic virus caused an outbreak with a nearly 100% mortality rate among piglets under 10 days of age (Dastjerdi et al., 2015). In Hungary, the first PED outbreak was reported in 1977 by Benyeda et al., and the last PEDV-positive pigs were found in 2009, although there is no report describing that case (EFSA AHAW Panel, 2014). Here, we report the first detection of PEDV in Hungary after many years and provide information about the details of the outbreak and the genetics of the newly identified Hungarian virus.

Materials and methods
At the end of January 2016, greenish to brownish, watery diarrhoea of varying intensity occurred on a 60-sow farrow-to-finish pig farm located in western Hungary. Morbidity of breeding animals reached 100% with no subsequent mortality, but severe hypogalactia became very frequent among lactating sows. All newborn piglets had severe diarrhoea and occasional vomiting, and the mortality of suckling piglets had reached 30% by the end of the outbreak. Several different antibiotics were used without particular effect; still, the clinical signs faded away after about three weeks. The carcass of a piglet and 12 rectal swabs from different age groups (growing pigs, boars, pregnant and lactating sows) were submitted to the Department and Clinic for Production Animals, University of Veterinary Medicine Budapest for pathological and standard microbiological diagnostic examinations.
Rectal swabs and intestinal samples were sent further to the Department of Microbiology and Infectious Diseases for testing for coronaviruses by polymerase chain reaction (PCR). All samples were prepared for PCR using the Viral Nucleic Acid Extraction Kit III (Geneaid Biotech Ltd., New Taipei, Taiwan) and the RevertAid First Strand cDNA Synthesis Kit (Thermo Fisher Scientific Inc., Waltham, Massachusetts, USA) according to the manufacturers' instructions. PCR of the S gene was performed as described previously (Kim et al., 2001). PCR-positive products were submitted to BaseClear B.V. (Leiden, The Netherlands) for sequencing in both directions by the Sanger sequencing methodology. Sequences were edited and analysed using BioEdit version 7.2.5 (Ibis Biosciences, Carlsbad, California, USA). Alignments were built and a phylogenetic tree was computed using the software MEGA version 6.0 (Tamura et al., 2013). Possible recombination events were detected by the use of RDP4 (Martin et al., 2015).

The full-length genome was determined using the intestinal samples by the Laboratory for Molecular Biology of the National Food Chain Safety Office, Veterinary Diagnostic Directorate. RNA extraction was carried out with the MagAttract Virus Mini M48 Kit (Qiagen, Hilden, Germany) on a King Fisher 96 Flex instrument (Thermo Fisher Scientific Inc., Waltham, Massachusetts, USA) according to the manufacturer's instructions. PCR was performed according to Song et al. (2015). All PCR products were submitted to BaseClear B.V. (Leiden, The Netherlands), but five sequences did not give clear results. These PCRs were repeated and the products were submitted to Biomi Ltd. (Gödöllő, Hungary) for sequencing in the same way as mentioned above. Sequences were edited and analysed, and the complete genome was assembled with the programs included in the software DNASTAR version 13 (DNASTAR, Inc., Madison, Wisconsin, USA). The complete genome sequence was submitted to GenBank under accession number KX289955.

Results
Necropsy of the piglet revealed signs of diarrhoea and marked dehydration, together with acute gastroenteritis with small intestinal villous atrophy and crypt hyperplasia. Aerobic bacteriological culture of the small intestinal contents yielded non-haemolytic E. coli in almost pure culture. Rapid diagnostic testing of the intestinal contents for F4, F5, F18 E.
coli fimbriae, rotaviruses, Clostridium difficile and Cryptosporidium sp. (Bio K 353 Rainbow Piglet Scours, Bio-X Diagnostics S.A., Rochefort, France) yielded negative results. Standard aerobic culture of the submitted rectal swabs did not yield pathogenic bacteria, while anaerobic culture for Brachyspira species and pre-enrichment culture for Salmonella species were also negative. PCR of the S gene was positive in five rectal swabs and in the intestinal sample. The rectal swabs did not contain enough material for further testing; therefore, the intestinal sample was chosen for the full-length genome analysis. The sequenced PEDV HUN/5031/2016 shared 99.6% nucleotide (nt) identity with the recently detected European virus FR/001/2014 (Fig. 1). Differences were found mostly in the S gene (Table 1), located mainly in an approximately 400 nt long section, which showed the highest identity of 96% and 95% with the swine enteric coronaviruses (SeCoV) Italy/213306/2009 and SeCoV/GER/L00930/2012, respectively, and only 89% to 91% identity with the above-mentioned European strains. The difference count at the amino acid level is shown in Table 2. Alignments of the S genes of these SeCoVs along with European, American and Asian sequences were analysed by several methods included in RDP4. A significant (P < 0.05) recombination event was detected in HUN/5031/2016 between positions 248 and 640, with the Belgian PEDV 15V010/BEL/2015 as the major parent and the SeCoV Italy/213306/2009 as the minor parent (Fig. 2).
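Since the complete genome is public under accession KX289955, a reader can retrieve it and run rough comparisons with a few lines of Biopython. This is a hypothetical convenience sketch (the email address is a placeholder required by NCBI), and serious identity figures should of course come from proper alignment tools such as those used in this study.

```python
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"   # placeholder; NCBI requires a contact address

def fetch(acc):
    handle = Entrez.efetch(db="nucleotide", id=acc, rettype="fasta", retmode="text")
    record = SeqIO.read(handle, "fasta")
    handle.close()
    return record

hun = fetch("KX289955")            # PEDV HUN/5031/2016 complete genome (this study)
print(hun.id, len(hun.seq))

# Crude identity between two already-aligned, equal-length sequences; real
# comparisons should align first (e.g. with MAFFT) before counting matches.
def identity(a, b):
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a[:n], b[:n])) / n
```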
Discussion
In this report, a PED outbreak in Hungary affecting pigs of all ages is described. The possible source of infection remains unknown, as the potential introduction routes of PEDV were not investigated thoroughly, because the owner of the farm rejected further co-operation and terminated operation after the clinical signs ceased. This was a solitary outbreak, as no similar cases have been reported to us or to other laboratories in Hungary since then. In contrast to recently reported European cases, the Hungarian outbreak occurred in a small farrow-to-finish operation, so suckling piglets and breeding animals were also involved. The viruses were quite similar at the genetic level: PEDV sequence HUN/5031/2016 shared at least 99% nt identity with all recently reported European viruses in Fig. 1, and the differences were found mainly in the S gene (Table 1). In that gene, a possible recombination was detected between PEDV sequence 15V010/BEL/2015, found in diarrhoeic fattening pigs in Belgium in 2015 (Theuns et al., 2015), and SeCoV Italy/213306/2009, reported from Italy in a study of swine enteric coronaviruses, including PEDV and transmissible gastroenteritis virus (TGEV), which also belongs to the genus Alphacoronavirus and causes quite similar clinical signs (Boniotti et al., 2016). That study also reported a recombinant TGEV and PEDV sequence identified as SeCoV, which became the possible minor parent of the new Hungarian virus sequence reported in this paper. Recently, another SeCoV resembling the Italian virus was found in diarrhoeic faecal samples collected in 2012 in Germany (Akimkin et al., 2016). In that study, it is suggested that these viruses can be targets of recombination events, which is a possibility in our case. In the same year, a third SeCoV was found in Central Eastern Europe (Belsham et al., 2016), but its sequence information was not yet available at the time of analysing the Hungarian virus; therefore, it did not affect the results described in this study. The position of the possible recombination in HUN/5031/2016 suggests that it can change the amino acid sequence of the S1 subunit, although further studies are needed to determine the potential consequences for receptor binding and antibody production reflecting these differences.

In conclusion, we detected a novel Hungarian PEDV with a possible recombination in its S gene. More swine enteric coronaviruses from Europe, including PEDV, TGEV and SeCoV, should be identified and full-length genomes submitted to GenBank to determine the phylogenetic relations and the potential origin of the recombination events. As coronaviruses can be potential subjects of such events, confirmed also by this study, diagnostic difficulties can be expected. Further studies are needed to help overcome these diagnostic problems, as well as to improve our knowledge regarding the epidemiological situation of PEDV, with a focus on prevention through co-operation between field veterinarians and different laboratories.

Acknowledgement
This project was funded by the KK-UK 2015 grant of the Hungarian Government.

Fig. 1. Neighbour-joining phylogenetic tree computed with MEGA6 using the Maximum Composite Likelihood method based on the alignment of the S gene of PEDV from 27 sequences downloaded from GenBank and from the virus newly reported in this study. Numbers along the branches indicate the percentage of 1000 bootstrapping replicates (bootstrap support < 80% is not shown). For each sequence, the name is followed by the country and the GenBank accession number. European sequences are marked with light grey and the originally discovered PEDV with dark grey shading. The Hungarian PEDV is indicated in bold.

Table 1. Sequence nucleotide difference count relative to HUN/5031/2016 (GenBank Acc. No. KX289955). ORF: open reading frame, S: spike, E: envelope.

Table 2. Sequence amino acid difference count relative to HUN/5031/2016 (GenBank Acc. No. KX289955). ORF: open reading frame, S: spike, E: envelope.
Anti-Alzheimer potential, metabolomic profiling and molecular docking of green synthesized silver nanoparticles of Lampranthus coccineus and Malephora lutea aqueous extracts

The green synthesis of silver nanoparticles (SNPs) using plant extracts is an eco-friendly method. It is a single-step process and offers several advantages: it is time-saving, cost-effective and environmentally non-toxic. Silver nanoparticles are a type of noble metal nanoparticle with tremendous applications in the fields of diagnostics, therapeutics, antimicrobial activity, anticancer treatment and neurodegenerative diseases. In the present work, the aqueous extracts of the aerial parts of Lampranthus coccineus and Malephora lutea (F. Aizoaceae) were successfully used for the synthesis of silver nanoparticles. The formation of silver nanoparticles was initially detected by a color change from pale yellow to reddish-brown and was further confirmed by transmission electron microscopy (TEM), UV–visible spectroscopy, Fourier transform infrared (FTIR) spectroscopy, dynamic light scattering (DLS), X-ray diffraction (XRD), and energy-dispersive X-ray diffraction (EDX). The TEM analysis showed spherical nanoparticles with a mean size between 12.86 nm and 28.19 nm, and the UV–visible spectroscopy showed a λmax of 417 nm, which confirms the presence of nanoparticles. The neuroprotective potential of the SNPs was evaluated by assessing their antioxidant and cholinesterase inhibitory activity. Metabolomic profiling was performed on methanolic extracts of L. coccineus and M. lutea and resulted in the identification of 12 compounds; docking was then performed to investigate the possible interaction between the identified compounds and the human acetylcholinesterase, butyrylcholinesterase, and glutathione transferase receptors, which are associated with the progression of Alzheimer's disease. Overall, our SNPs showed promising potential in terms of anticholinesterase and antioxidant activity, as a plant-based anti-Alzheimer agent and against oxidative stress.

Nanoparticles have many therapeutic applications, e.g. antimicrobial [23] (S5 Table and S6 Table), antioxidant [24], cytotoxic [25], and anti-inflammatory properties [26]. Recently, nanotechnology has played an important role in the development and improvement of techniques for the diagnosis and treatment of Alzheimer's disease [27]. Several nanoparticles, such as titanium dioxide, silicon dioxide, silver and zinc oxide, have been used for the treatment of neurological disease [28], where oxidative nanoparticles can decrease the activities of reactive oxygen species (ROS) scavenging enzymes such as glutathione peroxidase (GSH-Px), superoxide dismutase (SOD) and catalase in the brains of rats and mice [28]. Moreover, intravenously administered nanoparticles are promising delivery systems for the treatment of neurodegenerative diseases [29]. For example, in the case of Alzheimer's disease, which is a form of dementia resulting in problems regarding memory, cognition, and behavior [30], biodegradable polymeric nanoparticles consisting of polyethylene glycol and/or poly(lactic-co-glycolic acid) and functionalized with specific antibodies [31,32] or oligopeptide drugs [33] have been used to eliminate and prevent the formation of the amyloid fibrils leading to this disease. Therefore, this study was undertaken to investigate the possible anti-Alzheimer and antioxidant activity of the aqueous extracts of L. coccineus and M.
lutea, along with investigating the phytochemical composition of the crude methanolic extracts of the two plants through UPLC-MS metabolomic profiling, followed by molecular docking in order to explore the chemical compounds that might contribute to the anti-Alzheimer and antioxidant activity.

Plant material
Fresh aerial parts of Lampranthus coccineus and Malephora lutea were collected in September 2016 from Engineer Ahmed Helal farm, Sheikh Zayed, Cairo, Egypt, and authenticated by senior botanist Mrs. Therris Labib, head specialist for plant taxonomy, El-Orman botanical garden, Giza, Egypt. The two plants were washed with tap water, and the surface was washed with distilled water until no impurities remained. The clean aerial parts were shade-dried for 20 days at room temperature to remove moisture. The dried aerial parts were pulverized in a clean electric blender to obtain a fine powder and stored in an airtight, amber glass bottle away from sunlight for further use.

Chemicals
All the reagents purchased were of analytical grade and used without any further purification. Silver nitrate (AgNO3) was purchased from Sigma-Aldrich, Germany, with ≥ 99.5% purity, and distilled water was used for the preparation of the aqueous extracts in all experiments.

Human acetylcholinesterase ELISA kit
AChE was determined using the NOVA human acetylcholinesterase (AChE) ELISA kit (Beijing, China), which uses the sandwich-ELISA method. The micro-ELISA strip plate in this kit has been pre-coated with an antibody specific to AChE. Standards or samples are added to the appropriate micro-ELISA strip plate wells and bind to the specific antibody. Then a horseradish peroxidase (HRP)-conjugated antibody specific for AChE is added to each well and incubated. Free components are washed away. The 3,3′,5,5′-tetramethylbenzidine (TMB) substrate solution is added to each well. Only the wells that contain AChE and HRP-conjugated AChE antibody will appear blue in color and then turn yellow after the addition of the stop solution. The optical density (OD) is measured spectrophotometrically using a double-beam V-630 spectrophotometer (Jasco, Japan) at a wavelength of 450 nm. The OD value is proportional to the concentration of AChE, which is then determined by comparing the OD of the samples to the standard curve.

Synthesis of silver nanoparticles using aqueous plant extract
Silver nanoparticles were synthesized by macerating 10 grams of the powdered plant in 100 ml distilled water; the mixture was kept in a water bath at 60 °C for 30 min. The extract was then filtered through Whatman no. 1 filter paper. For the biosynthesis of silver nanoparticles, the extract was added to 1 mM silver nitrate in the ratio 2:10 and kept in a water bath for 10 min at 60 °C [34].

Metabolomic profiling of Lampranthus coccineus and Malephora lutea extracts
Metabolomic profiling was performed on methanolic extracts of L. coccineus and M. lutea according to Abdelmohsen et al. [35,36,37] on an Acquity Ultra Performance Liquid Chromatography system coupled to a Synapt G2 HDMS quadrupole time-of-flight hybrid mass spectrometer (Waters, Milford, USA). Chromatographic separation was carried out on a BEH C18 column (2.1 × 100 mm, 1.7 μm particle size; Waters, Milford, USA) with a guard column (2.1 × 5 mm, 1.7 μm particle size) and a linear binary solvent gradient of 0%-100% eluent B over 6 min at a flow rate of 0.3 mL min−1, using 0.1% formic acid in water (v/v) as solvent A and acetonitrile as solvent B. The injection volume was 2 μL and the column temperature was 40 °C.
To convert the raw data into separate positive and negative ionization files, MSConvert software was used. The files were then imported into the data mining software MZmine 2.10 for peak picking, deconvolution, deisotoping, alignment and formula prediction. The database used for the identification of compounds was the Dictionary of Natural Products (DNP) 2015. Anticholinesterase activity. This study was conducted on adult male Sprague-Dawley albino rats of 130-150 g body weight in compliance with the guidelines for animal experiments set by the ethical committee of the National Research Centre, and animals were treated in accordance with the Canadian Council on Animal Care (CCAC). Unnecessary disturbance of the animals was avoided; the animals were treated gently, and squeezing, pressure and rough maneuvers were avoided. The study was also approved by the Research Ethics Committee for Animal Experimentation, Department of Pharmacology and Toxicology, Faculty of Pharmacy, Helwan University, Egypt (project code 02A2019). The animals were kept under the same hygienic conditions and on a standard laboratory diet consisting of vitamin mixture (1%), mineral mixture (4%), corn oil (10%), sucrose (20%), cellulose (0.2%), pure casein (95%) and starch (54.3%). Animals were randomly classified into 8 groups of 6 animals each and treated according to the following scheme: gp1 received 1 ml saline and served as the normal healthy group; gp2 received 1 ml of 1 mM AgNO3 and served as the control group; gp3 received AlCl3 intraperitoneally (i.p.) at 100 mg/kg body weight (b.wt) once daily for 2 months and served as the demented group; gp4 and gp5 received AlCl3 (i.p.) 100 mg/kg (b.wt) + 20 mg/kg of L. coccineus aqueous extract and aqueous nano extract, respectively; gp6 and gp7 received AlCl3 (i.p.) 100 mg/kg (b.wt) + 20 mg/kg of M. lutea aqueous extract and aqueous nano extract, respectively; and finally gp8 received AlCl3 (i.p.) 100 mg/kg (b.wt) + the Rivastigmine drug (0.3 mg/kg) once daily for 2 months. After the treatment period the rats were anesthetized with Xylazine (10 mg/kg) and Ketamine (75 mg/kg) according to [38]; the animals were then sacrificed by decapitation and the brain hippocampi were rapidly excised for each group, weighed, homogenized in ice-cold phosphate buffer to prepare 10% (w/v) homogenates, and stored at 4˚C for biochemical analysis. Data are presented as mean ± S.E. from six animals in each group. Statistical significance was evaluated using the ANOVA test followed by post hoc Duncan's multiple range test; a probability value of less than 0.05 (P < 0.05) was considered statistically significant. The biological evaluation was carried out after synthesis and characterization of the SNPs. The experimental work is summarized in a flow chart presented in (Fig 1). Antioxidant activity. The antioxidant activity of all extracts was evaluated and compared with that of Rivastigmine, given as a standard drug at a daily dose of 0.3 mg/kg b.wt for 2 months [39,40]. Effect on brain malondialdehyde (MDA) in Alzheimer-induced rats. Thiobarbituric acid (TBA) reacts with malondialdehyde (MDA) in an acidic medium at a temperature of 95˚C for 30 min to form the thiobarbituric acid reactive product, and the absorbance of the resultant pink product can be measured at 534 nm [39,40]. The MDA concentration in the sample is computed as MDA (nmol/g tissue) = (A sample / A standard) × 10 / g tissue used.
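The MDA formula above maps directly to code; a minimal sketch (the absorbance readings and tissue mass are hypothetical, and the GSH formula in the next subsection can be coded the same way):

```python
# Sketch: MDA concentration from the assay formula quoted above.
def mda_nmol_per_g(a_sample, a_standard, g_tissue):
    # MDA (nmol/g tissue) = (A sample / A standard) x 10 / g tissue used
    return (a_sample / a_standard) * 10.0 / g_tissue

# Hypothetical readings: sample OD 0.42, standard OD 0.35, 0.1 g tissue
print(mda_nmol_per_g(a_sample=0.42, a_standard=0.35, g_tissue=0.1))  # 120.0
```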
2.7.2. Effect on brain glutathione (GSH) in Alzheimer-induced rats. The method is based on the reduction of 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) by glutathione (GSH) to produce a yellow compound. The reduced chromogen is directly proportional to the GSH concentration, and its absorbance can be measured at 405 nm. Prior to dissection, the tissue is perfused with a PBS (phosphate buffered saline) solution, pH 7.4, containing 0.16 mg/ml heparin to remove any red blood cells and clots. The tissue is then homogenized in 5-10 ml of cold buffer (50 mM potassium phosphate, pH 7.5, 1 mM EDTA) per gram of tissue using a tissue homogenizer and centrifuged at 4000 rpm for 15 minutes at 4˚C. The supernatant is removed for assay and stored on ice. The GSH concentration is computed as GSH (μg/g tissue) = A sample × 2.22 / g tissue used. Characterization of the synthesized SNPs by TEM. A drop of the silver nanoparticle solution was placed on a copper grid coated with a carbon support film. After drying, the shape and size of the SNPs were analyzed using a transmission electron microscope (TEM), Jeol model JEM-1010, USA, at The Regional Center for Mycology and Biotechnology, Al-Azhar University, Cairo, Egypt. Characterization of the synthesized SNPs by UV-visible spectrometer. The formation of SNPs was monitored by measuring the UV-Vis spectrum of the reaction medium using a double-beam V-630 spectrophotometer, Jasco, Japan, over the wavelength range from 200 to 600 nm, at the College of Pharmacy, Ain Shams University, Cairo, Egypt. Characterization of the synthesized SNPs using FTIR. An FTIR spectrometer (FTIR-8400S, IR Prestige-21, IR Affinity-1; Shimadzu, Japan) at the College of Pharmacy, Cairo University, Cairo, Egypt, was used to characterize the functional groups attached to the surface of the SNPs. Determination of SNPs particle size distribution (Z-average mean) by Zeta sizer using the DLS technique. The nanoparticle size distribution was studied using a Zeta-sizer Nano ZS (Malvern Instruments) in a disposable cell at 25˚C, and the results were analyzed using Zeta-sizer 7.01 software, United Kingdom. X-ray diffraction (XRD) analysis. The crystalline nature of the silver nanoparticles was checked by X-ray diffraction (XRD) analysis using an X-ray diffractometer (Shimadzu Lab, XRD-6000, Japan). Information on the translational symmetry, size and shape of the unit cell is obtained from the peak positions of the diffraction pattern [41]. Energy dispersive X-ray spectroscopy (EDX) analysis. Energy-dispersive X-ray spectroscopy (EDX) of the synthesized SNPs was carried out using a JED-2300T Energy Dispersive X-ray Spectrometer, USA, at the Egyptian Atomic Energy Authority, Cairo, Egypt. Molecular docking. Three crystal structures were selected to study the anti-Alzheimer activity of the ligands. The first crystal structure (PDB ID: 4BDS) is for human butyrylcholinesterase; the 4BDS crystal has a co-crystallized ligand, tacrine (a cholinesterase inhibitor), that was utilized in defining the active site. The second crystal structure (PDB ID: 4M0E) is for human acetylcholinesterase with a co-crystallized ligand (dihydrotanshinone I) that was used to define the active site. The third crystal structure (PDB ID: 4ZBD) is for glutathione transferase; the 4ZBD crystal's binding site was defined by the co-crystallized glutathione. Hence, three docking sites were used to study the binding patterns and affinities of the ligands. In all dockings, a grid box of 40 × 40 × 40 grid points with 0.375 Å spacing was centered on the given co-crystallized ligand.
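As a minimal sketch of setting up the grid described above, the following emits an AutoGrid4 parameter (.gpf) file with the stated 40-point, 0.375 Å grid centered on the co-crystallized ligand; the receptor filename, grid center coordinates, and atom types are hypothetical placeholders, not values from the study:

```python
# Sketch: write an AutoGrid4 .gpf matching a 40x40x40, 0.375 A grid box.
def write_gpf(path, receptor="receptor.pdbqt", center=(10.0, 5.0, -3.0),
              ligand_types=("A", "C", "HD", "N", "OA")):
    stem = receptor.rsplit(".", 1)[0]
    lines = [
        "npts 40 40 40",                       # grid points per dimension
        f"gridfld {stem}.maps.fld",
        "spacing 0.375",                       # grid spacing in Angstroms
        "receptor_types A C HD N NA OA SA",
        "ligand_types " + " ".join(ligand_types),
        f"receptor {receptor}",
        "gridcenter {:.3f} {:.3f} {:.3f}".format(*center),  # ligand centroid
        "smooth 0.5",
    ]
    lines += [f"map {stem}.{t}.map" for t in ligand_types]  # one map per type
    lines += [f"elecmap {stem}.e.map", f"dsolvmap {stem}.d.map",
              "dielectric -0.1465"]
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

write_gpf("receptor.gpf")  # then: autogrid4 -p receptor.gpf -l receptor.glg
```

The genetic-algorithm docking itself would then run via autodock4 with a corresponding docking parameter (.dpf) file.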
Four conformations were generated for each ligand using OpenBabel, and docking was performed via Autodock4 implementing 100 steps of the genetic algorithm while keeping all the default settings provided by AutoDock Tools. Visualization was done using the Discovery Studio program. TEM characterization of the synthesized SNPs of L. coccineus and M. lutea aqueous extracts. The TEM analysis of the silver nanoparticles showed spherical nanoparticles with a mean size ranging from 12.86 nm to 28.19 nm (Fig 2A and 2B). UV-visible characterization of the synthesized SNPs of L. coccineus and M. lutea aqueous extracts. Ag exhibits the highest efficiency of plasmon excitation among metals, e.g., Au and Cu. In addition, the optical excitation of plasmon resonance in nanosized Ag particles is the most efficient mechanism by which light interacts with matter, and silver is also the only material whose plasmon resonance can be tuned to any wavelength in the visible spectrum [42]. The synthesis of SNPs was first confirmed by the color change to reddish-brown due to the surface plasmon resonance (SPR) phenomenon (S1 Fig) [43]. The formation of SNPs was monitored by measuring the UV-Vis spectrum (Fig 2C and 2D) of the reaction medium from 200 to 600 nm. UV-Vis spectroscopy shows that the SPR peaks for green-synthesized SNPs fall between 370 and 435 nm [43]. In the current study, silver nanoparticles were synthesized at different concentrations of the aqueous extracts of L. coccineus and M. lutea using 1 mM silver nitrate. It was observed that increasing the concentration of the aqueous extract increased the absorbance spectra, with a small shift of the SPR peak toward longer wavelengths. At 4 ml of the aqueous extract, the absorbance is maximal and the SPR band occurs at 417 nm [43]. The slight variations in absorbance intensity signify that the changes are due to a change in particle size [44,45]. FTIR characterization of the synthesized SNPs. FTIR has become an important tool for understanding the involvement of functional groups in the interaction between metal particles and biomolecules; it is used to probe the chemical composition of the surface of the silver nanoparticles and to identify the biomolecules responsible for capping and efficient stabilization of the metal nanoparticles. Many functional groups are present which may have been responsible for the bio-reduction of Ag+ ions. The FTIR spectra (Fig 2E, 2F and 2G) show peaks mainly attributed to terpenoids, flavonoids, glycosides, phenols, and tannins, with functional groups such as ketone, aldehyde, carboxylic acid, and others [46]. The presence of these groups increases the stability of the nanoparticles, as these metabolites prevent aggregation and pairing of the nanoparticles. The similarity between the spectra, with some marginal shifts in peak position, clearly indicates the presence of residual plant extract in the sample as a capping agent for the silver nanoparticles. Therefore, it may be inferred that these biomolecules are responsible for capping and efficient stabilization of the synthesized nanoparticles [47]. Determination of the nanoparticle size distribution (Z-average mean) of Lampranthus coccineus and Malephora lutea aqueous and hexane extract SNPs. Using the DLS technique, the z-average mean (d.nm) in the case of the L. coccineus aqueous nano extract was 136 with a polydispersity index (PDI) of 0.282, while in the case of the M.
lutea aqueous nano extract the z-average was 206.7 with a polydispersity index (PDI) of 0.418. The morphology and dimensions of the biosynthesized SNPs were initially characterized using TEM images, as shown in (Fig 2A and 2B), with an average size ranging from 12.86 nm to 28.19 nm. It should be noted that the mean particle size determined by TEM analysis was significantly smaller than that measured by DLS analysis (Fig 2A and 2B & Fig 3I and 3J). This discrepancy is possibly due to the adsorption of organic stabilizers from the extract on the surface of the SNPs, the aggregation of some small particles, and the adsorption of water on the stabilized SNPs [48,49,50]. X-ray diffraction (XRD) analysis. The dry powders of the green-synthesized SNPs were subjected to XRD analysis. The diffracted intensities were recorded at 2θ angles from 4˚ to 90˚. The X-ray diffraction (XRD) spectra are used to confirm the crystalline nature of the SNPs synthesized using the L. coccineus and M. lutea aqueous extracts, and the patterns are exhibited in (Fig 4K and 4L). The XRD spectra clearly indicate that the silver nanoparticles synthesized using the above-mentioned extracts are crystalline in nature. The Bragg reflections of the silver nanoparticles are observed at 2θ values of 31.9˚, 32.5˚, 36.4˚, 38.06˚ and 54.6˚ [51]. No peaks of Ag2O or other substances appear in the XRD spectra (Fig 4K and 4L), so it can be stated that the obtained SNPs had a high purity; the observed peak broadening and noise were most probably due to the nanoscale particle size and the presence of various crystalline biological macromolecules in the aqueous extracts of L. coccineus and M. lutea. All the obtained results confirm that silver ions had been reduced to Ag0 by the aqueous extracts of L. coccineus and M. lutea under the reaction conditions [52]. Energy dispersive X-ray spectroscopy (EDX) analysis. The freeze-dried silver nanoparticles of the L. coccineus and M. lutea aqueous extracts were mounted on specimen stubs with double-sided tape, coated with gold in a sputter coater, and examined under a JED-2300T Energy Dispersive X-ray Spectrometer at 30 kV. EDX analysis of the two samples established the existence of the elemental Ag signal of the SNPs and its homogeneous distribution, revealing a strong signal in the Ag region (Fig 5M and 5N). Metallic silver nanocrystals generally show a typical optical absorption peak at approximately 2.983 keV. There were other peaks for O, Na, S, Cl, and K, suggesting that mixed precipitates are present in the plant extract [52,53]. Metabolomic profiling of the crude methanolic extracts of L. coccineus and M. lutea. Dereplication of the secondary metabolites from the crude methanolic extracts of L. coccineus and M. lutea resulted in the identification of different classes of compounds including alkaloids, flavonoids, and steroidal compounds (S1 Table) and (Fig 6). Anticholinesterase activity. The anti-Alzheimer activity of the L. coccineus and M. lutea aqueous and aqueous nano extracts was evaluated and compared with that of Rivastigmine as a standard [54]. The L. coccineus aqueous and aqueous nano extracts showed the highest anti-acetylcholinesterase activity (1.23 and 0.82 ng/ml, respectively), followed by the aqueous and nano extracts of M. lutea (1.95 and 1.36 ng/ml), in comparison to the standard drug Rivastigmine (0.79 ng/ml), as shown in (Table 1) and (S2 Fig).
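The group comparisons reported here rely on one-way ANOVA with a post hoc test, as stated in the Methods. A minimal sketch follows; Duncan's multiple range test is not available in scipy/statsmodels, so Tukey's HSD stands in, and the group values are made-up illustration numbers, not study data:

```python
# Sketch: one-way ANOVA across treatment groups plus a post hoc comparison.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {                                   # 6 rats per group, as in the study
    "saline":       rng.normal(0.8, 0.10, 6),
    "AlCl3":        rng.normal(2.0, 0.20, 6),
    "AlCl3+nanoAg": rng.normal(1.0, 0.15, 6),
}
F, p = f_oneway(*groups.values())
print(f"ANOVA: F={F:.2f}, p={p:.4f}")        # significant if p < 0.05

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```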
3.9. Antioxidant activity. 3.9.1. Effect of L. coccineus, M. lutea extracts, and the Rivastigmine drug on brain malondialdehyde (MDA) in Alzheimer's disease-induced rats. The antioxidant activity of the L. coccineus and M. lutea aqueous and aqueous nano extracts was evaluated by measuring the amount of malondialdehyde (MDA) and compared with that of Rivastigmine as a standard [39,40]. The aqueous nano extracts of L. coccineus and M. lutea showed the highest antioxidant activity (36.4 and 43.6 nmol/g tissue, respectively), followed by the aqueous extracts of L. coccineus (45.7 nmol/g tissue) and M. lutea (54.8 nmol/g tissue), in comparison to Rivastigmine (34.9 nmol/g tissue) (S2 Table and S3 Fig). 3.9.2. Effect of L. coccineus, M. lutea extracts, and the Rivastigmine drug on brain glutathione (GSH) in Alzheimer's disease-induced rats. The antioxidant activity of the L. coccineus and M. lutea aqueous and aqueous nano extracts was evaluated by measuring the amount of reduced glutathione (GSH) and compared with that of Rivastigmine as a standard [54,55]. Molecular docking. Docking all components of the L. coccineus and M. lutea methanolic extracts into macromolecular targets involved in Alzheimer's pathophysiology helps explain the observed anti-Alzheimer activity. Docking scores for the top-scoring compounds are given in (S4 Table). Further, 3D depictions showing the interactions of one top-scoring molecule with the surrounding amino acids in the active site of each protein target are given in (Figs 7, 8 and 9). For butyrylcholinesterase, the co-crystallized ligand in the crystal structure (4BDS) mainly made π-π interactions with a neighboring tryptophan (Trp82). While several compounds in the extract have high scores (see S4 Table), epicatechin 5-O-beta-D-glucopyranoside-3-benzoate shows the docking pose most similar to that of the co-crystallized ligand. As shown in (Fig 7), it is predicted to interact similarly with the same Trp82 in the active site; in addition, it interacts via H-bonds with Asn83 and Glu197. For acetylcholinesterase, the co-crystallized ligand in the crystal structure (4M0E) made π-π interactions with a neighboring tryptophan (Trp286) and H-bonds with Phe295 and Tyr124. Catechin is also predicted to interact with Trp286, but via H-bonds (see Fig 8); in addition, it forms H-bonds with Ser293 and Tyr341. In conclusion, the high scores and very similar interaction patterns of several ligands in the L. coccineus and M. lutea methanolic extracts with known inhibitors of three enzymes involved in the pathophysiology of Alzheimer's disease provide a molecular explanation for the anti-Alzheimer activity of the extracts. Conclusion. The formation of the synthesized SNPs was confirmed by observing the color change from pale yellow to reddish-brown, and it was also confirmed using different techniques, e.g., TEM, UV-visible spectroscopy, FTIR, DLS, XRD, and EDX. The results obtained from the present study confirm the formation of spherical nanoparticles with a mean size between 12.86 nm and 28.19 nm as estimated by transmission electron microscopy (TEM), and the XRD spectra confirmed that silver ions had been reduced to Ag0 by the aqueous extracts of L. coccineus and M. lutea and that the product is crystalline in nature. The phytochemical constituents of the methanolic extracts of L. coccineus and M. lutea were characterized using UPLC-MS.
A total of 12 compounds were identified, and their neuroprotective activity and antioxidant effect against AlCl3-induced Alzheimer's disease in rats were evaluated. The nanosilver aqueous extracts of L. coccineus and M. lutea showed the highest anti-acetylcholinesterase and antioxidant activity, followed by the aqueous extracts of L. coccineus and M. lutea, which suggests that the nanoparticles were able to cross the blood-brain barrier in the in vivo experiment and decrease the level of acetylcholinesterase and the level of oxidative stress. A molecular modeling study was also conducted to provide insight into the molecular target proteins (acetylcholinesterase, butyrylcholinesterase and glutathione transferase) that could be involved in the mechanism of action of the studied extracts.
5,360.6
2019-11-06T00:00:00.000
[ "Biology" ]
Convolution-LSTM-Based Mechanical Hard Disk Failure Prediction by Sensoring S.M.A.R.T. Indicators. The traditional Infrastructure as a Service (IaaS) cloud platform tends to realize high data availability by introducing dedicated storage devices. However, this heterogeneous architecture has high maintenance costs and might reduce the performance of virtual machines. In a homogeneous IaaS cloud platform, the servers uniformly provide computing resources and storage resources, which effectively solves the above problems, although corresponding mechanisms need to be introduced to improve data availability. Efficient storage resource availability management is one of the key methods to improve data availability. As the mechanical hard disk is currently the main means of data storage in IaaS cloud platforms, timely and accurate prediction of mechanical hard disk failures, with active data backup and migration before failure, would effectively improve the data availability of the IaaS cloud platform. In this paper, we propose an improved algorithm for early warning of mechanical hard disk failures. We first use the Relief feature selection algorithm to perform parameter selection. Then, we use the zero-sum game idea of Generative Adversarial Networks (GAN) to generate minority-class samples and achieve a balance of the sample data. Finally, an improved Long Short-Term Memory (LSTM) model called Convolution-LSTM (C-LSTM) is used to complete accurate detection of hard disk failures and achieve fault warning. We evaluate several models using precision, recall, and Area Under Curve (AUC) value, and extensive experiments show that our proposed algorithm outperforms other algorithms for mechanical hard disk warning. Introduction. At present, Infrastructure as a Service (IaaS) cloud platforms have become the main solution for providing enterprise IT infrastructure. With the development of big data technology and applications, more and more enterprises are beginning to realize the importance of data, so they put forward higher requirements for data availability. The traditional IaaS cloud platform generally introduces dedicated storage devices into the platform to achieve high availability of data storage, and provides virtual machines in cooperation with dedicated computing devices, as shown in Figure 1 [1]. This heterogeneous architecture often leads to two problems: first, it makes the heterogeneity of the platform hardware more significant and increases the operation and maintenance cost and scalability cost of the platform; second, when computing resources and storage resources come from different devices, the connection between the computing resources and the storage resources of one virtual machine has to be based on the network connection among devices, which reduces the performance of the virtual machine. With the proposal of Hyperconverged Infrastructure (HCI), more and more IaaS cloud platforms have begun to adopt a homogeneous architecture, in which the servers uniformly provide computing resources and storage resources, as shown in Figure 2 [1]. This homogeneous architecture can effectively solve the problems encountered by the heterogeneous architecture. However, since there is no dedicated storage hardware for high data availability in the homogeneous architecture, the cloud platform needs to introduce corresponding mechanisms to guarantee data availability.
Realizing high data availability in an IaaS cloud platform mainly involves two aspects: one is data backup, and the other is storage resource availability management. The data backup part mainly introduces backup policy management and backup data management; this part is not the focus of this paper. Furthermore, there are two main types of storage resources in a server: solid-state drives (SSDs) and mechanical hard disks. SSDs provide higher data read and write speeds, yet their cost is high, hence they are often used to realize virtual machine system disks with high performance requirements. Mechanical hard disks, although relatively slow at reading and writing data, are low-cost, hence they are the main way to realize the data storage capacity of IaaS cloud platforms. If we can predict the life of mechanical hard disks more accurately and perform operations such as data backup in a timely manner, we can effectively reduce the risk of damage. Existing mechanical hard disks already provide Self-Monitoring Analysis and Reporting Technology (S.M.A.R.T.), which can be used to sense the operational status of the mechanical hard disk. Furthermore, S.M.A.R.T. provides indicators of the operational status of the various components of the drive, such as heads, platters, motors, and circuits, assisting in the prediction of mechanical hard disk status. Therefore, the key issue to address is how to predict mechanical hard disk life based on S.M.A.R.T. indicators in a timely and accurate manner. In recent years, several researchers have proposed methods for mechanical hard disk failure prediction, mainly divided into mathematics-based methods [2,3] and machine learning-based failure prediction methods [4]. These methods do not adequately consider the problems of removing unnecessary S.M.A.R.T. indicators, the small number of failure samples, and making full use of timing data while predicting mechanical hard disk lifetime. In addition, some studies [5][6][7] have focused on assessing the dynamic reliability and fault prediction of whole systems, while we mainly address fault prediction for the individual hard disk component. In detail, there are three challenges for the fault prediction of hard disks: (1) How to filter the S.M.A.R.T. indicators that have the greatest impact on fault warning. The S.M.A.R.T. indicators of mechanical hard disks are the basis for determining faults; nevertheless, some indicators are not relevant to the failure outcome, and such superfluous indicators are useless and may even distort the final analysis result. (2) How to solve the problem of the imbalanced sample size of failures. Statistically, the annual mechanical hard disk damage rate in data centers is around 2%-5%; therefore, in the sensed data of hard disk operating status, the data related to abnormal status is far less plentiful than that related to normal status. (3) How to make the most of the timing relationships of mechanical hard disk data. Existing warning models for faulty hard disks first use time series data compression to complete feature extraction, and then pass the extracted data into a classifier for classification.
This process has the potential to result in the loss of a large number of valuable features. Therefore, the key problem to be solved in mechanical hard disk failure prediction in this paper is how to timely and accurately predict the service life of a mechanical hard disk, so that data backup or migration can be carried out proactively before failure, thereby improving data availability. To address the above challenges, we first propose the Relief feature selection algorithm to filter indicators and select valuable ones. We then propose a Generative Adversarial Networks (GAN) model to generate minority-class samples. Finally, we propose the Convolution-Long Short-Term Memory (C-LSTM) model to solve the problem of long-term dependence in time-series data and to accurately detect faulty hard disk data. The outline of this paper is as follows: Section II (Related Work) reviews and discusses previous related work; Section III (Algorithm) presents our algorithm; the experimental setup, results, and analysis are presented in Section IV (Experimental Results and Discussion); finally, Section V (Conclusions) concludes this paper. Related Work. Mechanical hard disk failure alerts have become increasingly important with the growth of IaaS cloud platforms. The hard disk is one of the most commonly failing components in today's IT systems, and damage to it can lead to suspension of system services or loss of data. As more and more services run on them, the damage caused by hard disk corruption is increasing every year. 2.1. Anomaly Detection of Mechanical Hard Disks. There are already several methods for detecting anomalies in mechanical hard disks. Yang et al. [8] proposed an evaluation method for comparing feature selection methods and anomaly detection algorithms for predicting hard disk failures. Yu et al. [9] proposed an adaptive error tracking method for hard disk fault prediction. Wang et al. [10] proposed a domain adaptation method to improve fault prediction performance. With the development of deep learning, combined with its many excellent properties, deep learning is now widely used to solve problems in the prediction domain [11][12][13]. How to handle time series data needs to be considered when using deep learning methods to accomplish hard disk failure prediction, and several existing studies have considered this. Hu et al. [14] propose a disk failure prediction system based on a Long Short-Term Memory (LSTM) network; by replacing the input of the LSTM network with the continuous operating records of the disk, the problem of individual variation among disks can be addressed. The work in [2] explored the ability of Decision Trees (DTs) [17] and Gradient Boosted Regression Trees (GBRT) [18] to predict hard disk faults based on S.M.A.R.T. indicators, and experimentally demonstrated that both prediction models have high fault detection rates and low false alarm rates. Chaves et al. [3] present a failure prediction method using a Bayesian network; the method calculates the deterioration of hard disks over time using S.M.A.R.T. indicators to predict eventual failures. De Santo et al. [19] propose a model based on LSTM, which combines S.M.A.R.T. indicators and temporal analysis to estimate the health of a hard disk based on its failure time. Li et al. [20] proposed a combination of XGBoost, LSTM, and ensemble learning algorithms to effectively predict hard disk failures based on S.M.A.R.T. indicators.
In conjunction with S.M.A.R.T., Shen et al. [21] propose a hard disk failure prediction model based on LSTM recurrent neural networks and a new method for assessing the degree of health. The model exploits the long-term time-dependent characteristics of hard disk health data to improve prediction efficiency and efficiently stores current health details and deterioration. In addition to selecting all the attributes of S.M.A.R.T., some studies have also taken the approach of selecting a subset of the attributes. Wu et al. [4] propose the use of information entropy to optimise the S.M.A.R.T. indicators and select the attributes most relevant for prediction, combined with a Multichannel Convolutional Neural Network-Based Long Short-Term Memory (MCCNN-LSTM) model to complete the prediction of hard disk failures. 2.3. Sample Imbalance. The above studies focus on the use of S.M.A.R.T. indicators to detect anomalies and health states of hard disks. In addition, hard disks are healthy for most of their life cycle with relatively few failures, which creates a problem of sample imbalance. GAN-based methods are often used to solve the problem of sample imbalance. Lee and Park [22] proposed a GAN-based fusion detection system for imbalanced data. Xu et al. [23] proposed a convergent Wasserstein GAN to solve the problem of class imbalance in network threat detection. Huang and Lei [24] proposed a novel Imbalanced GAN (IGAN) to deal with the class imbalance problem. In addition to GAN-based approaches, a number of others have proposed solutions to the problem of imbalanced hard disk failure samples. Tomer et al. [25] propose applying machine learning techniques to accurately and proactively predict hard disk failures. Shi et al. [26] proposed a deep generative transfer learning network (DGTL-Net) that integrates a deep generative network for generating pseudo-failure samples and a deep transfer network to solve the problem of hard disk distribution discrepancy, enabling intelligent fault diagnosis of new hard disks. Ircio et al. [27] proposed an optimised classifier to address the severe imbalance between failed and normal hard disk samples. Wang G. et al. [28] propose a multi-instance long-term data classification method based on LSTM and an attention mechanism to solve the problem of data imbalance. Algorithm. Hard disk failure prediction needs to deal with three important points: indicator selection, timing compression, and imbalanced sample processing. The overall process is shown in Figure 3. First, n vectors of characteristic timing parameters are input, and the vector A_i = (a_i1, a_i2, ..., a_it, ..., a_ik) is defined as the time series of input parameter i; correlation analysis is then performed against the failure outcome, and the parameters with higher correlation are selected.
In the time series feature extraction stage, the current mainstream approach is to use single-value compression for continuous time series data over a period, which can be represented as S_t = α·Y_t + (1 − α)·S_{t−1}, where S_t is the accumulated value, Y_t is the data of a node at time t, and α is the smoothing coefficient. Nevertheless, such time series feature extraction is often not enough. The main problem is that earlier data is forgotten progressively faster over time and the ordering of values is not considered, so the data does not play its full role. On the other hand, the processing of imbalanced samples is relatively rough, often using oversampling of the minority classes or undersampling of the majority classes. Oversampling the minority classes changes the probability distribution of the data features, which appears to give excellent performance on the training set but degrades on the test set, resulting in a low recall rate. Undersampling algorithms, clustering, and other methods that remove part of the samples to achieve sample balance often lose important features or reduce the sample size, resulting in overfitting problems. This algorithm is divided into offline model implementation and online data analysis; the detailed algorithm flow is shown in Figure 4. As shown in Figure 4, the algorithm mainly includes offline model generation and online model detection. In the offline model generation stage, historical data is used for parameter selection, then time series features are extracted, the samples are balanced, and finally a discriminant model is generated. In the online detection stage, parameter selection is performed, time sequence features are extracted, model detection is performed, and finally the prediction result is generated. (1) Indicator Selection. The Relief feature selection algorithm is used to filter parameters and select valuable indicators [31]. However, some indicators are not relevant to the failure outcome; such superfluous indicators are useless and may even distort the final analysis result. When performing a hard disk analysis, it is necessary to consider the various complexities faced by hard disks. For instance, the used capacity of a hard disk will gradually increase over time; in addition, the hard disk will slowly deteriorate, although the two are not strongly related, since the used capacity of a hard disk may be adjusted at any time. Therefore, it is essential to select the indicators so as to remove interfering features. To address these issues, we select suitable indicators as model inputs based on the Relief feature selection algorithm [32]. The Relief algorithm focuses on the binary classification problem, which in this paper refers to whether a hard disk has been damaged. We use a "correlation statistic" to measure the importance of a feature. The correlation statistic is a vector, each component of which is the evaluation of one of the initial features, and the importance of a subset of features is the sum of the correlation statistics of each feature in the subset. For the feature measurement problem, Relief borrows the idea of the hypothesis margin, the maximum distance that the decision surface can move while keeping the sample classification constant, which can be expressed as θ(x) = (1/2)(‖x − M(x)‖ − ‖x − H(x)‖) [33], where H(x) and M(x) refer to the nearest neighbors that are of the same class as x (the near-hit) and of a different class than x (the near-miss), respectively.
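A minimal numpy sketch of this selection step, using the near-hit/near-miss scoring just defined together with the per-attribute diff statistic given in the next paragraph (the feature count and data are made up; continuous features normalized to [0, 1] and binary labels are assumed):

```python
# Sketch: Relief relevance scores from near-hit/near-miss neighbors.
import numpy as np

def relief_scores(X, y):
    n, d = X.shape
    scores = np.zeros(d)
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf                                 # exclude the sample itself
        same, diff = (y == y[i]), (y != y[i])
        hit  = np.argmin(np.where(same, dist, np.inf))   # near-hit
        miss = np.argmin(np.where(diff, dist, np.inf))   # near-miss
        # continuous diff(a, b) = |a - b|, squared per the statistic below
        scores += -(X[i] - X[hit]) ** 2 + (X[i] - X[miss]) ** 2
    return scores / n

X = np.random.default_rng(1).random((200, 16))  # e.g. 16 S.M.A.R.T. indicators
y = (X[:, 0] + 0.1 * np.random.default_rng(2).random(200) > 0.5).astype(int)
print(relief_scores(X, y).round(3))              # feature 0 should score highest
```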
We know that when an attribute is favorable for classification, samples of the same class are closer to each other on that attribute, while samples of the opposite class are further apart on it. Suppose the training set D is {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}. For each sample x_i, the nearest neighbor x_{i,nh} of the same category as x_i is calculated, which is called the "near-hit". Then the nearest neighbor x_{i,nm} that is not of the same category as x_i is called the "near-miss", and the relevant statistic for attribute j is [33] δ^j = Σ_i [ −diff(x_i^j, x_{i,nh}^j)² + diff(x_i^j, x_{i,nm}^j)² ], where for discrete attributes diff(a, b) = 0 if a = b and 1 otherwise, and for continuous attributes (normalized to [0, 1]) diff(a, b) = |a − b|. GAN-Based Imbalanced Data Processing. In the daily operation of an IaaS cloud platform system, the number of failed hard disks is relatively small, while the number of normal samples is always large. Statistically, the annual mechanical hard disk damage rate in data centers is around 2%-5%, and a hard disk is healthy most of the time, which results in the raw positive and negative sample data always being imbalanced. Using machine learning methods for failure prediction on imbalanced data sets requires either oversampling the smaller data categories to achieve data balance or undersampling the larger portion of the data. Conventional oversampling algorithms, however, can change the probability distribution of the minority classes, undersampling can lose important features of the majority classes, and overfitting problems can arise from insufficient training data. Examples include the Synthetic Minority Oversampling Technique (SMOTE) [34], which synthesizes new samples for the minority classes based on interpolation, and the use of clustering algorithms to implement undersampling by discarding some samples to alleviate class imbalance. Considering the problems of the original algorithms in dealing with imbalanced data, the innovation of this algorithm is to use the zero-sum game idea of GAN to generate minority-class samples. In the GAN method, the generative network G and the discriminative network D continuously play a zero-sum game in the sense of game theory, which in turn enables G to learn the distribution of the data. Define the distribution P_data(x) of the set of real images, with x being a real image; it is necessary to generate some images that also fall within this distribution. Define the distribution generated by the generator G as P_G(x; θ), with θ being the distribution parameter. The likelihood function of the generative model is then [35] L = Π_{i=1}^{m} P_G(x^i; θ). For the generator G to best reproduce the real images, the likelihood function needs to be maximized; that is, it is necessary to find a θ* that maximizes the likelihood [36]: θ* = arg max_θ Π_{i=1}^{m} P_G(x^i; θ). The generator randomly draws a vector Z and generates a sample X through the generator network, G(Z) = X, which defines the generator's sample space. Then the discriminator D(X) ∈ [0, 1] is used to distinguish the samples generated by the generator from those in the original sample space, and the GAN objective function is [36] min_G max_D V(D, G) = E_{x∼P_data}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))]. Through k rounds of training, the discriminator can accurately distinguish between the original data and the data generated by the generator G.
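The paragraphs below motivate replacing this vanilla GAN game with WGAN for training stability, so the following minimal PyTorch sketch implements the critic/generator game directly in its Wasserstein weight-clipping form. Network sizes, the 16-feature input, and all training constants are illustrative assumptions, and the "real" failure samples are random stand-ins:

```python
# Sketch: WGAN (weight clipping) to synthesize minority-class failure samples.
import torch
import torch.nn as nn

DIM, NOISE = 16, 32  # 16 selected S.M.A.R.T. features; latent noise size

G = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(),
                  nn.Linear(64, DIM), nn.Sigmoid())     # generator
D = nn.Sequential(nn.Linear(DIM, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1))                     # critic (no sigmoid)
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

real_failures = torch.rand(512, DIM)  # stand-in for the real failure samples

for step in range(1000):
    for _ in range(5):  # train the critic more often than the generator
        idx = torch.randint(0, len(real_failures), (64,))
        real = real_failures[idx]
        fake = G(torch.randn(64, NOISE)).detach()
        loss_d = -(D(real).mean() - D(fake).mean())  # neg. Wasserstein estimate
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        for p in D.parameters():                     # weight clipping
            p.data.clamp_(-0.01, 0.01)
    fake = G(torch.randn(64, NOISE))
    loss_g = -D(fake).mean()                         # try to fool the critic
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

synthetic = G(torch.randn(1000, NOISE)).detach()     # oversampled failures
```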
Next, the generator is trained so that it can confuse the discriminator, making real and generated samples indistinguishable. Through multiple rounds of training and adjustment of the discriminator and generator networks, a better model can be achieved. However, GAN training is not stable, and it was difficult to achieve the desired effect in this experiment. Improved variants of GAN exist, such as Deep Convolutional GAN (DCGAN) [37], Wasserstein GAN (WGAN) [38], and Wasserstein GAN with Gradient Penalty (WGAN-GP) [39]. WGAN uses the Wasserstein distance, which has superior smoothing properties compared to the Jensen-Shannon (JS) divergence and alleviates the vanishing gradient problem [23]. In addition, WGAN not only addresses the problem of GAN training instability but also provides a reliable indicator of the training process, and this indicator is highly correlated with the quality of the generated samples. Therefore, we choose WGAN as the method to solve the data imbalance problem. LSTM Network-Based Anomaly Detection and Recognition. Our proposed LSTM network-based model solves the problem of long-term dependence in time-series data and accurately detects faulty hard disk data. The traditional faulty hard disk early warning model uses time series data compression to first extract features, and then transfers the extracted features to a classifier. 3.3.1. The Improved Network Structure of LSTM. The original LSTM network structure only takes into account the temporal sequence of the data. Nevertheless, for hard disks, changes in certain parameters will affect the data of other parameters. Compared to the common LSTM structure, this algorithm borrows from the Convolutional LSTM: convolutional computation is added at the input layer, local perception and pooling are introduced, and the resulting spatial features are input to the LSTM structure together with the original data. The structure of C-LSTM is shown in Figure 5. Considering that this is a multicategory model, the output should be the probability of each category. The values obtained from the neural network are normalized using the Softmax function, placing the results in [0, 1], with larger values corresponding to larger probabilities. The Softmax probability of category i, y_i = P(C_i | x), is calculated as [40] y_i = e^{z_i} / Σ_k e^{z_k}, where z_i is the network output for category i. After the Softmax function has processed the results, our model uses cross entropy as the loss function; the loss for Softmax outputs is [41] L = −Σ_i t_i log y_i, where y_i is the predicted probability of category i and t_i is the corresponding true label. When computing gradients of the loss function, the problem of gradient explosion can arise; our model uses the gradient clipping method [42] to keep the gradients within a certain range. Experimental Results and Discussion. To verify the predictive effectiveness of the algorithm, fault warning experiments were conducted on mechanical hard disks. Data Description. The experimental data is from Backblaze and consists mainly of data gathered by sensors: nearly 30,000,000 daily records from mechanical hard disks over a one-year period in 2017 (the dataset: https://github.com/1210882202/data). The data is mainly S.M.A.R.T. indicators gathered once a day; some of the disks stop reporting S.M.A.R.T. indicators at some point in time, indicating that the mechanical disk has been damaged. The objective of the experiment was to predict whether a disk would become damaged in the future based on the last sixty days of data for these disks.
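A minimal PyTorch sketch of the C-LSTM structure described in Section 3.3, under assumed layer sizes: the convolution output is concatenated with the raw input before a two-layer LSTM with dropout, followed by a fully connected layer, with softmax applied inside the cross-entropy loss:

```python
# Sketch: C-LSTM = Conv1d local features + raw input -> 2-layer LSTM -> FC.
import torch
import torch.nn as nn

class CLSTM(nn.Module):
    def __init__(self, n_features=16, hidden=64, n_classes=2):
        super().__init__()
        # 1-D convolution along time; padding keeps the sequence length
        self.conv = nn.Conv1d(n_features, n_features, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(2 * n_features, hidden, num_layers=2,
                            batch_first=True, dropout=0.5)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                     # x: (batch, seq, n_features)
        c = self.conv(x.transpose(1, 2)).transpose(1, 2)  # local features
        h, _ = self.lstm(torch.cat([x, c], dim=-1))       # raw + conv features
        return self.fc(h[:, -1])              # logits; softmax lives in the loss

model = CLSTM()
x = torch.randn(8, 60, 16)                    # 60 days of 16 indicators
logits = model(x)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))  # softmax + CE
```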
As mechanical hard disks generally deteriorate slowly as their components age, this experiment labels a data window as healthy if the mechanical hard disk is not damaged within fifteen days, and as faulty if the disk is damaged within fifteen days. Based on the sample data, the experiment aims to design a fault warning model with excellent performance in terms of precision, recall, and Area Under Curve (AUC) value. Baseline. To evaluate the performance of C-LSTM against traditional models, traditional classifiers were added to the experiments. Details are as follows: (1) Logistic Regression (LR) [43]. LR is a supervised learning method often used in anomaly detection. It finds the best-fitting model describing one or more independent variables and completes anomaly detection on that basis. (2) Random Forest (RF) [44]. RF is a common anomaly detection method that brings together multiple decision trees; the basic unit is a tree-structured decision tree. With this structure, normal instances can be learned, and instances that are not classified as normal are considered anomalies. The Settings of LSTM. (1) Input and Output. For the input data, the data relevance is first judged using the Relief algorithm to obtain valid 16-dimensional data, and the data sample map is obtained based on the faulty sample generation method. The specific input is a None × Seq × 16 tensor, and the output is a None × 2 tensor. (2) Network Structure. The LSTM network used in the experiments contains two LSTM hidden layers, with a dropout layer added after each hidden layer, followed by a fully connected layer connecting the LSTM and the output, and finally a Softmax layer. The key parameters of the neural network used for the experiments are set as shown in Table 1. 4.3.2. The Settings of C-LSTM. (1) Input and Output Settings. The input data is the same as for the original LSTM. (2) Network Structure. The network used in the experiment adds a convolutional layer after the input layer, whose output is combined with the original input data and fed into the LSTM hidden layer network. The Results of the Experiment. In applying the Relief screening algorithm, attribute-related statistical components are calculated for the indicators gathered by the sensors in the hard disks; the larger the score, the greater the classification power. The statistical components are ranked by size and thresholded to obtain the key indicators needed. First, we analyzed data collected from 26,339 disks over a six-month period, as shown in Figure 6. In Figure 6, the horizontal axis represents each dimension number, and the vertical axis represents the relevance of each dimension to the results, taking values in the range [0, 1], with values closer to zero indicating less relevance to the results. Based on the results of the statistical components obtained in Figure 6, the parameters whose scores are greater than the threshold are selected, and the final set of relevant hard disk indicators is obtained. According to the above analysis, we use the WGAN network to conduct experiments, and the sample generation is shown in Figure 7. The horizontal axis of Figure 7 represents each indicator gathered by the sensors of the mechanical hard disk, and the vertical axis represents time. Darker colors in Figure 7 represent lower indicator values, and lighter colors represent higher indicator values.
As can be seen from Figure 7, the WGAN network uses the principles of game theory to generate samples that are relatively similar to the real ones and can simulate a large amount of information, while differing from direct replication. The experimental results show that using WGAN for feature extraction and regeneration of the overall fault sample solves the problem of sample imbalance and expands the fault sample set. In addition to experimenting on our proposed C-LSTM model, we also experimented with the comparison algorithms. According to the specific experimental setup, the results for the comparison model LR are shown in Figures 8-10, in which different colors show the different scores of LR. Likewise, the results for the comparison model RF are shown in Figures 11-13, in which different colors show the different scores of RF. The results of training the LSTM network model are shown in Figures 14-16. The horizontal axis of the graphs in Figures 14-16 represents the number of training epochs; the vertical axis of the first graph in Figure 14 represents the accuracy during training, and the vertical axis of the second graph represents the training loss. According to the graphs in Figure 14, after 3 epochs the training gradually leveled off (in this paper, we define a loss reduction of no more than 0.1 after 1 epoch as smooth), with the loss converging at around 0.05. Based on Figures 15 and 16, the training accuracy is around 0.91. The results of training the C-LSTM network model are shown in Figures 17-19. The horizontal axis of the graphs in Figures 17-19 represents the number of training epochs; in Figure 17, the first graph's vertical axis represents the training accuracy and the second graph's vertical axis the training loss. After 4 epochs, the training gradually leveled off, with the loss converging at around 0.05 and the training accuracy at around 0.93. Comparing the training results of the LSTM in Figures 14-16 with those of the C-LSTM network model in Figures 17-19, we can conclude that the C-LSTM has faster convergence, lower loss, and higher accuracy. Therefore, from a training perspective, the C-LSTM performs better. The individual classification models were evaluated according to precision, recall and AUC value, and the results are shown in Table 2. In terms of every metric, the C-LSTM model has the best result.
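A minimal sketch of the evaluation used for Table 2, computing precision, recall, and AUC with scikit-learn; the labels and scores below are illustrative stand-ins for real model outputs:

```python
# Sketch: precision/recall/AUC evaluation of a failure-prediction model.
import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_auc_score

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, 500)              # 1 = disk fails within 15 days
y_score = np.clip(y_true * 0.6 + rng.random(500) * 0.5, 0, 1)  # model scores
y_pred = (y_score >= 0.5).astype(int)         # hard decisions at a threshold

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("AUC:      ", roc_auc_score(y_true, y_score))  # uses scores, not labels
```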
Conclusions. Mechanical hard disks are equipped with sensors that monitor their status, and the S.M.A.R.T. indicators gathered by these sensors on the operational status of the various components of the disk can be used to predict the life of the mechanical hard disk. Based on this, we focus on how to accurately predict mechanical hard disk failures and effectively improve data availability in the IaaS cloud platform. The model proposed in this paper includes three parts: the Relief feature selection algorithm, WGAN, and the LSTM model. To remove features from the S.M.A.R.T. indicators of mechanical hard disks that are irrelevant to the failure results, we use the Relief feature selection algorithm to remove interfering features and complete the parameter screening. As the number of failed hard disks in an IaaS cloud platform system is small, we use the zero-sum game idea of WGAN to generate minority-class samples and solve the sample data imbalance problem. Finally, we use the improved C-LSTM model to complete hard disk failure detection and early warning. Through extensive experiments, we constructed our model and evaluated it against other methods using precision, recall, and AUC value. The experiments demonstrate that our proposed algorithm outperforms other algorithms for mechanical hard disk warning. As future work, we aim to extend our approach to SSD-based IaaS cloud platforms. In our proposed approach, we mainly implement mechanical hard disk S.M.A.R.T.-based fault warning through WGAN and LSTM to achieve an effective improvement of data availability in IaaS cloud platforms. However, more and more IaaS cloud platform systems are gradually adopting SSDs in pursuit of significant performance improvements. Accordingly, how to better realize the automation of repair in SSD-based IaaS cloud platforms and how to study the automatic adaptation of parameters are our future goals. Data Availability. All data, models, and code generated or used during the study appear in the submitted article. Conflicts of Interest. The authors declare that they have no conflicts of interest.
7,121.8
2022-10-04T00:00:00.000
[ "Engineering", "Computer Science" ]
RapidSCAT Sigma-0 and Tb Measurements Validation. Scatterometer Radar Backscatter Calibration. Since the first SeaSat-A Satellite Scatterometer (Birer et al., 1982), the Amazon tropical rain forest has been recognized as a spatially large-extent, homogeneous radar calibration target. During the commissioning of NSCAT (1996) and later QuikSCAT (1999), CFRSL worked with the JPL Scatterometer Cal/Val team to perform normalized radar cross section (Sigma-0) calibrations using the Amazon (see Zec et al., 1999-A and 1999-B) [1]. It is important to continue this activity using RapidSCAT to validate the Sigma-0 measurement provided in the L-1A data product; moreover, the time series of these backscatter observations can be used to establish an improved Ku-band Amazon calibration site for future on-orbit radar calibration [2]. Unfortunately, the Amazon radar backscatter (Sigma-0) exhibits a time-of-day dependence that is not well characterized, and for the sun-synchronous polar-orbiting satellites (SeaSat-A, ADEOS-I and QuikSCAT), the observations occur at specific times of day, during the morning and night passes. But now, with the low-earth orbit of the ISS, there is an orderly orbit precession, which allows the region to be uniformly sampled over the 24-hour period [3]. Also, since RapidSCAT employs a conical scanning geometry, we can examine the isotropic nature of Amazon backscatter established by Zec's (1998-A) analysis of NSCAT and later (1999-B) of QuikSCAT observations [4]. Thus, observations collected over the RapidSCAT two-year mission will sample the Amazon with high spatial/temporal resolution, as a function of time of day and over seasons. We propose to analyze these data to develop a high-spatial-resolution Sigma-0 Amazon map that can be used by future satellite radar missions. Introduction. Scatterometer Radiometric Brightness Temperature. Previously, under the QuikSCAT program, CFRSL developed a QRad brightness temperature (Tb) measurement capability included in the scatterometer L-1B data product. This Tb measurement was implemented using the QuikSCAT L-1A and L-1B data products during ground data processing at the JPL PODAAC [5]. The QRad Tb had been shown by Ahmed et al. (2005) to be capable of providing measurements of rain simultaneous with the scatterometer backscatter measurements. As such, it could provide a reliable rain flag (e.g., used in the MUDD rain flag) or independent measurements (Tbh and Tbv) for use in an active/passive OVW retrieval algorithm (Laupattarakasem et al. (2009) and Alsweiss et al. (2011)) [6]. Previously, when SeaWinds flew on ADEOS-II, there was a significant degradation of the QRad Tb's because of physical temperature changes in the QSCAT antenna and waveguides during the orbit day/night periods. CFRSL developed an empirical correction algorithm that was implemented by PODAAC; however, the QRad Tb was adversely compromised, as reported by Rastogi et al. (2005) [7]. The Tb measurement capability on RapidSCAT will also suffer because of the ISS day/night periods, which occur every orbit. However, based upon the CFRSL Intersatellite Radiometric Calibration (XCAL) experience on the TRMM Microwave Imager, an effective procedure has been developed by Biswas (2010) to remove the time-varying orbital Tb bias from the TMI radiometer. We propose to apply this method to RapidSCAT and to perform continuous XCAL with the Global Precipitation Mission Microwave Imager (GMI). These XCAL evaluations will validate the effectiveness of the RapidSCAT Tb algorithm for providing accurate ocean Tb's [8].
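To make the proposed Amazon analysis concrete, here is a minimal numpy sketch of binning Sigma-0 by local time of day; the input arrays are random stand-ins for RapidSCAT L-1A samples over the Amazon, and the averaging is done in linear power, since averaging directly in dB would bias the result:

```python
# Sketch: hourly local-time binning of Amazon sigma-0 observations.
import numpy as np

rng = np.random.default_rng(4)
utc_hours = rng.random(10000) * 24             # observation times (UTC hours)
lons = -60.0 + rng.random(10000) * 10          # Amazon longitudes (deg E)
sigma0_db = -7.0 + rng.normal(0, 0.5, 10000)   # Ku-band backscatter (dB)

local_hour = (utc_hours + lons / 15.0) % 24    # approximate local solar time
bins = np.floor(local_hour).astype(int)        # hourly bins 0..23

lin = 10 ** (sigma0_db / 10)                   # dB -> linear power
diurnal = np.array([10 * np.log10(lin[bins == h].mean()) for h in range(24)])
print(diurnal.round(2))                        # mean sigma-0 (dB) per local hour
```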
The Tb measurement capability on RapidSCAT will also suffer because of the ISS day/night periods, which occur every orbit.However, based upon the CFRSL Intersatellite Radiometric Calibration (XCAL) experience on the TRMM Microwave Imager, an effective procedure has been developed by Biswas (2010) to remove the time varying orbit Tb bias from the TMI radiometer.We propose to apply this method for RapidSCAT and to perform continuous XCAL with the Global Precipitation Mission Microwave Imager (GMI).These XCAL evaluations will validate the effectiveness of the RapidSCAT Tb algorithm for providing accurate ocean Tb's [8]. Objectives This research entails finding a bug in current HiRAD processing software and fixing it. • Level_0a: Dump of raw data into MATLAB mat files: Moments 1 -4 + cross-correlations between phase and quadrature signal components.Most useful are second moments files containing time reference and telemetry information.• Level_0b: Data organized in channels, time stamps and type (warm/cold load, isolator, etc.).• Level_0c: Same as Level_0b, but interpolated to the same time grid since various Level_0b data are in different time slices.Level_0c is the input to processing.Many Tb images show artificial stripes and goal is to detect the source of this error and remove it. Active/Passive Hurricane OVW Retrievals Historically, scatterometer OVW retrievals in hurricanes have badly under estimated ocean surface wind speeds > 30 m/s (hurricane CAT-1).The two main reasons are: Sigma-0 saturation at high wind speeds and the atmospheric attenuation associated with heavy tropical rain bands. Concerning the first, airborne scatterometer measurements in hurricanes by Esteban-Fernandez, et al. (2006) indicate that the Sigma-0 geophysical model function (GMF) increase until wind speeds reach 50 m/s (V-pol) or 70 m/s (H-pol); however these high wind speeds occur in limited spatial extent < 10 km.The smaller antenna beam footprints for RapidSCAT (compared to QuikSCAT) will allow better beam slices that should yield better OVW retrievals. Concerning the rain attenuation issue, research conducted with QuikSCAT by CFRSL by Laupattarakasem et al. (2009) and Alsweiss et al. (2011) has shown promise that combined active/passive OVW retrievals algorithms can significantly improve high wind speed retrievals.Further, with the lower altitude of the ISS and the resulting smaller SCAT antenna footprints will result in improved OVW retrieval performance. We propose to use RapidSCAT hurricane observations with near coincident aircraft under-flights to tune our active/passive OVW retrieval algorithm.Further, after developing this algorithm, it will be applied to SeaWinds on ADEOS-II and QuikSCAT measurements in hurricanes.By cross comparison of these hurricane retrievals with RapidSCAT, we will investigate the effects of spatial scaling on the active/passive OVW retrieval algorithm for future scatterometers [9]. Methodology Forward models are presented which relate HIRAD's raw radiometric self and cross-correlations to the antenna and visibility (V) incident on the antenna and to the brightness temperatures of the internal calibration reference sources.These models are inverted to solve for A T and V using measurements made while viewing the antenna and the calibration sources. Self Correlation Forward Model The basic signal flow through the radiometer for self-correlation measurements made by individual HIRAD receivers is shown in Figure 1. 
In Figure 1, T_A is the incident brightness temperature weighted by the antenna pattern of an individual HIRAD 16-element linear array antenna, L is the net transmissivity of the lossy radome and antenna, T_iso is the physical temperature of the isolator that is connected to the antenna (it is used as an estimate of the physical temperature of the radome and antenna), T_A′ is the effective antenna temperature after propagation through the radome and antenna (including self-emission by the lossy elements), T_W and T_C are the brightness temperatures of the internal warm and cold calibration loads, T_ND is the increase in brightness temperature due to the noise diode (which includes the effects of losses due to power division, cable losses and the directional coupler), G is the receiver gain in units of Kelvins/count, and C_X is the raw 2nd-moment counts when the system is in state X. Possible system states include: X = A (viewing the antenna), W (warm load), C (cold load), W + ND (warm load with noise diode on), and C + ND (cold load with noise diode on) [10].

Note that hardware problems have been identified that affect the stability of the noise diode signal in some channels some of the time. For this reason, a version of the calibration is described here which does not use the noise diode measurements.

The forward model expression for C_X while viewing the antenna is given by

C_A = (T_A′ + T_RX)/G + C_0 (1)

where T_RX is the portion of the receiver noise temperature that is common to the antenna and calibration views, ΔT_RX is the difference between the cal and non-cal receiver noise temperature, C_0 is a possible digital offset, and T_A′ is given by

T_A′ = L T_A + (1 − L) T_iso (2)

The 2nd-moment counts while viewing the calibration states are given by

C_W = (T_W + T_RX + ΔT_RX)/G + C_0 (3)
C_C = (T_C + T_RX + ΔT_RX)/G + C_0 (4)

so that the self-correlation gain follows from the warm/cold deflection as

G = (T_W − T_C)/(C_W − C_C) (5)

Cross Correlation Forward Model

The basic signal flow through the radiometer for cross-correlation measurements made by pairs of HIRAD receivers is shown in Figure 2. Φ_R,i represents the portion of the phase transfer that also affects the injected noise diode signal. A single noise diode is common to all ten HIRAD channels to provide a correlated signal, but its additive brightness temperature is in general different for each channel. G_ij is the cross-correlation gain for the ith and jth channels. C^R_X,ij and C^I_X,ij represent the real and imaginary components of the raw cross-correlation counts when the system is in state X, and G_IQ represents the gain imbalance between the two complex components.
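Before turning to the cross-correlation case, the two-point self-correlation calibration implied by Equations (1)-(5) can be summarized in a short sketch. This is a minimal illustration, not HIRAD's operational code: the function name is hypothetical, and the loss L, isolator temperature T_iso, and cal/non-cal noise difference ΔT_RX are assumed known (e.g., from pre-launch characterization).

```python
def calibrate_channel(c_a, c_w, c_c, t_w, t_c, t_iso, loss, dt_rx=0.0):
    """Two-point calibration of one HIRAD channel (hypothetical helper).

    c_a, c_w, c_c: raw 2nd-moment counts in the antenna, warm, cold states.
    Returns the gain (K/count) and the retrieved antenna temperature T_A.
    """
    # Gain from the warm/cold deflection, Eq. (5).
    gain = (t_w - t_c) / (c_w - c_c)
    # Differencing the antenna and warm-load views cancels the common
    # receiver noise T_RX and the offset C_0, leaving only dT_RX; Eqs. (1), (3).
    t_a_eff = t_w + dt_rx + gain * (c_a - c_w)
    # Invert the lossy radome/antenna model, Eq. (2).
    t_a = (t_a_eff - (1.0 - loss) * t_iso) / loss
    return gain, t_a
```

Differencing against the warm-load counts in this way avoids any dependence on the noise diode, matching the no-ND calibration variant described above.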
In practice, the real and imaginary components of the cross-correlation counts are computed by multiplying and accumulating the proper in-phase and quadrature components of the signals from the two channels. If the in-phase and quadrature components of the signal from the ith channel are defined as I_i and Q_i, respectively, the cross-correlation counts are given by

C^R_ij = ⟨I_i I_j + Q_i Q_j⟩ (6)
C^I_ij = ⟨Q_i I_j − I_i Q_j⟩ (7)

where ⟨·⟩ denotes the accumulation (time averaging). The incident visibility sample corresponding to the baseline separation between the ith and jth antennas is defined as

V_ij = ⟨e_i e_j*⟩ (8)

where e_i is the signal incident on the ith antenna, expressed in brightness temperature units. The incident visibility is attenuated by the lossy radome and antenna in a manner similar to the incident antenna temperature. However, because the visibility is formed by the product of the signals incident on each antenna, it will be attenuated by the geometric mean of the individual transmissivities. Therefore, the effective transmissivity of the lossy radome and antenna with respect to propagation of the visibility V_ij is given by

L_ij = √(L_i L_j) (9)

(In the expressions below, V denotes the visibility after this attenuation.) The complex phase angle of the visibility will be rotated due to differences in phase transfer between the signal paths from the two antennas to the common correlator. The component of the difference in phase transfer that is common to both the antenna and noise diode signals can result from differences in the phase transfer functions of the two receivers, differences in the phase of the local oscillator signals arriving at each receiver's down-conversion mixer, and from differences in the lengths of the transmission lines from the receivers to the correlator. The component of the difference in phase transfer that occurs before the noise diode coupler (and is therefore not tracked by noise diode deflection measurements) can result from differences in the phase transfer functions of the two antennas and beam formers [12].

If the difference in phase transfer between the two antenna signals is given by

ΔΦ_ij = Φ_i − Φ_j (10)

then the measured visibility, V′, will be a rotated version of the actual visibility, V, described by

V′^R_ij = V^R_ij cos ΔΦ_ij − V^I_ij sin ΔΦ_ij and V′^I_ij = V^I_ij cos ΔΦ_ij + V^R_ij sin ΔΦ_ij (11)

The rotated visibility, V′, is what is actually correlated. The forward model expressions for the real and imaginary parts of the raw correlation counts while viewing the antenna are

C^R_A,ij = V′^R_ij / G_ij + C^R_0,ij and C^I_A,ij = V′^I_ij / (G_IQ G_ij) + C^I_0,ij (12)

where C^R_0,ij and C^I_0,ij are residual offsets in the correlation counts, due at least in part to offsets between the analog and digital ground potentials at the digitization stage. These offsets should in practice be quite small and stable [13].

The cross-correlation counts measured while viewing the warm and cold calibration sources are given by

C^R_W,ij = C^R_0,ij and C^I_W,ij = C^I_0,ij (13)
C^R_C,ij = C^R_0,ij and C^I_C,ij = C^I_0,ij (14)

Equations (13) and (14) assume there is no correlated signal present in either the warm or cold load calibration states, because the loads are different for each channel and so generate uncorrelated noise. A common noise diode is split by an in-phase power divider and coupled into the receiver front end of each channel, as noted schematically in Figure 2.
The maximum possible cross-correlation between the noise diode signals in the ith and jth channels is given by

T_ND,ij^max = √(T_ND,i T_ND,j) (15)

where T_ND,i is the increase in brightness temperature due to the noise diode for the ith channel. This maximum is achieved if the two noise diode signals incident on the cross-correlator are in phase. Ideally, the noise diode signal should be injected in phase into each receiver (and cable lengths have been carefully matched to ensure this). However, differences in phase transfer between channels, similar to those experienced by the antenna signals noted above, will cause the complex correlation between the noise diode signals in each channel to also rotate. The rotation angle may differ from that of the antenna signals because they do not share identical propagation paths. If the complex phase rotation angle of the noise diode signal is ΔΦ_R,ij = Φ_R,i − Φ_R,j, then the measured complex cross-correlation will be given by

T_ND,ij = √(T_ND,i T_ND,j) (cos ΔΦ_R,ij + j sin ΔΦ_R,ij) (16)

where it is assumed that there is no intrinsic imaginary component of the cross-correlation prior to the differential phase change [14].

Visibility Gain Calibration

The visibility calibration algorithm is derived by appropriate manipulation of Equations (8)-(14). The cross-correlation offsets, C^R_0,ij and C^I_0,ij, are determined directly from either the warm or cold load counts according to (13) and (14). The cross-correlation gain is found as the geometric mean of the appropriate pair of self-correlation gains. If G_i is the self-correlation gain for receiver i, given by (5), then

G_ij = √(G_i G_j) (17)

The rotated visibility, V′, is found from Equation (12) as

V′^R_ij = G_ij (C^R_A,ij − C^R_0,ij) and V′^I_ij = G_IQ G_ij (C^I_A,ij − C^I_0,ij) (18)

Visibility Phase Calibration

The complex phase rotation angle of the noise diode correlation, ΔΦ_R,ij in Equation (16), is given by the noise diode deflection counts as

ΔΦ_R,ij = tan⁻¹[(C^I_W+ND,ij − C^I_W,ij) / (C^R_W+ND,ij − C^R_W,ij)] (19)

where the tan⁻¹ function should take account of the possibility that the rotation angle will lie in the 3rd or 4th quadrant of the complex plane if the denominator in Equation (19) is negative.

Phase calibration of the visibility measured while viewing the antenna should ideally consist of counter-rotating the measured visibility, V′, by a complex angle −ΔΦ_ij (see Equation (11)). However, variations in the difference between the portions of the ith and jth antenna signal paths that lie outside the noise diode calibration loop (i.e. variations in Φ_A,i − Φ_A,j) are assumed to be negligible, and phase calibration of the visibility only corrects for variations in ΔΦ_R,ij. The phase calibration algorithm is given by

V^R_ij = V′^R_ij cos ΔΦ_R,ij + V′^I_ij sin ΔΦ_R,ij and V^I_ij = V′^I_ij cos ΔΦ_R,ij − V′^R_ij sin ΔΦ_R,ij (20)

where V′^R_ij and V′^I_ij are given by Equation (18) and ΔΦ_R,ij by Equation (19).

Figure 1. Signal flow diagram for HIRAD self-correlation measurements.
Figure 2. Signal flow diagram for HIRAD cross-correlation measurements. In Figure 2, incident T_A signals enter the ith and jth channels of HIRAD. The antenna transmissivity, L, and the warm and cold reference brightness temperatures, T_W and T_C, are in general different for each channel. The total phase transfer of the signal from the ith antenna to the correlator is Φ_i = Φ_A,i + Φ_R,i.
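As a compact summary of the calibration chain above, the sketch below applies Equations (13), (14) and (17)-(20) to one baseline. It is an illustrative reading of the algorithm, not HIRAD's operational code: the function name and the packing of the real/imaginary counts into Python complex numbers are assumptions.

```python
import numpy as np

def calibrate_baseline(c_a, c_w, c_wnd, g_i, g_j, g_iq=1.0):
    """Gain and phase calibration for the (i, j) baseline (illustrative).

    c_a, c_w, c_wnd: complex raw cross-correlation counts (real + j*imag)
    in the antenna, warm-load, and warm-load-plus-noise-diode states.
    g_i, g_j: self-correlation gains of the two channels, per Eq. (5).
    """
    # Offsets come directly from a calibration-load view, Eqs. (13)-(14).
    c0 = c_w
    # Cross-correlation gain as the geometric mean of channel gains, Eq. (17).
    g_ij = np.sqrt(g_i * g_j)
    # Rotated visibility from offset-corrected antenna counts, Eq. (18);
    # g_iq corrects the I/Q gain imbalance on the imaginary component.
    v_rot = g_ij * ((c_a.real - c0.real) + 1j * g_iq * (c_a.imag - c0.imag))
    # Receiver phase difference from the noise diode deflection, Eq. (19);
    # arctan2 resolves the quadrant ambiguity noted in the text.
    d = c_wnd - c_w
    dphi = np.arctan2(d.imag, d.real)
    # Counter-rotate to remove the receiver phase difference, Eq. (20).
    return v_rot * np.exp(-1j * dphi)
```

Tracking dphi over time and reapplying the counter-rotation removes drifts in the receiver phase difference, while the untracked antenna-side term Φ_A,i − Φ_A,j, assumed stable, is left uncorrected, as stated in the text.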
A Real-Time Pinch-to-Zoom Motion Detection by Means of a Surface EMG-Based Human-Computer Interface

In this paper, we propose a system for inferring the pinch-to-zoom gesture using surface EMG (electromyography) signals in real time. Pinch-to-zoom, which is a common gesture in smart devices such as an iPhone or an Android phone, is used to control the size of images or web pages according to the distance between the thumb and index finger. To infer the finger motion, we recorded EMG signals obtained from the first dorsal interosseous muscle, which is highly related to the pinch-to-zoom gesture, and used a support vector machine for classification among four finger motion distances. The powers estimated by Welch's method were used as feature vectors. In order to solve the multiclass classification problem, we applied a one-versus-one strategy, since a support vector machine is basically a binary classifier. As a result, our system yields 93.38% classification accuracy averaged over six subjects. The classification accuracy was estimated using 10-fold cross validation. Through our system, we expect to not only develop practical prosthetic devices but also to construct a novel user experience (UX) for smart devices.

Introduction

Gesture recognition is one of the most interesting research areas because of its utility in the human-computer interface (HCI) field. Systems based on visual or mechanical sensors have been commonly employed as modalities for hand and finger movement recognition [1,2]. For example, force-sensitive resistors have often been used for sensing finger and hand gestures [2]. In recent years, many researchers have tried to construct hand and finger gesture recognition systems based on the surface electromyogram (sEMG), which detects the motor unit action potential (MUAP) derived from different motor units during muscle contraction [3]. Since hand and finger movement is a result of the electrical activities of muscle cells, sEMG can be used to estimate the dynamics of our hands and fingers. sEMG has the advantage of convenient and safe use on the skin because of its noninvasive characteristics [1,4,5]. Moreover, sEMG has a better signal-to-noise ratio (SNR) compared to other neural signals [1]. For these reasons, sEMG-based HCI is considered the most practical technology among neural signal-based HCIs. Almost all the studies on sEMG-based motion recognition have focused on arm and hand movement. For example, a study by Englehart et al. classified extension and flexion conditions of both arm and wrist based on wavelet analysis and principal component analysis (PCA) [6]. Englehart and Hudgins also classified four arm and wrist motions using the zero crossing rate and absolute mean value as feature vectors for a classifier [4]. Momen et al. constructed a real-time classification system for discriminating the various types of hand movements using sEMGs recorded from forearm extensor and flexor muscles [7]. The classification algorithm and feature vector used were the fuzzy C-means clustering algorithm and the natural logarithm of the root mean square value, respectively. In addition to the above studies, many researchers have tried to classify hand and arm movements using machine learning techniques such as linear discriminant analysis (LDA), artificial neural networks (ANN) and support vector machine (SVM) classifiers.
The waveform length, Willison amplitude, root mean square, wavelet coefficients and so on are commonly used as classifier features for recognizing hand and arm movement [8,9]. Even though many researchers have focused on recognizing hand movement, finger movement based on sEMG has also been studied because of its potential utilization in HCI and prosthetic devices. Uchida et al. used FFT analysis and neural networks to classify four finger motions [10]. Nishikawa et al. used the Gabor transform and the absolute mean value to extract the features and classify six finger motions in real time, with learning based on neural networks [11]. Nagata et al. used absolute sum analysis, canonical component analysis, and minimum Euclidean distance to classify four wrist and five finger gestures [12]. Chen et al. used mean absolute values (MAV), the ratio of the MAVs, an autoregressive (AR) model, and linear Bayesian classification to classify 5-16 finger motions [13]. Al-Timemy et al. used time-domain autoregression features and orthogonal fuzzy neighborhood discriminant analysis for recognizing finger movements based on sEMG. They showed that the abduction of finger and thumb movements can be successfully classified with few electrodes [14]. Some researchers devised wearable devices, such as arm- and wristbands, which recognize finger gestures. Based on their wearable systems, they developed applications to control music players and games and to interpret sign language [15][16][17]. Although these wearable systems worked successfully, they used multiple electrodes for recognizing multiple finger gestures, so they are not appropriate for real-life applications. In addition, previous studies have only concentrated on recognizing simple movements such as an extension or flexion of fingers, but there is a need to recognize more complex movements for practical applications. In our present study, we propose a real-time pinch-to-zoom gesture recognition system based on sEMG signals recorded through an electrode. Pinch-to-zoom, which is a common gesture used in smart devices, such as iPhones and Android phones, is used to control the size of images or web pages according to the distance between the thumb and index finger (Figure 1). To infer the pinch-to-zoom gesture, we recorded sEMG signals from the first dorsal interosseous muscle and used multiclass classification techniques. Through our system, we expect to be able to not only develop practical prosthetic devices, but also to construct a novel user experience (UX) for smart devices.

Figure 1. Scheme for the pinch-to-zoom gesture. The sEMG signal, which is highly related to the pinch-to-zoom gesture, is obtained from the first dorsal interosseous muscle. In this figure, d denotes the distance between the thumb and index finger.

The paper is organized as follows: in Section 2, we describe the configuration of the hardware and software for our system. Section 3 provides details of the experimental procedure and the algorithms used for recognizing the pinch-to-zoom gesture. Section 4 provides the results of this experiment and the interpretation of our results.

System Summary

The purpose of this system is to record muscle movement using sEMG and use it to recognize the pinch-to-zoom gesture in real time. The overall system consists of a sensor interface part and a computational unit part.
The sensor interface part includes a set of bipolar sEMG sensors, a microcontroller (ATmega328, Atmel Corporation, San Jose, CA, USA), and a Bluetooth module (Parani ESD-200, Sena Technologies, Seoul, Korea). The sEMG sensors are placed on the first dorsal interosseous muscle, which is closely related to the contraction of the thumb and index finger. The raw sEMG signal is transmitted to a computer system (Core i5, Windows 7) using Bluetooth without any data loss. The software in the computational unit is developed based on Matlab (MathWorks, Natick, MA, USA). Our software provides noise reduction, feature extraction, and multiclass classification. The classification procedure is divided into training and testing sessions. The computer monitor displays instructions for finger movement during a training session. After the training session, the classifier provides a visualization of the distance between the thumb and index finger in real time. A detailed description of the 4-class classifier for this system is provided in Section 3.4. The classifier recognizes the distance between the two fingers at four levels (0 cm, 4 cm, 8 cm, and 12 cm). According to the level, the picture displayed on the computer monitor changes in real time. The overall system configuration is shown in Figure 2.

Figure 2. System configuration for detecting the pinch-to-zoom gesture in real time. The total system consists of a sensor interface device and a computational unit. In the sensor interface device, EMG is recorded from the first dorsal interosseous muscle and transmitted to the computational unit. In the computational unit, features are extracted from the sEMG and classified.

Software Settings

The software was developed and implemented in Matlab for acquiring data, extracting the features, and estimating the distance between the thumb and index finger using machine learning. The following functions and tasks are performed in real time: (1) acquiring and displaying the raw sEMG data wirelessly transmitted from the sensors; (2) preprocessing the collected raw sEMG data for removing noise; (3) extracting features that are highly related to the pinch-to-zoom gesture; and (4) performing 4-class classification using a support vector machine (SVM) based on the one-versus-one (OvO) strategy. Figure 3 shows the graphical user interface for the Matlab implementation of the proposed system, displaying: (1) the raw sEMG; (2) the preprocessed EMG; (3) the power spectral density (PSD); and (4) the distance between the thumb and index finger.

Subjects and Settings

Six healthy subjects (mean age 27 years) were recruited among the graduate students at Gwangju Institute of Science and Technology (GIST). None of the subjects had experienced any muscular or neurological disorder that could affect our experimental results. All but one (S4) of the subjects were right-handed. Before the main experiment, a pre-test was conducted so that the subjects could familiarize themselves with the experimental protocol. All data were acquired at GIST, and a set of bipolar EMG electrodes, placed on the first dorsal interosseous muscle, was used for the EMG recording. The sampling rate was set at 1000 Hz, and all subjects were asked to sit in an armchair during the recording time to prevent noise.

Experimental Procedure

During the experiment, our software presents four types of visual cues (0 cm, 4 cm, 8 cm, and 12 cm) to the subjects.
In order to avoid the subjects' prediction of the following visual cue, cue signs for 0 cm, 4 cm, 8 cm, and 12 cm were randomly displayed to the subjects through the computer monitor. All subjects were asked to perform a pinch-to-zoom gesture and maintain the distance between thumb and index finger according to the visual cue sign presented. A single trial consisted of pre-recording, recording, and an intertrial interval. A cue sign was provided for 1.5 s, and the first 0.5-s interval was reserved for gesture preparation. Only sEMG data during the recording period were used for further analysis. The intertrial interval was set to 1 s to prevent the overlap of EMG responses to successive visual cues (see Figure 4). sEMG data were acquired from 100 trials per visual cue, so a total of 400 trials per subject was used for further analysis.

Pinch-to-Zoom sEMG Data Analysis

As a preliminary investigation, we analyzed the statistical significance of the observed power spectrum in the four experimental conditions (0 cm, 4 cm, 8 cm, 12 cm) over all subjects. The power spectral density for each cue was estimated using Welch's method (Figure 5). Figure 5a shows that the amplitude of the sEMG, which is normalized from −10 to 10, increased as the distance between the thumb and index finger became shorter. An ANOVA test was conducted to identify the statistically significant frequency bands. As a result, the powers in all frequency bands from 1 Hz to 250 Hz are statistically different (p < 0.01) between the four experimental conditions (Figure 5b). For this reason, we assumed that the powers of the observed EMG data are a suitable feature for recognizing the pinch-to-zoom gesture.

Classifier

SVMs, proposed by Vladimir Vapnik, are a popular technique for pattern classification. The general concept of SVMs is to find the hyperplane that maximizes the margins between the nearest training points. Assume a decision hyperplane as follows [18,19]:

w^T x + b = 0

where x is a feature vector, x = (x1, …, xd)^T, w is a normal vector of the hyperplane, and b indicates the bias. The cost function of this problem can be expressed as follows:

minimize (1/2)||w||^2 subject to ω_i (w^T x_i + b) ≥ 1 for all training samples x_i

where ω_i ∈ {−1, +1} is the class of sample x_i. Since SVMs are basically based on two-class classification, several hyperplanes have to be used for solving an N-class problem (N > 2). In this study, we chose the OvO strategy for recognizing the pinch-to-zoom gesture. The strategy constructs one classifier per pair of classes, i.e., the OvO strategy trains N(N−1)/2 classifiers for an N-class classification problem. Since the number of classes, N, for our study was four (0 cm, 4 cm, 8 cm, and 12 cm), we obtained six binary classifiers using the training samples (see Figure 6).

Figure 6. Diagram of the classification algorithm for 4-class classification based on the "one-vs-one" strategy. The classification procedure consists of a training phase and a testing phase. In the training phase, our classification algorithm trains a total of six binary classifiers (0 cm vs. 4 cm, 0 cm vs. 8 cm, 0 cm vs. 12 cm, 4 cm vs. 8 cm, 4 cm vs. 12 cm and 8 cm vs. 12 cm). In the testing phase, the sEMG response to an unknown class is used as the input to the six binary classifiers. The algorithm finds the majority class from the outputs of the six classifiers. Namely, the 4-class classification algorithm decides the distance between thumb and index finger by majority vote.

Experimental Results

EMG data for a total of 400 trials per subject were used for proving the utility of our system.
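A minimal sketch of this processing chain (band-pass filtering, Welch power features, and a one-versus-one SVM evaluated with 10-fold cross validation) is given below. The original implementation was in Matlab; this Python version with scipy/scikit-learn is only illustrative. The filter settings follow the preprocessing described in the next section, while the Welch segment length and the SVM kernel are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 1000  # sampling rate in Hz, as in the recording setup

def extract_features(trial):
    """Band-pass filter one sEMG trial and return its Welch power spectrum."""
    # 4th-order Butterworth; upper edge kept just below Nyquist (~495 Hz).
    b, a = butter(4, [20 / (FS / 2), 0.99], btype="band")
    filtered = filtfilt(b, a, trial)
    _, psd = welch(filtered, fs=FS, nperseg=256)  # segment length assumed
    return psd

def evaluate(trials, labels):
    """10-fold cross-validated accuracy of a one-vs-one SVM on PSD features.

    trials: array of shape (n_trials, n_samples); labels: distances in cm.
    """
    X = np.array([extract_features(t) for t in trials])
    clf = SVC(decision_function_shape="ovo")  # SVC trains one-vs-one pairs
    return cross_val_score(clf, X, labels, cv=10).mean()
```

For a 4-class problem this trains the same six pairwise classifiers enumerated in Figure 6, with the final label chosen by majority vote over their outputs.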
As a preprocessing procedure commonly used for sEMG, IIR band-pass filtering was applied to all the raw EMG data (Butterworth filter, order: 4, bandwidth: 20-500 Hz). The highpass and lowpass filtering serve, respectively, to remove movement artifacts, which are typically dominant below 10 Hz, and to avoid signal aliasing related to high-frequency components [20]. The power spectral densities were estimated using Welch's method for feature extraction. Based on the result obtained in Section 4.1, the powers estimated by Welch's method were used as the feature vectors. All the data were divided into a training and a test set, and only the training set was used for constructing the classifier. We repeated this procedure ten times with different random partitions to calculate the classification accuracy (10-fold cross validation). The classification accuracies for the six subjects are shown in Table 1, where the highest classification accuracies among subjects are indicated in bold. The right-most column in Table 1 gives the overall 4-class classification accuracy rather than just the mean of the six binary classification accuracies. Mean correct rates were always higher than 91.97%. These results clearly justify the utility of our system for recognizing the pinch-to-zoom gesture in real time.

Discussion

Since an HCI based on sEMG interprets and transforms the action potentials induced by the movement of muscles into control commands for computer devices, many researchers consider an sEMG-based computer interface a natural means of HCI [1,21,22]. Most studies on gesture recognition based on sEMG have focused on wrist and arm motion detection. Our present study, however, tried to recognize finger motion using sEMG in real time. Unlike existing studies, which have concentrated on detecting the flexion or extension of fingers, we constructed a real-time pinch-to-zoom gesture detection system for practical applications. Classification of sEMG responses in a single trial is very challenging because of the low SNR of the signal; therefore, signal processing techniques were required to extract task-related responses from the raw sEMG signal. The overall procedure described in our study includes noise rejection, feature extraction, learning, and testing. First, IIR band-pass filtering was applied to the raw sEMG data to reject noise. Next, we estimated the power spectral densities of the filtered sEMG using Welch's method. Considering that the power of sEMG increases when a muscle is contracted, the power can be an appropriate indicator of task-related features. As shown in Figure 5, the powers are significantly different between the four conditions. Therefore, we assumed that the powers are an appropriate feature for identifying finger motor tasks. Since an SVM was originally designed only for classifying two classes, it is necessary to construct a strategy for multiclass classification based on SVMs. In this study, we selected an OvO strategy because of its outstanding performance. The performance of our system was evaluated through 10-fold cross validation, and the mean correct rate over all subjects was 93.38% for 4-class classification. All experiments were conducted in Matlab. In order to construct a myoelectric interface for real-life use, some critical issues should be considered.
First, we should consider that most myoelectric interfaces are not appropriate for multi-user situations because sEMG signals are user-dependent. Since the skin impedance, the thickness of subcutaneous fat, and the way muscles are moved for the same gesture differ considerably among users, different classifiers have to be trained for individual users. This inconvenience of standard myoelectric interfaces makes them impractical; therefore, it is necessary to design a myoelectric interface for multiple users [23][24][25][26][27]. In our present study, we also tested the classification performance of our system for multiple users. We used the sEMG signals of one subject as the test set and the sEMG signals of the remaining subjects as the training set. We repeated this process for all subjects and derived the averaged classification accuracy. As a result, the averaged recognition rate was 41.36% ± 3.43%. Although this result is well above the 25% chance level for four-class classification, it is not enough for real-life application. Therefore, in a future study, we will develop novel algorithms, such as bilinear modeling, in order to extract user-independent factors from sEMG signals for multi-user interfaces [28]. The second problem which has to be solved for practical application is the displacement of the electrodes. For recognizing gestures using an sEMG-based system, it is necessary to acquire the task-related sEMG signal from a consistent muscle during training and testing. If electrodes are placed in the wrong position, the performance of the classifier may decline significantly. However, in the case of finger gesture recognition, it is very challenging to place the electrodes on exactly the same muscles, since the muscles related to finger movements are usually very small. In this study, we recorded sEMGs on the first dorsal interosseous muscle, which is located between the thumb and index finger. Since the first dorsal interosseous muscle is the largest and strongest among the dorsal interosseous muscles, it can be easily found in all subjects, and the SNR of the sEMGs recorded from it is better than the SNR of sEMGs recorded from the other dorsal interosseous muscles. When the distance between thumb and index finger becomes minimal, this muscle is maximally contracted and becomes swollen; therefore, we can easily find the specific location of the first dorsal interosseous muscle. This means that by using the sEMGs recorded from the first dorsal interosseous muscle, we can conveniently acquire pinch-to-zoom gesture-related sEMG signals from a consistent muscle for all subjects. Another obstacle for a practical application is how to select the appropriate number of classes. Since there is a trade-off between the number of classes and the classification performance of a classifier, myoelectric devices usually recognize gestures as two classes, such as extension and flexion. Even though this approach shows good classification performance in a laboratory environment, two classes are not enough for real applications. Our study classified the pinch-to-zoom gesture into four classes (0 cm, 4 cm, 8 cm and 12 cm). Although four classes may still not be enough to recognize smooth pinch-to-zoom gestures, recognizing smooth pinch-to-zoom is not imperative for practical applications, so we chose only four distinct classes, which show a high classification rate.
However, in a future study, we will try to construct a system that recognizes the pinch-to-zoom gesture with more than four classes while keeping high classification rates. As a practical application, we developed software to control a presentation program (Powerpoint 2010, Microsoft, Redmond, WA, USA) based on our system. In this application, the results of the classifier (0 cm, 4 cm, 8 cm and 12 cm) are transformed into the commands "run slideshow", "move to previous slide", "move to next slide", and neutral (see Figure 7). We used this tool for a 20-min presentation without any errors, which shows that our system can be used in real-life applications. In addition, since the first dorsal interosseous muscle is highly related to clicking motions as well as pinch-to-zoom gestures, our system can also be used for recognizing the clicking motion, i.e., the tapping of the index finger. Therefore, our system was also successfully utilized for controlling the presentation software based on the clicking motion with the same hardware and software. In this system, when subjects tap their index finger, the presentation program moves to the next slide.

Figure 7. Snapshots of the application to control Powerpoint 2010 based on the pinch-to-zoom recognition system. (a) Scenario to run a slideshow. In this case, our system transforms the 0 cm result of the classifier into the command "run slideshow" and the others (4 cm, 8 cm and 12 cm) into neutral commands; (b) Scenario to move slides. In this case, our system transforms the 12 cm result of the classifier into the command "move to previous slide", 0 cm into "move to next slide", and both 4 cm and 8 cm into neutral.

Considering the superior classification accuracy and low computational load, we expect that this system can be used in many types of applications, such as smart device control, robot arm control, sign language recognition, and game applications. For example, the system allows users to control web browsers or video actions on smart phones without touching the screen. Furthermore, this system has huge potential as a game controller, because the video game industry requires quick and intuitive interfaces that can be used as game controllers. Existing devices have many physical buttons that require a lot of effort to master. Our system, however, can directly transform the movement of a user into the movement of a character in a video game.
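The mapping from classifier output to presentation commands in the Figure 7(b) scenario reduces to a small lookup step; a hypothetical sketch is shown below. The function and the hook into the presentation software are illustrative, not part of the published system.

```python
# Mapping from predicted distance (cm) to commands, per Figure 7(b);
# 4 cm and 8 cm are neutral and produce no action.
COMMANDS = {0: "move to next slide", 12: "move to previous slide"}

def handle_prediction(distance_cm):
    """Dispatch a presentation command for one classifier output."""
    command = COMMANDS.get(distance_cm)
    if command is not None:
        print("issuing:", command)  # replace with a call to the slide software
```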
Pre-treatment minority HIV-1 drug resistance mutations and long term virological outcomes: is prediction possible?

Background

Although the use of highly active antiretroviral therapy in HIV-positive individuals has proved to be effective in suppressing the virus to below the detection limits of commonly used assays, virological failure associated with drug resistance is still a major challenge in some settings. The prevalence and effect of pre-treatment resistance-associated variants on virological outcomes may also be underestimated because of reliance on conventional population sequencing data, which excludes minority species. We investigated long term virological outcomes and the prevalence and pattern of pre-treatment minority drug resistance mutations in individuals initiating HAART at a local HIV clinic.

Methods

Patients' records of viral load results and CD4 cell counts from routine treatment monitoring were used, and additional pre-treatment blood samples for Sanger sequencing were obtained. A selection of pre-treatment samples from individuals who experienced virological failure were evaluated for minority resistance-associated mutations down to 1 % prevalence and compared to samples from individuals who achieved viral suppression.

Results

At least one viral load result after 6 months or more of treatment was available for 65 out of 78 individuals followed for up to 33 months. Twenty (30.8 %) of the 65 individuals had detectable viremia, and eight (12.3 %) of them had virological failure (viral load > 1000 RNA copies/ml) after at least 6 months of HAART. Viral suppression, achieved by month 8 to month 13, was followed by low level viremia in 10.8 % of patients and virological failure in one patient after month 20. There was potentially reduced activity of Emtricitabine or Tenofovir in three out of the eight cases in which minority drug resistance-associated variants were investigated, but detectable viremia occurred in one of these cases, while the activity of Efavirenz was generally reduced in all eight cases.

Conclusions

Early viral suppression was followed by low level viremia in some patients, which may be an indication of failure to sustain viral suppression over time. The low level viremia may also represent early stages of resistance development. The mutation patterns detected in the minority variants showed potentially reduced drug sensitivity, which highlights their potential to dominate after treatment initiation.

Trial registration: Not applicable.

Electronic supplementary material: The online version of this article (doi:10.1186/s12985-016-0628-x) contains supplementary material, which is available to authorized users.

Background

Highly active antiretroviral therapy (HAART) has resulted in improved quality of life among people infected with human immunodeficiency virus (HIV), including reduced mortality and morbidity rates. However, virological failure caused by the emergence of drug resistant variants still occurs in some individuals on HAART [1]. Some individuals experience virological failure after HAART initiation because they may harbour pre-treatment drug resistant viral species [2][3][4]. HIV treatment guidelines from developed countries recommend drug resistance testing before initiation of HAART [5,6]. In developing countries, pre-treatment screening for drug resistant species is however not done as part of treatment optimization, due to limited resources.
Pre-treatment drug resistance data are usually obtained through conventional population sequencing methods, which do not detect low level viral species with a frequency of less than 20 % [7,8], and this may underestimate prevalence figures and the effect that these variants have on treatment outcomes. The effect of pre-treatment resistance-associated minority variants on treatment outcomes is the subject of many studies [7][8][9][10][11][12][13][14]. While some studies have reported a lack of strong association between drug resistance minority variants and treatment outcomes [9,10], there is also strong evidence suggesting otherwise. Patients with no detectable drug resistance mutations using the Sanger method, but with low level resistance mutations detected by sensitive methods, were shown to have a more than doubled risk of virological failure after initiating HAART [11]. Minority drug resistant species have been shown to reduce the effectiveness of first line non-nucleoside reverse transcriptase inhibitor (NNRTI) based regimens, which are the widely used regimens in developing countries [12]. Viral load (VL) testing in individuals on HAART is the method used to detect HIV replication and virological failure. The measure of a successful HAART program would be sustained viral suppression over time in individuals on the program. The South African HIV management guidelines recommend VL testing 6 months after HAART initiation, followed by VL testing at 12 months and every 12 months thereafter [15]. Baseline drug resistance testing is not recommended, but pre-treatment drug resistance prevalence figures of less than 5 % have been reported in South Africa [16]. We investigated long term virological outcomes and the pattern of pre-treatment minority drug resistance mutations in individuals initiating HAART at a local HIV clinic.

Participants

Participants were recruited from the Tshwane District Hospital HIV clinic in Pretoria central, South Africa, between July 2013 and May 2014 after written informed consent, and followed up for at least 12 months and up to 33 months. Eligible participants were HIV infected, treatment naïve adults with CD4 cell counts < 350 cells/μl and/or World Health Organization (WHO) clinical stage 3 or stage 4. Participants were initiated on an NNRTI based regimen consisting of Efavirenz (EFV), Emtricitabine (FTC) and Tenofovir (TDF).

Sample collection and RNA extraction

Samples were obtained before treatment initiation. Plasma was isolated from 10 to 15 ml of whole blood collected in EDTA tubes by centrifugation at 1600 g for 10 min and stored at −70 °C until required for RNA extraction. RNA was extracted from 210 μl of plasma using the QIAamp Viral RNA Mini kit (Qiagen) and eluted with 60 μl of elution buffer.

Viral load monitoring

Follow-up HIV-1 VLs for treatment monitoring were done after at least 6 months of treatment using the Abbott RealTime HIV-1 test (Abbott Laboratories, Illinois, USA) with a detection limit of 40 RNA copies/ml. Suppression was defined as a VL < 50 RNA copies/ml. Virological failure was defined as a VL ≥ 1000 RNA copies/ml after at least 6 months of HAART.

HIV genotypic resistance testing

A previously described method and primers were used for nucleic acid amplification and sequencing [17]. Superscript III reverse transcriptase enzyme (Life Technologies Corporation, California, USA) was used for cDNA synthesis, and Platinum Taq enzyme (Life Technologies Corporation, California, USA) was used for the polymerase chain reaction (PCR).
PCR primers targeted the protease (PR) gene and the first 300 codons of the reverse transcriptase (RT) gene (HXB2 nucleotides 2166-3440). The BigDye Terminator V3.1 Cycle Sequencing Ready Reaction Kit (Applied Biosystems, Foster City, CA, USA) was used for sequencing reactions with two forward and two reverse primers, and sequencing was done on the 3100 automatic capillary sequencer (Applied Biosystems, Foster City, CA, USA). Sequences were edited and assembled in Sequencher V4.5 (Gene Codes Corporation, USA). The Stanford HIVdb algorithm Version 7.0 (http://hivdb.stanford.edu/) was used for subtyping, resistance mutation interpretation and quality assessment.

HIV deep sequencing

For ultra-deep sequencing (UDS), amplicons obtained from pre-treatment samples from four individuals who experienced virological failure were sent to Inqaba Biotechnical Industries in Pretoria, South Africa, for sequencing down to 1 % prevalence. Additional samples from three individuals who virally suppressed and one sample from an individual who experienced low level viremia were also deep sequenced to provide comparison. Briefly, cDNA samples were fragmented using an ultrasonication approach, and the resulting fragments were purified, size selected, end repaired, and an Illumina-specific adapter sequence ligated. Following quantification, the samples were individually indexed, and a second size selection step was performed using AMPure XP beads. Libraries were quality controlled on a DNA chip (Agilent 2100 Bioanalyzer) and then sequenced on Illumina's MiSeq platform, using a MiSeq v3 kit according to the manufacturer's protocol. Fifty (50) Mb of data (2 × 300 bp paired-end reads) were produced for each sample. This was followed by HIV sequence analysis and quality checking, genotyping and drug resistance interpretation using the DeepChek 1.4 HIV analysis software. A minimum of 461 sequences was required for 99 % confidence at a 1 % threshold (a quick consistency check of this figure is sketched below), and a Q30 score measure was applied for basecalling.

Pre-treatment drug resistance determination

Pre-treatment drug resistance was defined as identification of a mutation that is known to cause reduced susceptibility to at least one prescribed drug.

Adherence monitoring

Adherence was monitored using a combination of measures, including checking for missed clinic visits, participant interviews and investigating reasons for which participants had been sent for counselling sessions.

Statistical analysis

Differences between mean CD4 cell counts were calculated in Excel using the two-sample t-test assuming equal variances.

Enrolled participants' characteristics and follow-up

A total of 78 participants were recruited into the study. At least one follow-up viral load result was available for 65 participants. Follow-up viral loads were not available for 13 participants (16.6 %) due to loss to follow-up. Participant characteristics are shown in Table 1.

Treatment initiation to first viral load monitoring

After treatment initiation, participants came for the first viral load monitoring at different time points (Fig. 1).

Virological outcomes

There were 20 participants (30.8 %) with detectable viremia at some point after at least 6 months of HAART (Fig. 2). Virological failure (> 1000 RNA copies/ml) was detected in eight participants (12.3 %) with the following identification codes (IDs): L013, L029, L031, L037, L039, L054, L064 and L069, while the rest of the participants had low level viral loads < 1000 RNA copies/ml.
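The quoted minimum of 461 reads is consistent with a simple binomial detection argument: if a variant is present at 1 % frequency and detection means observing at least one variant read, then the depth needed for a 99 % detection probability is about 459. The sketch below reproduces that arithmetic; the small difference from 461 presumably reflects the exact statistical criterion used by the analysis software, which is not specified here.

```python
import math

def min_depth(freq=0.01, confidence=0.99):
    """Smallest read depth n with P(at least one variant read) >= confidence,
    assuming reads are independent draws with variant frequency `freq`."""
    # Require 1 - (1 - freq)**n >= confidence,
    # i.e. n >= log(1 - confidence) / log(1 - freq).
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - freq))

print(min_depth())  # -> 459, close to the 461 quoted in the text
```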
Virological failure occurred at 6-12 months after HAART initiation in six individuals (accounting for 9.2 % overall and 75 % of the virological failure group), while virological failure occurred at 22 months in one individual (L039) who had initially achieved viral suppression. There were eight participants (12.3 %), L022, L038, L039, L049, L058, L061, L063 and L067, who had initially suppressed the virus to below detection limits by month 8 to month 13, followed by detectable viremia at month 20 or later in all cases. The detected VLs after initial suppression were < 1000 RNA copies/ml, except for L039 with a VL of 21 646 RNA copies/ml. Subsequent viral load results after the initial rebound were available for two of these individuals, L038 and L039, and were done 10 and 3 months later, respectively. The detectable viremia was shown to persist in both cases.

Immunological outcomes of participants with detectable viremia

There were no differences in mean baseline CD4 cell counts between the group that managed to achieve viral suppression and the groups with detectable viremia (Table 2). The same pattern was observed in CD4 cell counts done after 6-12 months of treatment. However, there was a significant difference between mean baseline CD4 cell counts and mean CD4 cell counts at 6-12 months after treatment initiation for the virally suppressed group and the low level viremia group. The virological failure group did not achieve a significant change in mean CD4 cell counts between these two time points. CD4 cell counts at the time of virological failure were < 500 cells/μl (range = 21-441 cells/μl) for six out of the eight participants, while participant L013, who was virally suppressed at some point, had a CD4 cell count > 500 cells/μl (Fig. 2). Of the eight participants who initially suppressed the virus but later had detectable viremia, three had CD4 cell counts < 350 cells/μl at the time of viral suppression, while four participants had CD4 cell counts > 350 cells/μl (range = 427-746 cells/μl) (Fig. 2).

Baseline minority resistance mutations in participants with detectable viremia

Detection of baseline minority resistance-associated mutations using ultra-deep sequencing to a 1 % threshold was done for eight participants. The coverage and the basis for excluding some mutations from analysis are shown in Fig. 3 and Additional file 1: Figure S1. Of these eight participants, four had virological failure (> 1000 RNA copies/ml), one had low level viremia < 1000 RNA copies/ml and three had suppressed viremia (< 50 RNA copies/ml) after at least 6 months of HAART. The mutation list and patterns are shown in Table 3.

Impact on treatment efficacy of initiated drugs

The NRTI drugs TDF or FTC had potentially reduced activity in three out of the eight cases in which minority drug resistance-associated variants were detected, but detectable viremia occurred in one of these cases. The activity of EFV was reduced in all eight cases, ranging from low level resistance (two cases) to high level resistance (one case). Taken together, the patterns of minority variants detected showed potentially reduced EFV treatment efficacy and were generally susceptible to the NRTI drugs TDF and FTC at 1 % prevalence.

Discussion

The goal of HIV antiretroviral therapy is to achieve sustained viral suppression in individuals on treatment.
Virological failure due to the development of drug resistance is still a challenge for many HIV treatment programmes in developing countries, due to a lack of resources for effective treatment monitoring and treatment optimization, among other challenges. Furthermore, the effect of pre-treatment resistance-associated minority variants on virological outcomes is still not well understood and is largely underestimated because of reliance on conventional population sequencing data, which do not include minority species. In this long term follow-up study, baseline samples showed a high prevalence of pre-treatment resistance-associated minority variants, although their clinical relevance requires further study. There was a high prevalence (30.8 %) of detectable viremia of any kind after at least 6 months of HAART, and in 40 % of these cases detectable viremia occurred after previous viral suppression to below detection limits. Virological failure occurred in 12.3 % of the participants, compared to an overall 15 % virological failure rate (range 0-43 %) calculated from 19 studies done in sub-Saharan African countries [18]. However, the duration of treatment at the time of failure detection was not mentioned for most of these studies. We detected virological failure at months 6 to 12 after initiation of HAART, except where virological failure was detected after previous viral suppression. Viral load testing at month 6 and month 12 after HAART initiation, which is recommended in South African public health institutions caring for HIV positive patients, seems adequate to detect early virological failure, as was shown in our participants. However, up to 23.1 % of participants, as indicated in Fig. 1, had their first post-HAART-initiation viral load after more than 12 months, which may affect the detection of early virological failure. The South African HIV management guidelines [15] use a cut-off value of VL > 1000 RNA copies/ml to define virological failure; however, HIV treatment guidelines from some developed countries are more stringent and define virological failure as a confirmed VL > 50 RNA copies/ml [5,6]. Using this criterion, our frequency of virological failure would increase by more than 50 %, from 12.3 to 30.8 %. We noted that viral suppression had been achieved by month 8 to month 13, and viral rebound occurred after month 20 in a subset of individuals, indicating failure to sustain viral suppression over time. Although the rebound resulted in low level viremia ranging from 83 RNA copies/ml to 579 RNA copies/ml in seven of the cases, detection of low level viremia after previous viral suppression should be monitored, as these detections may be early stages of resistance development. The association of pre-treatment minority drug resistance-associated HIV variants of 1 % frequency or greater with an increased risk of virological failure for individuals on NNRTI based regimens has been highlighted [12]. The mutation patterns detected in the minority variants showed potentially reduced sensitivity to EFV and, to a lesser extent, TDF and FTC, which highlights their potential to dominate after treatment initiation.

Table 2. Comparison of mean CD4 cell counts: intergroup and intragroup comparison of mean CD4 cell counts for the virally suppressed group, treatment failure group and low level viremia group at different time points.
However, the comparison of the pre-treatment pattern of minority variants detected between the virological failure group and the suppressed group did not show a specific pattern associated with virological failure at this stage, indicating that other factors might be involved. Factors such as mutation linkages and mutational loads also need to be investigated. A single TAM, D67E, was found to be highly prevalent, but TDF-based regimens have been shown to be effective in the presence of other single TAMs, such as M41L, at baseline [19]. Pre-treatment minority TAMs M41L and K70R have been reported in other studies [20], including multiple TAMs of up to six in another study [13]. Pre-treatment HIV drug resistance has been shown to increase regimen switches in developing countries [2], which results in increased treatment costs and exhaustion of treatment options. Recent studies have also demonstrated a strong association between low level viremia of 200-499 RNA copies/ml and virological failure [21], while some studies have associated virological failure with lower levels of less than 50 copies per ml [22] after a follow-up period of up to 2 years. This compares with our results, where viremia after initial suppression was detected at month 20 or later and shown to persist on subsequent testing. Low level viremia may also be viewed in light of the correlation between viral persistence and the viral reservoir size, which may require several years of HAART to reduce [23,24]. We noted that 43 % of individuals who experienced low level viral rebound after initially suppressing the virus generally had poor immunological outcomes, with a CD4 cell count < 350 cells/μl after more than 6 months of HAART. The same poor immunological outcome pattern was observed in individuals who experienced virological failure, where 75 % of them had a CD4 cell count < 350 cells/μl. However, CD4 cell count monitoring in isolation has been shown to be a poor marker for treatment failure [25,26], and in our case a minority of individuals with CD4 cell counts > 500 cells/μl experienced virological failure or viral rebound. The study has a number of limitations, in particular the absence of resistance data at the time of virological failure. Additional samples taken at the time of routine viral load testing were not always available, since the researchers were not part of the routine patient care personnel at the clinic where participants accessed care. The small number of samples that were deep sequenced also limited the information and conclusions that could be derived from these data. The data obtained however provide a basis for further investigation using larger sample sizes and comparison of baseline minority mutations with sequences dominating at the time of failure. We also relied on self-reported information to monitor participants' treatment adherence levels. Poor adherence to treatment may have been the cause of virological failure in some individuals, given that there was a high loss to follow-up of 20 % in our cohort.

Figure 3. Deep sequencing coverage, showing L001 (virally suppressed). Mutations were excluded from analysis for any of the following: noisy mutations filtering, coverage filtering, forward/reverse unbalanced frequency and forward/reverse unbalanced coverage. Additional file 1: Figure S1 shows the rest of the samples: C-E (virological failure, L031, L054 and L064 respectively) and F-H (detectable viremia, L009, and virally suppressed, L074 and L075 respectively).
Many studies have shown that poor treatment adherence is associated with virological failure in resource limited settings [27][28][29][30] and that an adherence of at least 95 % in individuals with minority variants significantly lowers the risk of virological failure [12].

Conclusions

While the development of low level viremia or virological failure after previously suppressing the virus, which we noticed in a subgroup of our participants, could be due to viral persistence emanating from established viral reservoirs, there is a need to establish whether such occurrences might be due to resistant minority variants beginning to take over and dominate following suppression of the treatment sensitive population. These data therefore highlight the need for a better understanding of the role played by pre-treatment HIV drug resistance-associated minority variants under various clinical settings.

Additional file

Additional file 1: Figure S1. Deep sequencing coverage. C-E show sequencing coverage for samples with virological failure (L031, L054 and L064 respectively), F shows coverage for a sample with detectable viremia (L009), and G and H show coverage for virally suppressed samples (L074 and L075 respectively). Mutations were excluded from analysis for any of the following: noisy mutations filtering, coverage filtering, forward/reverse unbalanced frequency and forward/reverse unbalanced coverage.

Authors' contributions

MLM participated in the design of the study, recruited participants, collected data from patients' records, carried out the sequencing assays, performed statistical analysis and drafted the manuscript. CTT participated in the design of the study and helped to draft the manuscript. KR coordinated and facilitated the ultra-deep sequencing assays. SHM participated in the design of the study and helped in participants' recruitment and collection of data from patients' records. GH participated in the analysis of ultra-deep sequencing results. SMB conceived of the study and participated in its design. All authors read and approved the final manuscript.
Learning to Teach: A Descriptive Study of Student Language Teachers in Taiwan

Studies have shown that many training programs are relatively ineffective in preparing prospective teachers for classroom teaching. Such findings suggest that teacher training programs might require improvement and that prospective teachers should be more thoroughly assessed during the training period. This study examined the learning process of a group of EFL teachers during their practicum at elementary schools. Our findings indicate that prior language learning experience and peer student teachers play a critical role in this period. Overall, the results suggested that student teachers would benefit from greater integration between field experiences, practicum, and lecture courses, which would enable the students to link teaching theory and practice more effectively.

Introduction

Numerous studies have been conducted with student teachers to explore various issues. Research topics have included program design, changes in teacher perspectives and attitudes during the training period, and the implementation of innovative techniques or technology. Results from these studies provide fresh ideas and hope for improved teacher education. However, several studies have indicated that training programs may be relatively ineffective in preparing prospective teachers for classroom teaching (Hwang, 1996; Leu, 1997). Such findings suggest that current teacher training programs might be deficient in some areas, and that better assessment of prospective teachers is needed during the training period. The relationship between course work and personal changes during the training period deserves greater attention.

The process of learning to teach requires rigorous effort from both the student teachers and the trainers. This process constitutes a conceptual transformation in which student teachers reconstruct idealized or inappropriate ideas of learning and teaching. Various concerns and stages have been identified as indicators of conceptual transformation. According to Fuller (1969), teachers develop typical concerns during the process of professional development, such as concerns with self, teaching tasks, and impact. Novice teachers are particularly self-conscious and concerned with evaluations by administrators, and with gaining acceptance from their students and colleagues. As their length of teaching experience increases, their attention shifts to the effectiveness of their delivery of content and its effect on student learning. Occasionally, teachers become concerned with the pedagogical beliefs and values that they bring to their classrooms, based on their personal preferences and prior experience as students.

This study was based on Kagan's (1992) model of teacher development. The study approach was descriptive and examined the experiences of student language teachers receiving training in an EFL context. The research questions were as follows: How do student teachers perceive language teaching and learning? What changes and difficulties are encountered over the practicum period? To what extent do the classroom experience and training program prompt student teachers to change their beliefs during the learning process?
Changes in the Beliefs of Student Teachers Concepts of teaching are belief-driven (Chan & Elliott, 2004), and teachers' beliefs determine their performance and behavior in the classroom. Teachers make decisions and judgments according to the beliefs they bring to the classroom setting. Preconceptions of teaching and learning play a key role in student teachers' comprehension and learning during the training process; such preconceptions tend to resist change strongly. Thus, one of the key goals of teacher training programs is to correct misconceptions. Student teachers attribute their pre-training beliefs to cultural sources, years of experience as language students, models they have previously encountered, or their self-image as a teacher (Fischl & Sagy, 2005; Korthagen & Russell, 1999; Lin, 2008; Lin & Lucey, 2010). Preconceptions act as a filter through which the theories and information presented during training must pass. In some cases, the preconceptions remain intact and the effect of training is questionable.

Methods for promoting constructive change have been discussed in several studies. Key elements leading to changes in preconceptions have been identified, including events, contexts, and practices encountered during field experiences (such as the practicum). Real classroom experiences provide an environment in which students can test and re-examine both learned theories and their own pre-existing beliefs. The teacher's personality, cultural background, and strength of beliefs influence how extensive the changes are likely to be (Eilam, 2002; Hennissen et al., 2011). For example, low-risk contexts that provide support and suggestions from supervisors and other student teachers increase the likelihood of change (Hollingsworth, 1989; Tang, 2003). When facing challenges, student teachers are confronted by a mismatch between their beliefs and reality. This conflict generates a tension that places the student in an "unstable equilibrium" (Melnick & Meister, 2008). The need to move from "disequilibration" to "stable equilibrium" provides the catalyst for belief change. Busch (2010) suggests that teacher educators should consider the effect of the belief systems of preservice teachers early in the training period. Furthermore, the transfer of skills from a training environment to classroom practice requires the integration of theory and practice (Brouwer, 1989). The student's involvement in a school context and the chance to participate in professional relationships beyond the classroom exert considerable influence on changing beliefs. Unfortunately, such opportunities are limited in most training programs, and social contacts are restricted to the college classroom. This narrowness of the contacts provided in training programs requires revision if the final goal of changing student teachers' beliefs is to be realized.
Effect of Training and Teacher Education Programs Teacher training programs generally aim at improving teachers' use of their pedagogical knowledge and familiarity with effective teaching methods (Begle, 1979; Goldhaber & Brewer, 2000). Training courses and field experience comprise the two key elements of most training programs. Theories and concepts discussed during training represent "knowledge for teachers", which refers to the knowledge and skills needed for teacher certification. By contrast, field experiences such as the practicum provide professional contexts for teachers to develop their self-image as teachers (Xu & Connelly, 2009). Prospective teachers require information on specific subject content and the learning context in addition to general teaching skills (Bransford, Darling-Hammond & LePage, 2005; Wilson et al., 2001). Mentoring during the practicum helps students to develop teaching competence, and mentor dialogue during the practicum provides emotional support and guidance to student teachers as they struggle with the disjunction between theory and reality (Lindgren, 2005; Marable & Raimondi, 2007). Regardless of the curriculum design and pedagogical content, all training programs aim at changing the beliefs or behaviors of new teachers and transplanting theories into the classroom.

Studies on the effectiveness of training programs have shown that different programs provide different degrees of benefit for preservice teachers. Certain training programs achieve limited results in altering mismatched preconceptions (Lo, 2005; Peacock, 2001). An extensive investigation of the change in beliefs among preservice teachers showed that students tended to hold the same personal beliefs and self-images as teachers before and after a training program (Kagan, 1992). The knowledge, practice, and supervision gained during training were apparently insufficient to change these beliefs. To an extent, information acquired during the training process may actually confirm the students' preexisting beliefs and determine how they learn new knowledge (Richardson, 2003). When teaching in classroom settings, student teachers tended to use the methods and techniques they grew up with rather than those learned during their training program. To make matters worse, according to Korthagen and Russell (1999, p. 3), "Many notions and educational conceptions, developed during teacher education, were 'washed out' during field experiences."
Studies on teacher training programs have shown the value of integrating specific theories into learning contexts where students can practice applying the theory. However, theories should not be introduced into contexts that are too tightly controlled or lack sufficient pupils, as the results are less effective under those conditions (Watzke, 2007). The opportunity to learn from real-life experience without adequate opportunity for reflection also weakens the effect. In such cases, the transition from the learning process to the pedagogical routine remains unclear for the student (Hollingsworth, 1989). Many training programs provide insufficient pedagogical knowledge for prospective teachers (Kagan, 1992; Zeichner & Gore, 1990). The tendency of training programs to emphasize theoretical and philosophical issues creates a gap between theory and real-life knowledge and experience, a discrepancy that frustrates prospective teachers (Calderhead & Shorrock, 1997). Studies such as the current one are required to investigate the process by which people learn to teach and the elements involved in this process, that is, the courses or practices in the training program that assist along the way.

The Study Descriptive studies describe existing phenomena and discover potential research areas and connections that previous related studies have overlooked. Such studies focus on data collection during a specific time and at a specific place (Vogt, 1999). The current descriptive study adopts a qualitative paradigm and examines the learning process of a group of EFL teachers conducting their practicum at an elementary school.

Teaching Practicum Data from classroom observations were collected from the records of student teachers during their practicum. Classroom experiences constitute a major part of the training program we investigated. Students in the program were required to teach English to elementary school pupils once a week in the third year of the program. The teaching practicum lasted a year and involved cooperation between the training institute and a public elementary school in the same neighborhood. This cooperation allowed student teachers to gain (a) experience in teaching English to elementary school pupils, and (b) classroom teaching experience. Each week, student teachers were required to plan and teach one English lesson beyond the content of the regular English class. They decided on the content themselves and designed their own material to fit the theme of the regular English class. The practicum was generally conducted on a Tuesday afternoon and lasted 40 minutes. Student teachers visited the elementary school to teach English during that time and returned to the university afterward. Mentor sessions with the university lecturer were held weekly. Moreover, group mentorship with other student teachers occurred every second week; these meetings provided the opportunity to discuss events and difficulties encountered in classroom teaching. At the end of the semester, English teachers at various elementary schools reviewed the recordings of the student teachers and offered their suggestions during a group meeting. Thus, student teachers were exposed to several sources of suggestions and feedback, and to additional information on techniques and solutions.
During the practicum, the weekly routine for each student teacher included (a) developing lesson plans prior to the classes, (b) reflecting on the lesson by keeping a teaching journal, and (c) discussing the lesson with the instructor afterward. Student teachers were taught how to develop lesson plans at the beginning of the practicum. To enhance consistency among all student teachers, a single model lesson plan was provided as a sample to emulate. The model lesson plan consisted of several parts: lesson objectives, level of students, activities, props used, and lesson evaluation. To prompt reflection, the students were given a list of questions to answer about their own teaching, such as: "What did you learn from that event in class?" and "If you had a second opportunity, what would you do differently?" The university lecturer read and commented on the students' lesson plans and reflections each week before conducting the mentoring session.

Participants For qualitative fieldwork, researchers draw a purposive sample to present the various phenomena examined (Stake, 2000). Participants in this study represent student teachers of different genders and teaching experiences. Two male and two female student teachers were chosen for this study (Table 1). All were non-native English speakers with more than ten years of English learning experience. Their English proficiency varied, but was sufficient to teach elementary students. Three of them were in their early twenties and one male teacher was in his early thirties. Each of them had taught English in private settings, but their teaching experience was considerably limited. Each participant had personal involvement with children, and their experience as English teachers in language classrooms varied. In terms of pedagogical knowledge, all of them had finished the required courses related to language teaching and learning, and they continued taking the two remaining core courses on language teaching methods and approaches during the time of the study.

Data Collection and Analysis This study explored the integration between learned knowledge and actual classroom teaching, and changes in the students' beliefs about teaching. Data from interviews, reflections, lesson plans, and classroom observations were collected and analyzed. Each data source was triangulated by two researchers to identify (a) changes in students' concerns and beliefs, (b) possible causes for the identified changes, and (c) difficulties encountered during the practicum. The data analysis followed the procedure of grounded theory; we used the techniques of open coding and axial coding to identify and make connections between themes and categories that emerged from the data (a schematic sketch of this coding procedure follows below). Themes derived from each data source were discussed by the two researchers until they reached consensus.

Semi-structured Interviews The researchers interviewed each participant before, during, and after the practicum. Each interview lasted approximately one hour and followed a semi-structured schedule. We interviewed four students and transcribed the interview data before conducting the analysis. Interview questions were arranged in four categories: (a) pedagogical beliefs, (b) teaching content/activities, (c) reflection on classroom teaching, and (d) life experience.
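As a purely schematic illustration of the open and axial coding procedure described above, the sketch below groups invented interview excerpts under open codes and then links those codes to higher-level categories; every excerpt, code, and category is hypothetical rather than taken from the study's actual coding scheme.

```python
# Schematic sketch of open coding followed by axial coding.
# All excerpts, codes, and categories are invented for illustration;
# they are not the study's actual data or coding scheme.

# Open coding: label raw excerpts with low-level codes.
open_codes = {
    "I was afraid the pupils would not listen to me": ["fear_of_losing_control"],
    "Games made the class come alive":                ["activity_choice", "pupil_engagement"],
    "My mentor's comments contradicted the textbook": ["theory_practice_gap"],
}

# Axial coding: connect open codes into higher-level categories.
categories = {
    "classroom_management": ["fear_of_losing_control"],
    "belief_change":        ["activity_choice", "pupil_engagement", "theory_practice_gap"],
}

# A simple consistency check two coders might run before discussing
# discrepancies: every open code must belong to exactly one category.
assigned = [c for codes in categories.values() for c in codes]
all_codes = {c for codes in open_codes.values() for c in codes}
assert sorted(assigned) == sorted(all_codes), "unassigned or duplicated codes"
print("coding scheme is consistent")
```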
Teaching Reflections and Lesson Plans The qualitative data analysis emphasized the types of activities included in students' lesson plans and special events that occurred during teaching. Approximately 40 teaching reflections and lesson plans from each participant were collected and examined. In the teaching reflections, student teachers were guided to reflect upon unexpected events that occurred in the classroom and the manner in which they had handled those incidents. They discussed how they would plan and organize an activity if they had a second chance to teach the same content. In addition, the lesson plans showed whether the students' preferences for activities and teaching props had changed. The teaching reflections showed how well students developed critical analytic skills regarding their own teaching and their ability to handle unexpected events.

Classroom Observation Data gained from the interviews and teaching reflections were triangulated with direct observation of real classroom teaching. Every week, we video recorded one class from each grade. Thus, an average of two to three lessons per student teacher were recorded each semester. Six lessons for each participant were recorded and analyzed in various respects. We used a classroom observation sheet to note aspects of in-class teaching, such as the use of English or the type of learning activities presented. The information documented on the observation sheet helped to portray the students' implementation of pedagogical beliefs and their modifications of lesson plans to adapt to classroom reality. Certain sections of the classroom observation sheet required descriptions of events, and other sections required the observer to log the occurrence of specific situations. Two reviewers watched the teaching recordings together, and each filled out a separate observation sheet. After this review, they discussed discrepancies to reach consensus.

Pedagogical Beliefs on the Qualification of English Teachers Questions regarding the elements that promote learning English and the qualifications of an English teacher were asked in interviews throughout the practicum period. We wanted to assess whether these aspects of the student teachers' thinking were altered during this period. No change was observed in their concepts of the essential elements for learning English, but their beliefs about the qualifications of teachers did change during the practicum.
Participants all stated that the opportunity to communicate in English in various contexts was important for learning the language. Pronunciation practice and activities or games where students could speak English were also mentioned by the majority of participants. However, of all the participants, only Vivian created a classroom atmosphere that directly reflected this belief. With her strong faith in the "immersion" approach to language learning, Vivian tried to create an environment in which her students could listen to and speak English. She wrote, "I want to make sure that I do not speak any Chinese in the class because students need to hear English but not Chinese" (10-17, teaching reflections). As a result, her class was unique because of the high proportion of English used in the instruction. All other participants conducted their classroom instruction mainly in Chinese, and the class activities comprised mechanical exercises using fixed patterns and words. Such classroom instruction is the traditional way of learning English in Taiwan. The discrepancy between Vivian's approach and that of the other students shows the potentially strong influence on teaching behavior of students' past experiences as language learners. The majority of participants taught in a manner that reflected their own memories of having learnt English at a young age.

When asked in the initial stage of the practicum about the training of English teachers, participants emphasized that training programs should provide knowledge (i.e., English proficiency and content) and cultural understanding (i.e., of countries in which English is spoken) as well as pedagogical knowledge (e.g., language teaching methods and child psychology). Several participants mentioned that personal qualities should be fostered by the training, including affective traits such as compassion toward students or the courage to stand in front of the classroom, and skills in classroom management. Jay stated that English proficiency, particularly oral ability, and teaching ability are lifelong pursuits that require consistent practice and new knowledge. He said, "Teaching in a real classroom can help accumulate teaching experience that is the most important of all. The same thing applies to English proficiency. Both of these two elements really need time and patience and cannot be achieved in the short term" (first interview).

Vivian considered the willingness to place oneself in a pupil's shoes and the confidence to stand in front of the class to be two crucial characteristics of a good English teacher. She also emphasized the necessity of theory in training English teachers. In the interview, she said, "The first and most important is theory [on language teaching/learning] because you understand the development and changes over history in language teaching through the study of theories. This is similar to standing on the shoulder of a giant and looking over it" (first interview).
Participants' opinions on English teacher training and qualifications reflected their views on professional knowledge and the manner in which a teacher presents themselves to a class. All participants wished to be seen as a patient and caring teacher. Thus, the initial emphasis was on the teacher. As the practicum continued, the focus shifted to interaction with the students and skills in dealing with students' needs. Kathy was evidently quite affected by her classroom teaching experience. Toward the end of the practicum, her ability to create appropriate props and handle unexpected events in the classroom became more important than the knowledge she had considered crucial in the previous year. A similar change was mentioned by Joseph, who recognized that the ability to understand pupils' feelings was more important than English ability.

These changes in beliefs about teaching English suggested that the practicum enabled student teachers to realize that teaching English requires both professional and practical knowledge. The experience of interacting with pupils in a real classroom setting allowed novice teachers to appreciate that comprehending their students' needs and struggles was as important as English language proficiency.

Changes and Difficulties during the Practicum Previous studies have shown that an encounter with conflict between previous beliefs and real life provides the catalyst for teacher development. The most commonly affected area was classroom activities. Each participant brought to the classroom an understanding of, and beliefs about, how young learners learn, drawn from their own memories. The student teachers' past learning experiences also shaped their design of classroom activities and their interaction with pupils. Idealized and inappropriate beliefs have to be confronted when teaching in a real classroom. Such changes typically result in a greater understanding of pupils and a more mature comprehension of one's position as a teacher. Change typically occurs when the student teacher notices something "different" in their classroom. These differences contradict the teacher's beliefs and catch their attention. For example, Jay's teaching reflections and classroom observations during the first half of the practicum showed relatively limited signs of engagement with pupils in classroom activities. However, in his third interview, Jay stated that: My teaching in the first semester was pretty much a repetition of the same pattern: review, new lessons, and then review again. I felt the boredom from the eyes of my students. But, then in the second semester, students' excitement caught my attention after playing games… I continued to include different games into my lesson plan every week... lessons have to be creative to motivate students.

After discussions with other student teachers, Jay realized that he had been ignoring the learning style of his students, who were fun seekers. He had believed that classroom activities should include as many practices and drills as possible. The classroom observations highlighted the change in his attitude, as Jay was observed to smile more often in the second semester and engaged in more games. He began to understand the different needs of his students and the typical learning styles of each age group.
Similar findings were evident in Vivian's case as she began to choose more static classroom activities. In the first interview, when talking about her design of classroom activities, she said: "… [I] tried to modify the activities learned in the school club… play several games from my childhood." Initially, pupils in her class constantly moved around the classroom or outside in the playground, and Vivian engaged them with numerous songs and games. Gradually, her use of activities requiring physical action decreased. Toward the end of the second semester, she realized that activities that did not require students to move physically in the classroom could be equally effective. She wrote: "… this time I realize that dancing or moving around is not the only way to teach young children; writing and drawing are also good ideas! I was also able to better control the whole class when everyone was sitting in their seat." (teaching log, March 18).

Changes in classroom activities and interaction indicated a growing understanding of pupils' feelings and needs. For Vivian and Kathy, this understanding also led to the development of critical thinking. Both these student teachers were able to look beyond their responsibilities as practicum teachers and reflect upon their contribution to their pupils and the education system overall. Toward the end of the first semester, Vivian wrote: I usually think about one question: What can I teach my students? I only have one hour each week to interact with them and it takes time to build a relationship. With such limited time, how can I make my teaching as influential as possible? I believe, instead of teaching them English, it might be more helpful for them if I teach them the correct attitude toward learning and build their interest in English and the related culture (teaching log, December 18).

Vivian began to expand her responsibility as a teacher beyond the focus on transmitting linguistic knowledge to her students. She wanted to make a difference in the pupils' learning process. Kathy similarly witnessed the pupils' struggles and noticed an inequality. In the final interview, she spoke of the wide range of English proficiency among her students. She said, "… I feel for my students. At such a young age, some of them need to attend cram school after getting out of elementary school. Others are so eager to learn English, but their family cannot afford after-school language schools. I think something needs to be done to change it" (third interview).

Student teachers also showed a change in their approach to challenges that they encountered in the classroom. Participants frequently faced the dilemma of how to control students' behavior effectively. Relatively experienced teachers, such as Vivian and Kathy, seldom addressed this concern in their teaching reflections or interviews. Students in their classes were indeed more effectively controlled and disciplined. However, the two male student teachers, especially Joseph, encountered greater challenges in dealing with pupil misbehavior. Joseph's classroom observations showed that his rules on punishment and reward were unclear.
Students often walked around the classroom without appropriate control, and Joseph's classes tended to be chaotic. He complained of the difficulties in handling student misbehavior several times in his interviews. For example, he stated that "I am most troubled by classroom order. The entire class is in chaos and students like to talk to each other when I am teaching. I ask them to be quiet, but they go back to the 'talking mode' in seconds! I am really frustrated to see it" (second interview).

Another common problem emerged in our study. This struggle did not result in changes on the part of student teachers, but led instead to a sense of hopelessness. The problem can be referred to as the "double peak" phenomenon frequently encountered in Taiwan, which describes the significant discrepancy in pupils' English proficiency within a single class. Many Taiwanese parents have been deeply influenced by, and follow the idea of, the Critical Period Hypothesis. According to this theory, the earlier a child is exposed to English, the better. Consequently, numerous Taiwanese children begin their English learning at a young age. In response to the demand, a large number of kindergartens and private language institutes have been created, presenting a huge volume of learning material. However, many scholars and language educators oppose this practice. Nonetheless, learning English at a young age has become the norm in Taiwan and the growth of this field has been sensational. Unfortunately, behind this sensation is the untold story of stressed children whose lives are out of their own control. At the same time, teachers face the difficulty of choosing material that is appropriate for a specific student level.

Difficulties during the Practicum The ability to manage students in the classroom was the most painful and difficult component for every student teacher during the practicum period. Past experience with young children determines how much a student teacher will be troubled by students' behavior in the classroom. Problems in classroom management come from an insufficient understanding of young children and of the role of the teacher. Student teachers do not recognize their significant role as an authority figure, or young learners' ways of learning, when first stepping into the classroom. During the process of classroom teaching, pupils act as "critical reality definers" (Tang, 2003), and comprehension of the nature, needs, and difficulties of students' learning validates and modifies student teachers' pedagogical competence and self-image as teachers. Teachers without a clear self-image as teachers in the initial stage of learning fail to integrate classroom management and instruction and to establish a routine, which further hinders the transformation from self-conscious to more automatic and unconscious teaching (Hollingsworth, 1989; Kagan, 1992). Consequently, methods for helping students in teacher training programs become aware of their roles and authority as teachers are important in course design and teaching practice. Only when teachers are fully aware of their unique power and responsibility in the classroom can the training they receive in the teacher program make an impact on their journey of teacher development.
Beliefs and Concerns in Teaching The development of teacher knowledge in this group of student teachers shows a shift in attention and concerns from self-image and teaching techniques to the struggles and needs of students. They gradually reflected upon the model they set up in front of students and the impact they could make during the practicum period. What was observed in the current study echoes Kagan's (1992) model of teacher development, which emphasizes changes in metacognitive skill and awareness of the flexibility of teacher knowledge and skills. Admittedly, the student teachers in the current study did not develop problem-solving skills that generalize across different contexts. Nevertheless, the integration of theory courses and real classroom teaching did bring some changes in student teachers' beliefs.

However, the force that contributed to such changes was the interaction with the teaching contexts, such as fellow student teachers, or information found online. The information from the training program, including the mentoring sessions, did not contribute to these changes. For example, in recent decades, learning English as a tool to accomplish tasks in daily routines has become the core concept in EFL contexts. The importance of communicative competence is repeatedly mentioned and reinforced in theory courses. However, observations of classroom teaching did not show obvious traces of implementation of this concept. Vivian was the only teacher to give students chances to use language freely. She also tried very hard to use English when giving commands or instructions. The other participants were more comfortable with a teacher-centered mode in which student responses were limited to the repetition of either the teacher's demonstrations or expressions written in the textbook.

Such a discovery indicates that this group of student teachers understood this concept only at the surface level, and did not know how to apply it in classroom teaching. As suggested in previous studies (Kagan, 1992; Richardson, 2003), student teachers tend to adopt their own personal experiences, rather than new concepts learned in the training program, as the point of reference when designing lessons. In other words, they select partial learned theories or concepts to confirm, not to change, their beliefs and teaching practice (Korthagen & Russell, 1999). As Hennissen et al. (2011) claim, "teachers' knowledge and skills are event-structured, context-based, and practice-oriented in nature" (p. 1051). The specific environment in which student teachers work serves as the context that influences the development of pedagogical beliefs. The results of the current study indicate that changes in beliefs relate to classroom practice and usually occur when student teachers encounter unexpected events that create a conflict between preconception and reality. The classroom discussions and lectures in the training program did not challenge the student teachers enough to make them change. But classroom teaching forced student teachers to re-examine and reconstruct their beliefs and to make changes accordingly. This real-world experience even enhanced the development of critical reflective teaching in two participants.
Student teachers in the current study were coincidentally all concerned with whether their corrections or comments would hurt students' feelings. They tended to pay more attention to building a loving image of themselves for their students than to the effect their teaching had on student learning. As suggested in the Concerns Theory (Fuller, 1969), concern for self and concern for teaching tasks are the first two stages in this developmental process. The concerns found in the current study indeed echoed this theory. However, Fuller suggested that these stages follow a chronological and hierarchical order, and that teachers do not enter the stage of effect, reflecting on the challenge and stimulation they provide to students' learning, until they have accumulated a certain amount of teaching experience. The student teachers in the current study did not necessarily follow what Fuller suggested. Concerns from both Vivian and Kathy reached broader school contexts and related to the socio-economic status of students. Vivian stated that the goal of English teaching should go beyond simply English language knowledge to achieving a more mature and active attitude toward learning in general. Kathy even discussed the inequality she observed among her students and her concerns about the consequences of such a phenomenon.

Influence from the Training Program Teacher training programs aim at transforming student teachers' inappropriate or immature beliefs into appropriate and mature capabilities. The effectiveness of a teacher-training program is shown in its influence on the change in beliefs and teaching practices of prospective teachers, particularly at the initial stage of teaching practice. The influence of the training program is noticeable when student teachers feel confused in adopting theories and concepts discussed in the lecture room or mentoring sessions. Thus, awareness of the difference between theory and reality can be a sign of an effective training program.

However, this study showed that student teachers were also influenced by the Internet, past learning experiences, discussions with other student teachers, and their own self-monitoring. Surfing the Internet and consulting homeroom teachers and peer student groups constituted the two main sources of suggestions or solutions in our study group. Moreover, every student teacher emphasized the importance of motivation and communication, two concepts commonly discussed in the mentor and course sessions. Nonetheless, the influence of peers and the Internet, as well as the novice teacher's own previous experience as a learner, overshadowed these formal influences.

Knowledge learned and discussed in the training program did not exert a noticeable effect on classroom teaching for any of the student teachers in this study. Two of the student teachers did not even mention the training program during their practicum. The only trace of an effect from the formal courses appeared in the activities at the start of the practicum. In the initial stage of the practicum, student teachers frequently employed the method of "total physical response". This technique requires minimal experience to master and is suited to the learning styles of young children. The students in our study were indeed teaching young children. This teaching method is commonly used by English teachers in Taiwan.
Previous studies have shown that mentoring influenced student teachers (Lindgren, 2005; Marable & Raimondi, 2007), but our findings did not confirm this. Instead, we found evidence of resistance arising from deep-seated or long-lasting beliefs (Korthagen & Russell, 1999). These results cast doubt on the effectiveness or necessity of the theory lectures and discussions that have traditionally formed a large portion of teacher training. However, previous studies have also shown that acceptance of new information can lead to a change in a student's belief system. This type of change has been associated with several elements of the training program, including challenges encountered (Pajares, 1992), the contrast between various bodies of culturally-related knowledge, the teacher's personality (Eilam, 2002), and situations that directly test one's former beliefs (Fischl & Sagy, 2005). The integration of theory and practice discussed by Brouwer (1989), in which student teachers encounter reality and challenges, was indeed evident in our study. Nevertheless, mere awareness of conceptual differences does not automatically lead to the acceptance of new information and theories. Student teachers' resistance to, or ignorance of, theories discussed in the training program remained strong throughout the practicum. Our findings showed that student teachers tended to adopt the teaching methods they had been exposed to as children or youngsters rather than the methods learned in their formal training.

The main explanation for such resistance lies in the differences between the local teaching contexts and Western-generated theories. Teaching methods and techniques taught in the training program are popular and well established in learning contexts that differ culturally and pedagogically from those of the participants. Participants tended to question the feasibility of teaching methods and techniques learned in the course work. The teaching reflections noted by Jay and Vivian revealed the students' concerns and caution in adopting Western methods and techniques such as the communicative approach or group work. Consequently, these teaching methods were not commonly seen in the classroom observations. It is possible that our study participants found that Western theories lacked applicability to their own culture and values, and that this deficiency would be detrimental to their pupils. Such theories are incompatible with students' preferred styles of learning and their social, religious, or community values (Liyanage & Bartlett, 2008). The introduction of textbook knowledge of language teaching methods should be prefaced by a discussion of local knowledge and customs (Xu & Connelly, 2009). Western teaching methods are not always applicable to non-Western contexts, and the validity of each method in a local setting should be assessed (Chiang, 2008; Eilam, 2002). Learning to teach is context-specific and culturally bound. Teacher training programs must consider the educational and philosophical values of the specific group of pupils the prospective teachers will face. Thus, in addition to propounding theories developed in the West, the training of Taiwanese teachers requires attention to issues and techniques that have proven to be suitable for local students.

Conclusion The current study unveils the process of becoming an English teacher and the changes that occur during the initial stage.
Results from this study raise some concerns over student teachers' acceptance of Western-made theories in classroom teaching and point to factors that lead to a smoother journey in this stage. The data indicate the limited effect of the training program and the powerful role that past learning experiences and peer student teachers play in this stage. One crucial factor that determines the success of this stage is whether the student teacher builds a clear image as a teacher in front of pupils. A failure to do so creates potential obstacles in classroom management.

Results from this study suggest the need to integrate group mentor sessions, besides the practicum, as a platform where student teachers can exchange information, solutions, and, most important of all, comfort from peers who are in the same boat. Discussions with peer student teachers also showed some effect on changes in teaching beliefs. Moreover, methods for establishing an appropriate self-image as a teacher should be a main concern in teacher education, since it is a crucial factor in classroom management for student teachers. In conclusion, for both novice and experienced teachers, teaching is an endless learning process that consists of self-searching and mutual support. Future studies should address the process of apprenticeship between novice and experienced teachers, particularly its effect in handling teacher stress and burnout.
Survival of Patient-Specific Unicondylar Knee Replacement Unicompartmental knee arthroplasty (UKA) in isolated medial or lateral osteoarthritis leads to good clinical results. However, revision rates are higher in comparison to total knee arthroplasty (TKA). One reason is suboptimal fitting of conventional off-the-shelf prostheses, and major overhang of the tibial component over the bone has been reported in up to 20% of cases. In this retrospective study, a total of 537 patient-specific UKAs (507 medial prostheses and 30 lateral prostheses) that had been implanted in 3 centers over a period of 10 years were analyzed for survival, with a minimal follow-up of 1 year (range 12 to 129 months). Furthermore, fitting of the UKAs was analyzed on postoperative X-rays, and tibial overhang was quantified. A total of 512 prostheses were available for follow-up (95.3%). The overall survival rate (medial and lateral) of the prostheses after 5 years was 96%. The 30 lateral UKAs showed a survival rate of 100% at 5 years. The tibial overhang of the prosthesis was smaller than 1 mm in 99% of cases. In comparison to the reported results in the literature, our data suggest that the patient-specific implant design used in this study is associated with an excellent midterm survival rate, particularly in the lateral knee compartment, and confirms excellent fitting.

Introduction Unicompartmental knee arthroplasty (UKA) in isolated medial or lateral osteoarthritis leads to good clinical results. In comparison to total knee arthroplasty (TKA), surgery can be performed through a shorter approach, leading to quicker rehabilitation, and the kinematics after implantation of UKA are similar to those of the physiological knee [1][2][3][4]. Good clinical results were confirmed in two recent randomized controlled trials [5,6]. Registry data confirmed these results and showed that the Oxford knee score is higher in patients with UKA compared to TKA [7]. On the other hand, the revision rate in UKA is nearly twice as high as for TKA. In the German arthroplasty registry, for example, the revision rate for UKA was 8% after 7 years compared to 4% in TKA [8]. There are many reasons for revision of UKA. The optimal positioning of UKA has been studied extensively [9][10][11]. In this respect, free-hand implantation of UKA results in up to 41% of implants lying outside the optimal range [12]. Other reasons for revision are complications associated with tibial overhang or undersizing. Tibial undersizing may increase the risk of implant migration into the softer cancellous bone with consecutive loosening. On the other hand, a recent analysis showed that a tibial overhang over the bone of more than 3 mm can lead to a revision rate of up to 20% [13]. Medial overhang of the prosthesis is sometimes difficult to avoid. The placement of the medial unicondylar prosthesis is limited in the lateral direction, as harm to the anterior cruciate ligament has to be avoided. Choosing a smaller implant can lead to undercoverage in the antero-posterior direction. Patient-specific implants (PSI) are produced individually for every patient based on a computed tomography scan of the leg. They have shown better coverage of the tibia in CAD studies, with 0% overhang, in comparison to off-the-shelf implants, which show overhang in up to 70% of cases [14]. Furthermore, it has been shown that implantation of the PSI in combination with patient-specific instruments leads to reproducible and precise implantation [15].
Thus, PSI should help to avoid suboptimal implantations leading to failures of UKA [12,16]. Lateral UKA can lead to good clinical results in isolated lateral osteoarthritis of the knee [17]. The procedure is performed less frequently, and the revision rate is reported to be much higher than in medial UKA, at 12% after 5 years [18]. One reason for this is that lateral UKA is technically more challenging than medial UKA due to the lower number of indications, as well as the different functional anatomy of the lateral compartment. Another reason is the fact that most of the available UKA systems offer no specific lateral implants. Instead, the medial tibial component of one side (left/right) is used as a lateral component on the contralateral side. Knowing that the biomechanics of the lateral compartment differs from that of the medial, this is probably one reason for the higher revision rate of lateral UKAs [17]. With patient-specific implants, a better fit is expected for lateral prostheses as well. The use of a patient-specific unicompartmental knee prosthesis should result in more precise implantation and better coverage. These advantages should lead to a lower revision rate. However, clinical data showing this are sparse. The aim of this retrospective study was to analyze the survival of more than 500 PSI UKAs and to measure the overhang of the tibial component.

Materials and Methods A total of 537 consecutive knees in 492 patients that received isolated medial or lateral patient-specific UKA (iUni, ConforMIS, Billerica, MA, USA) were included in the study. Surgeries were performed between 09/2010 and 03/2020 in three centers (ECOM Munich, Germany, Knee Centre Würzburg, Germany, and Klinikum rechts der Isar der Technischen Universität München, Germany) by three different surgeons (MK, HG, PW). There were 507 medial prostheses (462 patients) and 30 lateral prostheses (30 patients). Inclusion criteria were patients with anteromedial or lateral osteoarthritis of the knee or avascular osteonecrosis of the medial femoral condyle (AVON, Morbus Ahlbäck) as well as knee pain exclusively localized to the affected compartment. Exclusion criteria were the following:

Prosthesis and Surgical Technique In all cases, the Conformis iUni knee was implanted. Every patient had a preoperative computed tomography scan of the knee and of the hip and ankle. Planning was performed individually for every patient according to the individual anatomy. The implant was delivered to the surgeon in combination with an iView surgical plan (Figure 1).

Figure 1. Preoperative planning of the prosthesis and surgical guide (iView) delivered for every patient. It shows the osteophytes that must be removed to position the patient-specific instruments (orange-colored). The position of the patient-specific instruments is also shown. In particular, the position of the femoral jig in accordance with the femoral component is very helpful for the surgeon. Furthermore, it shows the final position of the prosthesis (see further details in the text).
In brief, the prosthesis was implanted for the medial knee through a limited medial parapatellar approach, and for the lateral compartment through a lateral parapatellar arthrotomy. After exposition of the joint and removal of the meniscus, isolated medial or lateral osteoarthritis is confirmed. The functional integrity of the anterior cruciate ligament is checked. After this, the rest of the chondral layer on the medial or lateral femoral condyle, as well as the osteophytes indicated on the iView surgical plan, are removed. This step is crucial for correct placement of the individually designed instruments, since the surgical plan is based on the CT scan and therefore on the bony surfaces only. The correct position of the femoral jig (patient-specific instrument) is confirmed by comparison with the surgical plan. The next step consists of removing both the complete remains of the tibial cartilage and the marked osteophytes. Four different heights of balancer chips (1 mm steps) can be inserted into the knee to achieve appropriate ligament tension. The ligament tension must be appropriate in extension: on the medial side, a laxity of 1-2 mm is aimed for, and on the lateral side 2-3 mm. After achieving correct ligament tension, the tibial cutting guide is put on the selected balancer chip seated on the tibia. The correct position of the cutting guide is additionally confirmed by an alignment rod attached to the tibia, which has to be parallel to the tibial crest. The tibial resection can be performed after fixation of the tibial cutting guide. After removal of the tibial bone, the 8 mm spacer (the height of the tibial component and the inlay) is positioned in the knee, and the femoral jig is positioned on the femoral bone in contact with the spacer block. With this technique, the position is achieved in accordance with the bone and the ligament tension. After fixation of the femoral jig, the dorsal femoral resection can be performed. There is no distal femoral resection, as the implant is designed to replace only the distal femoral cartilage. Next, the trial is introduced and the joint play is evaluated over the complete range of motion. If satisfactory, the tibial preparation is finished, and the bone is prepared for cementation. Original implants are always cemented with a fixed-bearing inlay. If there is excessive joint laxity, a 2-millimeter-higher inlay is available [19].

Patient Follow-Up and Data Collection All patients are regularly followed up clinically and radiologically after joint arthroplasty in the three centers (after 6 weeks, 1 year, and then every 2 years). At every control visit, a clinical examination as well as radiography of the knee in two planes are performed. If patients do not show up to the appointment, they are reminded by phone call. If they cannot come to the appointment, they are asked by phone whether the prosthesis is still in situ or whether any revision surgery was performed. If patients do not answer, a letter is sent asking them to contact the physicians' office. Revision surgery was defined as exchange arthroplasty of the inlay or of the femoral and/or tibial implant components. For the study purposes, an evaluation of the patients' charts and already collected data was performed. After all the data were documented for each patient, an irreversible anonymization was undertaken. Ethical approval was obtained prior to the study (Ethikkommission an der Technischen Universität München, Germany, Study 250/21 S-EB). As only a retrospective analysis of already collected data was undertaken, with irreversible anonymization, informed consent was waived by the local ethics committee. In the study, a minimal follow-up of one year was required. The survival of the prosthesis was assessed, and Kaplan-Meier curves were calculated. A sub-analysis of medial and lateral UKAs was also performed. Furthermore, the antero-posterior and medial or lateral overhang of the tibial component of the prostheses was measured on the immediate postoperative X-rays.

Statistical Analysis All statistical analyses were performed using SPSS version 25 (SPSS, Armonk, NY, USA). Descriptive analyses are reported as means, SDs, and ranges for continuous variables, and as frequencies and percentages for discrete variables. Overall survivorship was determined using the Kaplan-Meier method, illustrated schematically below.
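The survivorship figures reported in the Results are product-limit estimates. As a purely illustrative sketch of how the Kaplan-Meier method works, the following self-contained Python snippet computes a survival curve from an invented toy cohort; the study itself used SPSS, and none of the numbers below are study data.

```python
def kaplan_meier(durations, revised):
    """Kaplan-Meier (product-limit) survival estimate.

    durations: follow-up time in years for each prosthesis
    revised:   True if the prosthesis was revised at that time,
               False if the observation was censored (still in situ)
    Returns (time, survival) steps:
        S(t) = product over event times t_i <= t of (1 - d_i / n_i),
    where d_i is the number of revisions at t_i and n_i the number at risk.
    """
    curve = [(0.0, 1.0)]
    surv = 1.0
    for t in sorted({d for d, e in zip(durations, revised) if e}):
        n_at_risk = sum(1 for d in durations if d >= t)   # still followed at t
        d_events = sum(1 for d, e in zip(durations, revised) if d == t and e)
        surv *= 1.0 - d_events / n_at_risk
        curve.append((t, surv))
    return curve

# Invented toy cohort: 8 knees, 2 revisions (at 2.5 and 4.5 years), 6 censored.
times = [1.2, 2.5, 3.0, 4.1, 4.5, 5.0, 5.0, 6.3]
event = [False, True, False, False, True, False, False, False]
for t, s in kaplan_meier(times, event):
    print(f"t = {t:4.1f} y   S(t) = {s:.3f}")
```

On this toy cohort the estimate steps from 1.000 to 0.857 at 2.5 years and to 0.643 at 4.5 years; reading the real curve at a given follow-up time is how the survivorship percentages quoted below are obtained.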
Preoperative Data The preoperative demographic variables of the patients, such as the radiographic state of the osteoarthritis according to the Kellgren and Lawrence classification [20], are shown in Table 1.

Follow-Up In total, 512 prostheses were available for follow-up (95.3%) at a mean of 4.5 years after surgery (1-10.8 years). Two patients had died (0.2%) and twenty-three (4.5%) were not available for follow-up for various reasons, such as disconnected telephone numbers or no answer on multiple attempts. In the patients with medial UKA, the follow-up rate was 95.7% (485/507 patients) at a mean of 4.6 years (SD 2.4) after surgery. In patients with lateral UKA, the follow-up rate was 90% (27/30 patients) at a mean of 4.2 years (SD 2.5).

Overall Survival Survival of the iUni UKA (both lateral and medial) is shown in Figure 2. Overall, survivorship after 4.5 years without revision for any reason was 96.0%. The reasons for revision are given in Table 2. If only revisions for mechanical failure (aseptic loosening, wear, and periprosthetic fracture) are considered, the survival rate after 4.5 years was 97.5% (Figure 3).

Survival of the Medial UKA Of the medial UKAs, 20 revisions out of 485 patients were performed after a mean of 4.5 years, corresponding to a survival rate of 95.8% (Figure 4).

Reasons for Revision In total, 20 revisions were performed (Table 2. Reasons for revisions of the iUni arthroplasty; all medial, no revisions of lateral knees). In nine cases, aseptic loosening led to revision, and in five cases an infection was the reason for revision. Tibial bone marrow edema (1, 0.2%), progressive osteoarthritis in the other compartments (1, 0.2%), infrapatellar contracture syndrome (revision at an external institution; 1, 0.2%), and an unreported cause (revision at an external institution; 1, 0.2%) accounted for further revisions.

Radiological Analysis Immediate postoperative X-rays were stored in the patient charts and consequently available for work-up in 431 (80.3%) of the 537 initial patients. In four (0.9%) prostheses, there was a medial tibial overhang of up to 3 mm. None had a relevant anteroposterior overhang. Two prostheses (0.5%) had an overhang of 1 mm, one of 2 mm (0.2%), and one of 3 mm (0.2%). In 404 (79.7%) patients of the medial group, postoperative X-rays were available, with three (0.7%) prostheses showing an overhang of up to 3 mm. In the lateral group, X-rays were available in all patients.
Of these, there was one patient with a lateral overhang of 2 mm (3%). Figure 5 shows the postoperative X-ray of a lateral UKA (Figure 5. Postoperative X-ray of a patient-specific lateral unicondylar knee prosthesis).

Discussion The present study evaluated the outcomes of patient-specific UKA for isolated medial or lateral osteoarthritis. Although UKA yields good clinical outcomes, revision rates are relatively high compared to total knee arthroplasty, partly due to poor fitting of conventional off-the-shelf prostheses, resulting in possible overhang of the tibial component over the bone in up to 20% of cases. This retrospective study analyzed 537 patient-specific UKAs (507 medial and 30 lateral) implanted in three centers over a decade, with a minimal follow-up of 12 months (range: 12-129 months), and is the largest available study on patient-specific UKA. In essence, this study showed a high survival rate for patient-specific unicondylar knee replacement of 96% in 512 knees, and of 97% when considering mechanical failure alone, at a midterm follow-up of 4.5 years. Moreover, the theoretical advantage of an excellent fit of the tibial component of the prosthesis to the bone [14] was also demonstrated, with less than 1% of patients showing a tibial overhang of more than 1 mm.

The UKA revision rate is higher compared to TKA. In the most recent report of the German Arthroplasty registry, a revision rate of 7% is reported for UKA after 5 years [8]. In the Australian registry (AAONR), the revision rate at 5 years is comparable at 6.5%, and also double the TKA revision rate, which is also the case in the NJR [21,22]. In comparison to these registry data, the present study showed favorable results for an individually designed UKA, with a revision rate of 4% at 5 years. Furthermore, the most impactful data investigating implant survival are currently retrieved from joint replacement registries, since very large numbers can be assessed over time. However, thus far, no registry data have been available for patient-specific UKAs. This emphasizes the importance of performing individual studies with large patient numbers and high follow-up rates. The present study is the largest analysis of the iUni implant, with more than 500 cases involved and a follow-up rate of 95.3%. Thus, the present study is, although limited by its retrospective character, the most robust analysis currently available of implant survival of the patient-specific UKA.

The PSI technique can be compared to modern robotically assisted implantation. A recent study from one center with 1000 knees showed a very high survival rate at 5 years for robotically assisted UKA of 98%, excluding inlay exchanges [23]. This survival rate is approximately comparable to the 97% survival rate considering mechanical failure alone observed in this study. The good survival of robotically assisted UKA is confirmed in a recent study with data from the Australian registry (AAONR). At 3 years, robotically assisted UKA had a revision rate of 2.6%, which was half that of non-robotic UKA (5.0% at 3 years). The best-performing non-robotic UKA reached a revision rate of 3.7% [24].
Again, the PSI in this study showed results comparable to those of robotically assisted UKAs and of the best non-robotic UKA implant. There are, to the knowledge of the authors, two studies reporting the results of patient-specific UKA. In the study of Pumilia, 349 knees (same implant as in the present study) were analyzed at a follow-up of 4.8 years, with a survival rate of 97.8%, slightly better than the results of this study. However, the follow-up rate was less than 70%, which is a potential bias and could have influenced the results [25]. A smaller study, also using the iUni by Conformis, reported a 100% survival rate of 31 medial UKAs after a short-term follow-up of 2.4 years [26].

The present study also included 30 lateral UKAs in 30 patients, with a survival rate of 100% at 4.2 years. The lateral compartment of the knee is biomechanically and anatomically different from the medial compartment. Most commercially available unicompartmental implants are not designed specifically for the lateral compartment, and therefore fitting the prosthesis in the lateral compartment is even more difficult. Furthermore, lateral UKA is performed less frequently, which also makes it more challenging. The literature with a follow-up of more than 5 years is sparse, reporting a survival rate of 84-100% for fixed-bearing knees and 79-92% for mobile-bearing knees [27]. Analysis of registry data from the National Joint Registry for England, Wales, Northern Ireland and the Isle of Man revealed 93% survival of 2052 lateral UKAs at 5 years [28]. In contrast, a study by Demange et al., using the same lateral PSI implant as in the present study in 33 patients, showed a high survival rate of 97% at 3 years and better tibial fitting in comparison to a conventional implant; survival in the conventional lateral UKA group of that study was 85% [29]. The present study confirms the favorable results of PSI, especially in lateral unicompartmental osteoarthritis, albeit in a limited number of patients.

Tibial fitting of the prosthesis is important, as Chau et al. found that an overhang of >3 mm resulted in poorer clinical outcomes on the medial side [30]; in their study, an overhang of >3 mm was found in 10% of the patients. Undersizing is also undesirable, as the prosthesis is then placed only on weaker cancellous bone, increasing the risk of implant migration and loosening. In a more recent analysis, it was even shown that an overhang of more than 3 mm leads to an increased revision rate of up to 20%, compared to 3% in patients with minor overhang, at a follow-up of five years [13]. The present study showed a very good fit of the patient-specific prostheses, with no overhang of more than 3 mm and only 1% with more than 1 mm. Thus, our study may corroborate the hypothesis that avoidance of tibial overhang is correlated with higher survival rates. The good fit of the patient-specific tibial component should thus improve survival rates and clinical outcomes.

One possible concern with PSI is the radiation to which patients are exposed through the preoperative computed tomography (CT) scan. Modern CT scanners deliver an effective dose between 1 and 7 millisievert (mSv), depending on the organ and the technique. The effective dose for a CT scan of the knee is reported to be 1.3 mSv [31]. In the protocol for PSI, a few slices have to be acquired at the hip joint, with a slightly higher dose.
The average effective dose from environmental sources is estimated to be 2.4 mSv per year in central Europe, ranging from 1 to 10 mSv depending on activity and the exact living area. Radiation from medical exposures adds, on average, an extra 2 mSv, with variations depending on age and medical condition. The radiation applied in the preoperative CT scan is therefore not negligible. However, it has to be weighed against the potential advantages of PSI. If the implants lead to lower revision rates, there will be reduced radiation for patients, who will not need multiple X-rays before and after revision surgery. Furthermore, in conventional knee arthroplasty, a preoperative whole-leg X-ray is mandatory, and the radiation of these images is not negligible either. This radiation is not necessary in patients receiving PSI implants, since the leg axis is also determined from the CT. Finally, in the opinion of most experts, the risk of developing a disease through CT scanning of the thorax or the abdomen in patients aged over 65 years is negligible [32,33]. Therefore, the much lower radiation dose of a CT scan of the extremity is probably irrelevant in these patients. Considering the potential advantages of PSI, the radiation required for a preoperative CT scan to plan the PSI implants is justifiable in the eyes of the authors.

In spite of the large patient sample, this study has some limitations. First, it was a retrospective analysis with no comparison group. However, it was a consecutive series with a large number of included patients, and the follow-up rate of more than 95% is very high, resulting in a robust data set. Second, the number of patients receiving a lateral UKA was relatively small, owing to the significantly rarer indication for lateral UKA. Large case numbers can most likely be obtained through registry analysis, which should also become available for patient-specific implants in the future. A further limitation of PSI is its cost, which is 2-2.5-fold higher than that of conventional implants, depending on the country. On the other hand, PSI requires no additional instrument trays, which reduces the costs of sterilization and logistics and saves time. If PSI leads to fewer revisions, as in the present study, there is further potential for savings by investing more in the implant during primary surgery. In the future, implant costs should also decrease as modern 3D printing lowers production costs and as wider use of PSI brings economies of scale. Considering all these facts, the costs are probably only slightly higher than those of conventional implants, although detailed information on the exact extra costs is missing. Furthermore, the observed mean follow-up is only 5 years. Nevertheless, it is important to analyze the results at this point as well, since potential advantages, disadvantages, and risks of a device can already be observed earlier. The present study showed a survival rate better than that of most UKAs in use at mid-term follow-up, and it is likely that this difference will also be observed in the longer term, justifying the continued use of patient-specific UKA. Finally, the study does not allow conclusions about functional results, since no clinical scores were included in the analysis. This was not the aim of the study, which focused on implant survival.
There have been many studies showing excellent clinical results in unicompartmental knee replacement [25,26,34], including for the implant used in this study. The patients of this study are followed very closely in the centers after UKA, and the low revision rates suggest that the clinical results are also satisfactory. Nevertheless, future studies including clinical results are mandatory.

Conclusions
The present study is the largest analysis of patient-specific UKA, with more than 500 prostheses analyzed retrospectively. The survival rate of 96% at 4.5 years (97.5% if considering mechanical failure alone) is excellent in comparison to the literature and comparable to robotic-assisted UKA. Lateral UKA is a more complex procedure with a higher risk of revision. The use of a patient-specific implant in this study showed a 100% survival rate at 4.2 years in 30 lateral knees; these results should be confirmed in the future in a larger number of patients.
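The survivorship figures above are of the kind produced by Kaplan-Meier estimation. As a minimal methodological sketch (not the study's analysis), the lifelines Python library can compute such an estimate; the data below are random toy values, not the study's dataset:

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
# Toy data only (not the study's dataset): follow-up time in years and
# whether a revision occurred (True) or the implant was censored (False).
durations = rng.uniform(1, 10, size=500)
events = rng.random(500) < 0.04

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)
print(kmf.predict(4.5))  # estimated survival probability at 4.5 years
```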
6,440
2023-04-01T00:00:00.000
[ "Medicine", "Engineering" ]
Rotor-bearing vibration control system based on fuzzy controller and smart actuators
Most rotating machines, especially those mounted on flexible shafts and bearings, tend to pass through critical speeds during start-up, i.e., speeds that can drive the mechanical structure into resonance. Hence, there is constant interest in effective methods to mitigate vibration when passing through such speeds. Recently, materials made from special alloys have been studied as actuators in dynamic systems in order to reduce vibrations in a frequency range around the resonance region. One such direction is the use of components made of active materials such as Shape Memory Alloys (SMAs), considered "smart" materials, which recover their original shape upon changes in temperature and/or mechanical stress and whose main characteristic is a high damping capacity at increased vibration levels. This paper presents a rotor-bearing vibration control system based on SMA coil-spring actuators. A fuzzy controller is used to control the vibration of the system based on measurements around the critical speeds. The experimental results demonstrate the effectiveness of the system, with amplitude reductions of up to 60% during passage through the resonance region.

INTRODUCTION
Most rotating machines, especially those built with flexible shafts and rotor-bearings, can pass through critical speeds during start-up, which may drive the system into resonance. Resonance acts as a mechanical amplifier; it may lead the system to collapse and cause serious material and/or human damage. Therefore, there is constant interest in effective methods to attenuate vibration when mechanical systems pass through such critical speeds. *Corresponding author. E-mail<EMAIL_ADDRESS>A common method of attenuation is to increase the acceleration rate when passing through the critical speeds. However, this approach is inefficient because it requires more power delivered to the machine to provide a high acceleration rate [1]. Another method, according to these authors, is to raise the machine damping, which is not an easy task. Many works have been proposed in the literature using passive or active-adaptive control.

The wide applicability of SMAs in vibration control of machines and structures is highlighted in [2], where their potential for applications involving large forces and/or deformations is indicated. In [3], the design, numerical analysis, and optimization of an adaptive vibration absorber are presented. The authors based their studies on an assumed-transformation-kinetics model, which considers, in addition to deformation and temperature T, a scalar internal variable representing the volume fraction of the martensitic phase. Another control device was proposed in [5], which makes use of the principle of state-switched absorption, known as the State-Switched Absorber (SSA): a system that can switch quickly between resonance frequencies compared to classical TVAs. In [1], a theoretical model of a system of SMA springs for use in rotating machines is proposed, whose equations are based on the dynamic vibration absorber control model.
In this direction lies the use of Shape Memory Alloy (SMA) actuators, a smart material (metallic alloy) able to return to its original shape upon changes in temperature and/or mechanical stress, having as main characteristics a high damping capacity and a stiffness that increases with temperature (heating). This paper shows the results of using SMA actuators (springs) to reduce vibration in an active rotor-bearing (pedestal bearing) system using a fuzzy controller.

THE ROTOR-BEARING SYSTEM
The rotor-bearing system represented in Figure 1(a) can be considered a vibration absorber (Figure 1(b)). In this work, the system was treated as a vibration absorber in which the mass of the pedestal bearing corresponds to the secondary system, while the mass of the rotor-bearing is equivalent to the primary system. The values of mass and stiffness of the absorber are chosen so that the motion of the primary mass is minimized.

The dynamical equation of the system, written in matrix form [6] for the situation shown in Figure 1, is

$$\begin{bmatrix} m & 0 \\ 0 & m_a \end{bmatrix}\begin{Bmatrix} \ddot{x} \\ \ddot{x}_a \end{Bmatrix} + \begin{bmatrix} c + c_a & -c_a \\ -c_a & c_a \end{bmatrix}\begin{Bmatrix} \dot{x} \\ \dot{x}_a \end{Bmatrix} + \begin{bmatrix} k + k_a & -k_a \\ -k_a & k_a \end{bmatrix}\begin{Bmatrix} x \\ x_a \end{Bmatrix} = \begin{Bmatrix} F_0 \sin(\omega t) \\ 0 \end{Bmatrix} \quad (1)$$

where $x(t)$, $\dot{x}(t)$, $\ddot{x}(t)$ are the displacement, velocity, and acceleration of the primary mass; $x_a(t)$, $\dot{x}_a(t)$, $\ddot{x}_a(t)$ are the displacement, velocity, and acceleration of the absorber mass; $m$ is the primary mass; $m_a$ is the absorber mass; $c$ is the damping of the primary system; $c_a$ is the damping of the absorber; $k$ is the stiffness of the primary system; $k_a$ is the stiffness of the absorber; $F_0$ is the amplitude of the excitation signal; $\omega$ is the excitation frequency; and $t$ is the time.

Equation (2) gives the steady-state solution in exponential form, with $X$ the vibration amplitude of the primary mass and $X_a$ the vibration amplitude of the absorber mass:

$$x(t) = X e^{j\omega t}, \qquad x_a(t) = X_a e^{j\omega t} \quad (2)$$

Substituting (2) into (1) and solving the system, one finds as the frequency response (for the usual case of negligible primary damping, $c \approx 0$):

$$\left(\frac{X k}{F_0}\right)^2 = \frac{(2\zeta r)^2 + (r^2 - f^2)^2}{(2\zeta r)^2 (r^2 - 1 + \mu r^2)^2 + \left[\mu f^2 r^2 - (r^2 - 1)(r^2 - f^2)\right]^2} \quad (3)$$

where $\zeta = c_a/(2 m_a \omega_n)$ is the damping ratio, $r = \omega/\omega_n$ is the ratio of the excitation frequency to the primary natural frequency, $\omega_n = \sqrt{k/m}$ is the natural frequency of the primary mass, $\mu = m_a/m$ is the ratio of the absorber mass to the primary mass, $\omega_a = \sqrt{k_a/m_a}$ is the natural frequency of the absorber, and $f = \omega_a/\omega_n$ is the ratio of the decoupled natural frequencies.

THERMOMECHANICAL BEHAVIOR OF THE SMA SPRINGS
Several models have been proposed to adequately describe the thermomechanical behavior of SMA springs, following both microscopic (metallurgical) and macroscopic (phenomenological) approaches [3]. The thermomechanical model used in this work was proposed by [7] and has recently been applied by [8]. With it, a relationship between the stiffness and the temperature of an SMA spring can be established.

In order to validate the model, a Nickel-Titanium (Ni-Ti) spring was subjected to a stiffness-versus-temperature test in an Instron® universal testing machine (Figure 2). Figure 3 shows the simulation results and the experimental data for the variation of the SMA spring stiffness with temperature. Based on the results of Figure 3, one can verify that the mathematical model is valid for the SMA spring used in the laboratory. If the percentages of Ni and Ti in the alloy were different, other values of the transformation temperatures and, consequently, of the modulus would be obtained. Furthermore, comparing the experimental results with the theoretical model, the model approximates the curve behavior well during cooling, while the agreement is less clear during heating. The reason for this effect is the behavior of the alloy during the austenite-to-martensite transformation.
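To make the stiffness-temperature relation concrete, the following minimal Python sketch assumes the cosine-type phase-fraction model of Liang and Rogers, a common phenomenological choice; the transformation temperatures, shear moduli, and spring geometry are illustrative placeholders, not the properties of the spring tested above:

```python
import numpy as np

# Illustrative material/geometry parameters (placeholders, not the tested spring)
G_M, G_A = 7.5e9, 25e9       # shear moduli of martensite / austenite [Pa]
A_s, A_f = 40.0, 60.0        # austenite start / finish temperatures [deg C]
d, D, n = 1.5e-3, 12e-3, 10  # wire diameter, mean coil diameter, active coils

def martensite_fraction_heating(T, xi0=1.0):
    """Cosine-type phase-fraction model (Liang-Rogers) on heating."""
    if T <= A_s:
        return xi0
    if T >= A_f:
        return 0.0
    a_A = np.pi / (A_f - A_s)
    return 0.5 * xi0 * (np.cos(a_A * (T - A_s)) + 1.0)

def spring_stiffness(T):
    """Helical-spring stiffness k = G d^4 / (8 D^3 n), with G mixed by phase fraction."""
    xi = martensite_fraction_heating(T)
    G = xi * G_M + (1.0 - xi) * G_A   # rule-of-mixtures shear modulus
    return G * d**4 / (8.0 * D**3 * n)

for T in (25.0, 45.0, 55.0, 70.0):
    print(f"T = {T:5.1f} C  ->  k = {spring_stiffness(T):8.1f} N/m")
```

The stiffness rises from its martensite value to its austenite value across the transformation band, which is exactly the effect the controller exploits to shift the natural frequency.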
The theoretical model proposed in [7] considers the phase fractions of martensite and austenite to change only between the initial and final austenite transformation temperatures and between the initial and final martensite transformation temperatures.

CONTROL SYSTEM
The main idea of a vibration control system is to keep the system away from resonance, and this can be done in several ways. For systems whose stiffness and damping vary with temperature, vibration control is directly related to controlling the temperature of the elements that provide stiffness and damping. For the system studied, which uses SMA springs, the spring temperature can be associated with the natural frequency. Therefore, controlling the temperature of the SMA springs allows the natural frequency of the system to be adjusted, which results in low vibration amplitudes; i.e., it is necessary to control the temperature of the springs to reduce the vibration of the main mass.

The diagram of the temperature control of the SMA springs is shown in Figure 4. In the diagram, the controller sends information that activates the actuators to change the temperature of the springs, with the goal of changing their stiffness when passing through resonance and thereby reducing vibration. The control strategy is based on the information from a vibration sensor placed on the SMA bearing: starting from an admissible vibration amplitude, it sends information to trigger heating (electrical current through the springs) or cooling (fans blowing air over the springs). The control logic for the springs is based on a fuzzy controller implemented in LabVIEW.

EXPERIMENTAL RESULTS
The assembled physical shaft-rotor model consists of two rolling bearings (one mounted in a rolling-bearing housing, on the right, and the other in a rolling-bearing absorber with four SMA springs, on the left), a steel shaft with a disc and mass attached, and a 0.5 cv electric motor supplied by a voltage-source power converter, as shown in Figure 5. Table 1 shows the parameters of the system. Tests were performed with the motor rotation varied from 0 to 40 Hz at an acceleration rate of approximately 1 rad/s² (ramp time = 241 s), with and without actuation of the fuzzy controller. The results achieved are presented in Figures 6 and 7.

Figure 6(a) shows the amplitudes when the temperature of the SMA springs was 22 °C; an increase in amplitude is observed when the system reaches resonance in the martensite phase ($fn_m$ = 27 Hz). Figure 6(b) shows the amplitudes when the springs were heated to 60 °C; an increase in amplitude is also observed when the system reaches resonance in the austenite phase ($fn_a$ = 30 Hz). The actuation of the fuzzy controller is shown in the vibration signal (Figure 7(a)), with the corresponding temperature variation in Figure 7(b). A reduction of up to 60% in amplitude was found when passing through the natural frequency, by shifting from the martensite phase (low-stiffness springs) to the austenite phase (high-stiffness springs), relative to the system without control. The heating of the springs in the martensite phase and their sudden cooling from 60 °C (austenite phase) confirm the effectiveness of the SMA spring actuators and of the fuzzy logic temperature control.
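As an illustration of this control logic, here is a minimal sketch of a fuzzy temperature controller using the scikit-fuzzy library; the paper's controller was implemented in LabVIEW, and the universes, membership functions, and rules below are illustrative assumptions rather than the authors' rule base:

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Universes are illustrative assumptions, not the authors' LabVIEW rules
amp = ctrl.Antecedent(np.linspace(0.0, 1.0, 101), 'amplitude')       # normalized vibration
temp = ctrl.Antecedent(np.linspace(20.0, 70.0, 101), 'temperature')  # spring temperature [C]
act = ctrl.Consequent(np.linspace(-1.0, 1.0, 201), 'actuation')      # -1 = cool ... +1 = heat

amp['low'] = fuzz.trimf(amp.universe, [0.0, 0.0, 0.4])
amp['high'] = fuzz.trimf(amp.universe, [0.3, 1.0, 1.0])
temp['cold'] = fuzz.trimf(temp.universe, [20.0, 20.0, 45.0])   # martensite side
temp['hot'] = fuzz.trimf(temp.universe, [45.0, 70.0, 70.0])    # austenite side
act['cool'] = fuzz.trimf(act.universe, [-1.0, -1.0, 0.0])      # switch the fans on
act['hold'] = fuzz.trimf(act.universe, [-0.2, 0.0, 0.2])
act['heat'] = fuzz.trimf(act.universe, [0.0, 1.0, 1.0])        # current through the springs

rules = [
    ctrl.Rule(amp['low'], act['hold']),                  # small vibration: do nothing
    ctrl.Rule(amp['high'] & temp['cold'], act['heat']),  # martensite resonance: stiffen
    ctrl.Rule(amp['high'] & temp['hot'], act['cool']),   # austenite resonance: soften
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['amplitude'] = 0.8
sim.input['temperature'] = 25.0
sim.compute()
print(sim.output['actuation'])   # positive output -> heat the SMA springs
```

The key design choice mirrors the paper: the controller does not regulate temperature to a setpoint but uses it to move the natural frequency away from whichever resonance (27 Hz or 30 Hz) the ramp is currently approaching.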
CONCLUSIONS
This work presented a basic theoretical-experimental study of vibration control in a rotor-bearing system with a rolling-bearing absorber using SMA springs. The vibration control, based on a fuzzy controller with rules stated as a function of the temperature variation, showed that when the SMA springs were activated in the resonance zone, the system was able to reduce the vibration amplitude by 60% relative to operation without any control.

Figure 1. Representation of the dynamic rotor-bearing system.
Figure 2. SMA spring during the characterization test in an Instron machine.
Figure 3. Variation of SMA spring stiffness with temperature.
Figure 4. Representation of the control system.
Figure 6. Vibration amplitudes with the SMA springs (a) at 22 °C, with resonance in the martensite phase (fn_m = 27 Hz), and (b) heated to 60 °C.
Figure 7. Performance of the vibration control system: (a) vibration waveform and (b) temperature signal.
Table 1. System parameters.
2,524.2
2013-09-03T00:00:00.000
[ "Engineering", "Materials Science", "Computer Science" ]
Location of the QCD critical point predicted by holographic Bayesian analysis
We present results for a Bayesian analysis of the location of the QCD critical point constrained by first-principles lattice QCD results at zero baryon density. We employ a holographic Einstein-Maxwell-dilaton model of the QCD equation of state, capable of reproducing the latest lattice QCD results at zero and finite baryon chemical potential. Our analysis is carried out for two different parametrizations of this model, resulting in confidence intervals for the critical point location that overlap at one sigma. While samples of the prior distribution may not even predict a critical point, or produce critical points spread around a large region of the phase diagram, posterior samples nearly always present a critical point at chemical potentials of $\mu_{Bc} \sim 550 - 630$ MeV.

Introduction
Exploring the QCD phase diagram is one of the major goals of experimental programs at RHIC and FAIR [1]. While the hadronic and quark-gluon plasma (QGP) phases are smoothly connected at low values of the baryon density [2], where first-principles lattice QCD simulations are feasible, a first-order transition line is conjectured at high densities, starting at a second-order critical endpoint (CEP).

Here, we aim to extrapolate knowledge from lattice QCD, available at lower baryon chemical potential $\mu_B$, to draw predictions for the QCD CEP expected at large values of $\mu_B$. For that, we employ a holographic model of the QCD equation of state, which is capable of reproducing lattice QCD results and is compatible with findings on QGP properties from the phenomenology of relativistic heavy-ion collisions [3]. By using Bayesian inference tools, we perform a systematic scan over model realizations, selecting those that reproduce a set of lattice QCD constraints at zero density [4,5] with a probability given by the respective error bars. We then compute predictions for the QCD CEP corresponding to each of the selected models to find an a posteriori probability distribution for the CEP location.

Our description of the QCD equation of state is based on the gauge/gravity correspondence, which allows us to use dual black holes in a 4+1 dimensional asymptotically anti-de Sitter bulk spacetime to describe the physics of a thermal, strongly coupled field theory sitting on the 3+1 Minkowski boundary of that geometry. More precisely, we employ a bottom-up Einstein-Maxwell-dilaton (EMD) model [3], in which a Maxwell field $A_\mu$ is used to endow the dual black holes with baryon number, while a dilaton scalar field $\phi$ is used to break conformal invariance and shape the renormalization group flow of the theory. The action of our theory is given by

$$S = \frac{1}{2\kappa_5^2} \int d^5x \, \sqrt{-g} \left[ R - \frac{(\partial_\mu \phi)(\partial^\mu \phi)}{2} - V(\phi) - \frac{f(\phi)}{4} F_{\mu\nu} F^{\mu\nu} \right],$$

where $g$ is the determinant of the metric, $R$ is the Ricci scalar, $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the Maxwell field strength, and $V(\phi)$ and $f(\phi)$ are potentials which are tuned to reproduce QCD physics. To assess the robustness of our results, we employ two different parametrizations of the potentials $V(\phi)$ and $f(\phi)$:

1. Polynomial-hyperbolic Ansatz (PHA): A more traditional parametrization, similar to the one used in [6].
2. Parametric Ansatz (PA): A parametrization in which parameters directly control plateaus and exponential slopes in the potentials [7].

Holographic models of this kind have successfully reproduced lattice QCD results at intermediate temperatures $T \sim 100 - 500$ MeV at both vanishing and finite baryon density [3]. They can also predict a CEP in the QCD phase diagram and naturally describe the nearly inviscid nature of the QGP observed in high-energy heavy-ion collisions [8].

Bayesian analysis
We wish to scan over realizations of the model parametrizations above to generate an ensemble of models distributed according to the error bars on lattice QCD results. We thus employ Bayes' theorem to find the posterior probability over model parameters, $\vec{\theta}$, given the lattice QCD constraints $\vec{d}$:

$$P(\vec{\theta} \,|\, \vec{d}) = \frac{P(\vec{d} \,|\, \vec{\theta})\, P(\vec{\theta})}{P(\vec{d})}, \quad (6)$$

where $P(\vec{\theta})$ is the prior probability distribution over model parameters, $P(\vec{d} \,|\, \vec{\theta})$ is the likelihood, and we treat $P(\vec{d})$, known as the evidence, as a normalization factor for the posterior. The likelihood is a correlated Gaussian,

$$P(\vec{d} \,|\, \vec{\theta}) \propto \exp\left[-\frac{1}{2} \sum_{i,j} \left(d_i - p_i(\vec{\theta})\right) \left(\Sigma^{-1}\right)_{ij} \left(d_j - p_j(\vec{\theta})\right)\right], \qquad \Sigma_{ij} = \sigma_i \, \Lambda_{ij} \, \sigma_j,$$

where $d_i$, $\sigma_i$ and $p_i(\vec{\theta})$ represent, respectively, the i-th point in the lattice results under consideration, the corresponding error, and the prediction for that point given model parameters $\vec{\theta}$. The matrix $\Lambda_{ij}$ models correlations between different points, governed by an extra parameter $\Gamma \in (-1, 1)$, which measures correlations between neighboring points.

To draw samples from the posterior in Eq. (6), we employ a Markov chain Monte Carlo (MCMC), in which parameter sets $\theta^{(n)}$ are randomly modified at each iteration to find $\theta^{(n+1)}$, in such a way that eventually $\theta^{(n \to \infty)}$ becomes distributed according to the target probability distribution $P(\vec{\theta} \,|\, \vec{d})$. In particular, we use differential evolution MCMC [9]. After a sufficiently large number of iterations, the equilibrium probability distribution $P(\vec{\theta} \,|\, \vec{d})$ is reached, and we can obtain samples of the posterior. As inputs for our Bayesian analysis, we take the latest lattice QCD results for the entropy density and baryon susceptibility at vanishing baryon density from the Wuppertal-Budapest collaboration [4,5]. More details on this analysis and the MCMC implementation can be found in the supplemental materials of Ref. [7].

Results
Finally, we compute the predictions for the QCD CEP location from the posterior samples and derive confidence levels for its location on the QCD phase diagram. For each parameter set or sample, we find the corresponding location of the CEP by following the procedure outlined in Ref. [7]. Results are shown in Fig. 1, where confidence levels for the critical temperature, $T_c$, and baryon chemical potential, $\mu_{Bc}$, are shown alongside the posterior distribution for the corresponding beam energy, extracted from $\mu_{Bc}$ with the parametrization from Ref. [10]. CEP locations for the prior distribution are shown as crosses in the left panel.
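To illustrate the sampling scheme, the following is a minimal NumPy sketch of differential evolution MCMC in the spirit of [9], applied to a toy Gaussian posterior rather than the holographic EMD likelihood (which requires solving the bulk equations of motion at each step); the jump scale gamma = 2.38/sqrt(2d) is the standard DE-MC choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    """Toy stand-in for log P(theta | d): an uncorrelated standard Gaussian."""
    return -0.5 * np.sum(theta**2)

n_chains, n_dim, n_steps = 10, 4, 5000
gamma = 2.38 / np.sqrt(2 * n_dim)              # standard DE-MC jump scale
chains = rng.normal(size=(n_chains, n_dim))    # initialize from a broad prior (toy)
logp = np.array([log_post(c) for c in chains])
samples = []

for step in range(n_steps):
    for i in range(n_chains):
        # pick two distinct other chains to build the difference vector
        a, b = rng.choice([j for j in range(n_chains) if j != i], size=2, replace=False)
        proposal = chains[i] + gamma * (chains[a] - chains[b]) \
                   + 1e-4 * rng.normal(size=n_dim)   # small jitter term
        logp_prop = log_post(proposal)
        if np.log(rng.random()) < logp_prop - logp[i]:   # Metropolis accept/reject
            chains[i], logp[i] = proposal, logp_prop
    if step > n_steps // 2:                # discard burn-in
        samples.append(chains.copy())

samples = np.concatenate(samples)
print(samples.mean(axis=0), samples.std(axis=0))  # ~0 and ~1 for the toy target
```

Differential evolution MCMC is convenient here because the jump proposals adapt automatically to the scale and correlations of the posterior, which is useful when model parameters have very different units and ranges.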
Conclusions
We have presented results for the first Bayesian analysis of the phase diagram of QCD constrained by first-principles lattice QCD results at zero baryon density. The posterior distribution of CEP locations was computed for two different parametrizations of a holographic EMD model. We find that imposing agreement with lattice QCD tightly constrains predictions for the QCD CEP location, which were spread all around the phase diagram in the unconstrained prior. Moreover, the bands for the CEP location in the two models overlap within one sigma, indicating the robustness of our results against parametrization choices. While 20% of the prior predicts no CEP, a CEP is found in nearly all of the posterior, indicating that a CEP is statistically favored [7].

Figure 1. Prior and posterior distributions for the CEP location in the PHA (red) and PA (blue) parametrizations. Left: Histograms for the critical temperature, $T_c$, and baryon chemical potential, $\mu_{Bc}$, and the corresponding 68% and 95% confidence levels in the posterior, together with critical point locations sampled from the prior (crosses). Right: Probability density function for the center-of-mass energy corresponding to $\mu_{Bc}$, according to the freeze-out line of Ref. [10].
1,590.4
2023-12-15T00:00:00.000
[ "Physics" ]
Separation and Extraction of Compound-Fault Signal Based on Multi-Constraint Non-Negative Matrix Factorization
To separate multi-source signals and detect their features from a single channel, a signal separation method using multi-constraint non-negative matrix factorization (NMF) is proposed. Since the existing NMF algorithm does not perform well in underdetermined blind source separation, β-divergence and determinant constraints are introduced into the NMF algorithm; by constraining the objective function, they enhance local feature information and reduce redundant components. In addition, the Sine-bell window function is selected for the short-time Fourier transform (STFT), as it preserves the overall feature distribution of the original signal. The original vibration signal is first transformed into the time-frequency domain with the STFT, which describes the local characteristics of the signal through its time-frequency distribution. Then, the multi-constraint NMF is applied to reduce the dimensionality of the data and separate the feature components in the low-dimensional space. Meanwhile, the parameter WK is constructed to select among the reconstructed signals recombined from the feature components in the time domain. Ultimately, the separated signals are subjected to envelope spectrum analysis to detect fault features. The simulated and experimental results indicate the effectiveness of the proposed approach, which realizes the separation of multi-source signals and the fault diagnosis of bearings. It is also confirmed that the proposed method, compared with NMF algorithms using traditional objective functions, is more applicable to the compound fault diagnosis of rotating machinery.

Introduction
Vibration signal analysis of rotating machinery has been widely used in the field of fault diagnosis because the signals carry information about the operational state of the equipment [1,2]. However, given the limitations on the number and installation locations of sensors, the information obtained from the signals is limited [3,4]. Moreover, the non-stationary nature of the collected signals and the interference between multi-source fault signals and environmental noise may often obscure the feature information. Therefore, the separation and extraction of compound faults based on vibration analysis is of great significance [5,6].

There are many analysis methods based on vibration signals, such as feature extraction, pattern recognition, and deep learning. For example, Wang et al. [7] proposed a fault diagnosis method based on a sparsity-guided empirical wavelet transform, which can detect single and multiple bearing faults in railway axles. Lu et al. [8] introduced a method combining the wavelet transform and K-means clustering to predict battery state of health. Alimardani et al. [9] presented an approach based on vibration signals to diagnose rotor eccentricity faults. Zhang et al. [10] developed a method based on the local outlier factor and improved adaptive matching pursuit, which can detect and recover anomalous vibration signals. Li et al. [11] presented an adaptive data fusion strategy based on deep learning with a convolutional neural network, validated on an industrial fan system with non-manufacturing faults and on a centrifugal pump. Łuczak [12] proposed a method named CWTx6-CNN, which offers a clear representation of fault-related features. Wang et al.
[13] introduced a novel fault recognition method based on multi-sensor data fusion and a bottleneck-layer-optimized convolutional neural network (MB-CNN), realizing the identification and classification of multiple bearing faults. Analysis methods based on vibration signals mostly focus on low-dimensional analysis [14], and the information obtained from the original signal is limited. This calls for a dimensionality transformation of the one-dimensional vibration signal, so that the multidimensional signal can be observed and otherwise hidden information revealed. At the same time, local feature information can be enhanced significantly by such a transformation [15,16].

In the past few decades, many dimensionality transformation methods have been proposed and widely applied in fields such as signal separation, image clustering, biological information extraction, behavior feature recognition, and environmental perception and prediction [17-20]. These methods can not only reduce the dimensionality of data but also extract salient features from high-dimensional data effectively. They also benefit subsequent data processing and enable low-dimensional visualization of the data. Traditional dimensionality transformation algorithms seek the intrinsic linear structure of the data in a low-dimensional space [21,22]. However, the internal structure of most data is complex and nonlinear, and the dimensionality of various types of data continues to grow at an extremely fast pace. Therefore, extracting effective features and improving the ability to analyze such data is of clear value. Machine learning algorithms based on matrix factorization are key technologies for several types of problems in this field, including dictionary learning, non-negative matrix factorization (NMF), concept factorization, and matrix completion [23-25]. Among them, the NMF algorithm has attracted much attention in feature extraction due to its advantages of interpretability and scalability [26]. For example, Zhang et al. [27] proposed a weighted NMF algorithm that achieves image clustering by optimizing three parameters of the algorithm. Gu et al. [28] introduced a method combining an improved NMF algorithm with global positioning system data to identify the sources driving ground deformation. Luo et al. [29] developed a novel approach based on robust ensemble manifold projective NMF for image representation. Saha et al. [30] used a privacy-preserving NMF algorithm to provide privacy guarantees. Li et al. [31] adopted a deep autoencoder-like NMF method for link prediction. In addition, the NMF algorithm performs well in the field of biomedicine. Marta et al. [32] proposed a negative binomial NMF algorithm that captures the variation across patients to extract mutational signatures. Tu et al. [33] proposed a hypergraph-regularized joint deep semi-NMF algorithm to identify biomarkers of Alzheimer's disease. Nasrin et al. [34] put forward a model based on an improved NMF algorithm that can recognize native decoys in protein structure prediction.
It can be observed that the NMF algorithm has been applied in many fields and has achieved remarkable results since it was proposed. However, there is still room to improve the NMF algorithm, especially for the blind source separation problem arising in the diagnosis of compound faults in rotating machinery. Therefore, to separate multi-source signals and detect their features from a single channel, a signal separation method based on a multi-constraint NMF algorithm is proposed. By exploiting the flexibility of the β-divergence and the uniqueness imposed by the determinant constraint on the feature matrix, the objective function of the non-negative matrix factorization can converge to its minimum smoothly, quickly, and stably. By combining the dimensionality transformation of the STFT, the multi-constraint NMF algorithm, and the constructed parameter WK, the proposed method accomplishes the separation of multi-source signals and the fault diagnosis of bearings, making fault diagnosis easier and more reliable. As rolling bearings are important components of rotating machinery, this paper takes rolling bearings as the research object.

The remaining sections are organized as follows: Section 2 describes the basic principle of the NMF algorithm. The STFT algorithm, the multi-constraint NMF algorithm, and the parameter WK are introduced in Section 3. In Section 4, the specific separation procedure for compound fault signals based on the suggested method is presented. The simulated and experimental results are discussed in Section 5. Finally, the conclusions are summarized in Section 6.

Principle of Non-Negative Matrix Factorization
The basic idea of the non-negative matrix factorization algorithm can be stated as follows: any non-negative matrix $V \in R^{m \times n}_+$ is approximately factorized into two non-negative matrices $W \in R^{m \times r}_+$ and $H \in R^{r \times n}_+$ [35], namely

$$V \approx WH \quad (1)$$

where $V_{m \times n}$ denotes a matrix whose dimension is m and whose number of samples is n. $W_{m \times r}$ denotes a basis matrix that can be regarded as a set of basis vectors. $H_{r \times n}$ denotes a coefficient matrix that can be regarded as the coordinates of each sample with respect to these basis vectors. In order to achieve a good dimensionality reduction, the rank r of the factorization is chosen such that r < mn/(m + n).

The model of the NMF algorithm is shown in Figure 1. In the field of signal processing, it can be interpreted as follows: if each column of the matrix $V_{m \times n}$ is considered an observed signal, each group of observed signals contains different features (mixed features, single features, or redundant information), represented by green squares and red triangles. Each column of the matrix $W_{m \times r}$ contains one feature of the observed signals separated by the NMF algorithm, and the original signal can be reconstructed by multiplying by the coefficient matrix $H_{r \times n}$. This illustrates the idea of representing the whole by its parts.
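As a minimal illustration of the factorization V ≈ WH, the sketch below uses scikit-learn's generic NMF solver rather than the multi-constraint algorithm developed later in this paper; the matrix sizes and rank are arbitrary examples:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(128, 64)))   # non-negative data matrix (m x n), arbitrary example

r = 8                                    # factorization rank; r < mn/(m + n) ~ 42 here
model = NMF(n_components=r, init='nndsvda', max_iter=500)
W = model.fit_transform(V)               # basis matrix (m x r)
H = model.components_                    # coefficient matrix (r x n)

print(W.shape, H.shape)
print("reconstruction error:", np.linalg.norm(V - W @ H, 'fro'))
```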
At present, a variety of optimization algorithms for the cost function are widely used, and the Euclidean distance is one of the most popular; it can be represented as

$$D(V \,\|\, WH) = \|V - WH\|^2 = \sum_{i,j} \left(V_{ij} - (WH)_{ij}\right)^2 \quad (2)$$

The cost function of Equation (2) leads to the following optimization problem:

$$\min_{W \ge 0, \, H \ge 0} \|V - WH\|^2 \quad (3)$$

The above problem can be solved with a gradient descent algorithm iterated until convergence. The multiplicative update rules are

$$H_{aj} \leftarrow H_{aj} \, \frac{(W^T V)_{aj}}{(W^T W H)_{aj}}, \qquad W_{ia} \leftarrow W_{ia} \, \frac{(V H^T)_{ia}}{(W H H^T)_{ia}} \quad (4)$$

Parameter Selection of the Short-Time Fourier Transform
Signals can be transformed into the frequency domain, a sparse domain, or other combined domains for processing and analysis. Features that are indistinct in the time domain can become evident after such transformations. The traditional Fourier transform is a global transformation onto a basis of different frequency components and cannot express time-frequency localization. In order to describe the time-frequency properties of signals, the short-time Fourier transform (STFT) is used.

The STFT is a joint time-frequency analysis method for non-stationary signals. Its basic idea is to truncate the signal with a window function of fixed length and perform the Fourier transform on each segment of the truncated signal to obtain the local spectrum of each segment. Its model can be presented as [36]

$$S(\tau, f) = \int_{-\infty}^{+\infty} x(t) \, w(t - \tau) \, e^{-j 2\pi f t} \, dt \quad (5)$$

where t is the time, f is the frequency, x(t) is the time-domain signal, τ denotes a shift in time, w(t - τ) is the window function, and j is the imaginary unit. By shifting τ continuously, Fourier transforms at different times are obtained; the set of these Fourier transforms is S(τ, f).

As an important tool in time-frequency analysis, the short-time Fourier transform has the advantages of a simple principle and excellent localization. Weak local feature information can be captured in the two-dimensional representation of vibration signals in the time-frequency domain, and the resulting high-dimensional matrix makes it easier to leverage the power of non-negative matrix factorization algorithms, so that compound fault diagnosis becomes easier to implement.

Two main parameters (the type and the length of the window function) affect the effectiveness of the short-time Fourier transform. The window function is the means of truncating the signal and can reduce the effect of spectral leakage. The window length affects the time-frequency resolution: the longer the window, the higher the frequency resolution but the lower the time resolution. Therefore, the type and the length of the window need to be determined based on the specific signal type and processing environment.
In order to reduce the effects of windowing and improve diagnostic accuracy, it is necessary to choose an appropriate window function. As is well known, the wider the main lobe of the window function, the smoother the spectral peaks of the signal and the more strongly the picket-fence effect is suppressed, but at the cost of reduced spectral resolution. From the perspective of spectrum analysis, the main lobe of the window spectrum should be as narrow as possible to improve spectral resolution, while the side lobes should be as small as possible and decay rapidly with frequency to reduce leakage distortion. Therefore, after comparing the performance of several common window functions on the coupling characteristics of compound fault signals in rotating machinery, the Sine-bell window is selected as the processing method in this paper. The Sine-bell window performs well in side-lobe suppression and concentrates the spectral energy in the main lobe. If an overlap length is specified during its sliding process, the overlapping window segments further compensate for the signal attenuation at the window edges. The waveform and frequency response of the Sine-bell window are shown in Figure 2. The window length is 128 samples, and the overlap is half of the window length.
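To illustrate this preprocessing step, here is a minimal Python sketch of the STFT with a 128-sample window and 50% overlap, using SciPy's sine-shaped 'cosine' window as a stand-in for the Sine-bell window (the exact Sine-bell shape is not reproduced here); the test signal is an arbitrary example:

```python
import numpy as np
from scipy.signal import stft

fs = 100_000                                   # sampling frequency [Hz], as in the paper
t = np.arange(0, 0.5, 1 / fs)                  # 0.5 s segment
x = np.sin(2 * np.pi * 2500 * t) * (np.sin(2 * np.pi * 67 * t) > 0.9)  # toy impulsive signal

nperseg = 128                                  # window length: 128 samples
f, tau, S = stft(
    x, fs=fs,
    window='cosine',                           # sine-shaped window (stand-in for Sine-bell)
    nperseg=nperseg,
    noverlap=nperseg // 2,                     # overlap = half the window length
)
M = np.abs(S)                                  # non-negative feature matrix for the NMF
print(M.shape)                                 # (frequency bins, time frames)
```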
Multi-Constraint Non-Negative Matrix Factorization
The selection of the cost function for the non-negative matrix factorization algorithm is determined by the type of data and the application environment. Although NMF has been proven to be a useful tool in source separation, one drawback is that the separation performance tends to be poor in the presence of noise. Moreover, NMF risks degraded separation performance on compound fault signals due to the lack of prior knowledge. Meanwhile, in the process of feature extraction for multi-source fault signals, the weaker the correlation between the source signals, the more clearly locality is displayed and the better the dimensionality reduction works; conversely, redundant components appear during the decomposition, which fail to describe the fault characteristics. Therefore, dual constraints with the β-divergence and the determinant are selected as the cost function for the non-negative matrix factorization algorithm, based on the characteristics of the fault signal. The β-divergence constraint reduces limitations on the data structure, and the determinant constraint ensures the uniqueness of the basis matrix W during the decomposition. The dual constraints enhance local features effectively, which is more conducive to subsequent signal reconstruction. The model of the β-divergence [37] can be presented as

$$d_\beta(x \,|\, y) = \begin{cases} \dfrac{x}{y} - \ln \dfrac{x}{y} - 1, & \beta = 0 \\[4pt] x \ln \dfrac{x}{y} - x + y, & \beta = 1 \\[4pt] \dfrac{x^\beta + (\beta - 1)\, y^\beta - \beta\, x\, y^{\beta - 1}}{\beta(\beta - 1)}, & \text{otherwise} \end{cases} \quad (6)$$

From Equation (6), it is easy to prove the continuity of the β-divergence at β = 0 and β = 1, and for any β and λ > 0 the following holds:

$$d_\beta(\lambda x \,|\, \lambda y) = \lambda^\beta \, d_\beta(x \,|\, y) \quad (7)$$

When β = 0, Equation (7) shows the property of scale invariance, being independent of λ. Scale invariance indicates that the energy components in the amplitude spectrum V carry equal weight during the decomposition. When β = 1, however, the decomposition relies excessively on the higher-energy components of the amplitude spectrum V, which is not conducive to the separation of coupled signals. Therefore, β = 0 is chosen in this paper.

In order to ensure the uniqueness of the basis matrix W and achieve better reconstruction during the decomposition, the determinant constraint is introduced into the objective function of the NMF algorithm. The space spanned by the n m-dimensional column vectors $W_1, W_2, \ldots, W_n$ is defined as P(W), and the volume of P(W) can be represented as

$$\mathrm{vol}(P(W)) = \sqrt{\det\left(W^T W\right)} \quad (8)$$

When vol(P(W)) attains its minimum value, the corresponding vectors $W_1, W_2, \ldots, W_n$ are uniquely determined.

The β-divergence constraint and the determinant constraint together form the new objective function of the non-negative matrix factorization:

$$F(W, H) = \sum_{i,j} d_\beta\left(V_{ij} \,|\, (WH)_{ij}\right) + \alpha \, \mathrm{vol}(P(W)) \quad (9)$$

where α is the equilibrium parameter, generally taken as 1 (α = 1), which balances the contribution of the matrix W term against the reconstruction error.

According to the gradient descent method, the iterative update rules for the objective function are derived (Equation (10)). When the objective function converges, the optimization under the dual constraints is achieved. The specific steps of Algorithm 1 are as follows:
Step 1. Initialize the non-negative matrices W and H randomly.
Step 2. Calculate the initial value of the objective function according to Equation (9).
Step 3. Update the matrices W and H alternately and iteratively based on Equation (10).
Step 4. If the objective function (Equation (9)) converges, stop the iteration and output the matrices W and H; otherwise, repeat Steps 2 and 3.

The advantage of the multi-constraint NMF algorithm is that the β-divergence and determinant constraints introduced into the objective function keep the decomposition close to the source signals and reduce the redundant components produced during the decomposition.

Construction of the Parameter WK
The kurtosis index is a numerical statistic that reflects the distribution characteristics of a random variable. It is the normalized fourth-order central moment, a dimensionless parameter that is particularly sensitive to impact signals. The correlation coefficient characterizes the degree of similarity between two signals. Considering the advantages and disadvantages of the two indicators, a comprehensive parameter called Weighted Kurtosis (WK) is constructed in this paper, defined as

$$WK = |C| \cdot K, \qquad C = \frac{E\left[(x - \mu_x)(y - \mu_y)\right]}{\sigma_x \sigma_y}, \qquad K = \frac{E\left[(x - \mu_x)^4\right]}{\sigma_x^4} \quad (11)$$

where C is the correlation coefficient between the signals x and y, E represents the mathematical expectation, and K is the kurtosis value of the signal. According to the Schwartz inequality, |C| ≤ 1 can be inferred; thus, the correlation coefficient acts as a weight on the kurtosis value, hence the name Weighted Kurtosis. Early failures of rolling bearings are mostly characterized by impacts: the kurtosis detects the impact components in the reconstructed signal, while the correlation coefficient reflects the correlation between the reconstructed signal and the original signal. According to Equation (11), when the signal is processed by the multi-constraint NMF algorithm, the larger the parameter WK of a reconstructed signal, the richer the feature information it contains and the better it represents the fault characteristic signal. Therefore, the parameter WK is constructed as the criterion for selecting the reconstructed signals in this paper.

Signal Separation Method Based on Multi-Constraint NMF
A separation method for multi-source signals with multi-constraint non-negative matrix factorization is proposed for bearings in rotating machinery. The specific diagnosis steps of Algorithm 2 are summarized in the flowchart presented in Figure 3.

Algorithm Simulation and Performance Analysis
In this section, the performance of the proposed multi-constraint algorithm is simulated and analyzed. The model of Equation (12) is applied to simulate compound faults in a rolling bearing, where ζ is the damping coefficient and $s_1(t)$ and $s_2(t)$ are the two feature components given by Equations (13) and (14). The natural frequencies ($f_n$) are 2500 Hz and 4500 Hz, respectively, the characteristic frequencies (1/T) are 67 Hz and 162 Hz, the sampling frequency is 100 kHz, and the data are taken in 0.5 s segments. The mixing matrix A (2 × 1) is generated randomly, and the mixed source signal X(t) is obtained from Equation (15),

$$X(t) = a_1 s_1(t) + a_2 s_2(t) + G(t), \qquad A = [a_1 \; a_2]^T \quad (15)$$

where G(t) is randomly generated Gaussian white noise (SNR = 5 dB). Figure 4 shows the mixed source signal and its normalized envelope spectrum.
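A minimal sketch of such a simulated signal is given below. Since Equations (12)-(14) are not reproduced here, the exponentially decaying impulse-train form is an assumed standard bearing-fault model, using only the stated parameters (f_n = 2500/4500 Hz, 1/T = 67/162 Hz, f_s = 100 kHz, SNR = 5 dB):

```python
import numpy as np

fs = 100_000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(0)

def impulse_train(fn, fr, zeta=0.05):
    """Assumed fault model: impacts at rate fr exciting a decaying resonance at fn."""
    s = np.zeros_like(t)
    for t0 in np.arange(0, t[-1], 1 / fr):          # one impact per fault period
        mask = t >= t0
        tau = t[mask] - t0
        s[mask] += np.exp(-zeta * 2 * np.pi * fn * tau) * np.sin(2 * np.pi * fn * tau)
    return s

s1 = impulse_train(fn=2500, fr=67)     # first fault component
s2 = impulse_train(fn=4500, fr=162)    # second fault component

a = rng.random(2)                      # random 2x1 mixing matrix A
x = a[0] * s1 + a[1] * s2              # single-channel mixture

snr_db = 5.0                           # add white noise at 5 dB SNR
noise = rng.normal(size=t.size)
noise *= np.sqrt(np.mean(x**2) / (10 ** (snr_db / 10))) / np.std(noise)
x_noisy = x + noise
```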
For the mixed source signal, the proposed method is applied as follows. First, the characteristic matrix M is obtained by the short-time Fourier transform, and the time-frequency distribution is shown in Figure 5. Second, the squared magnitude of the matrix M is used as the processing matrix for the multi-constraint NMF algorithm. Third, this matrix is decomposed by the multi-constraint NMF algorithm, and the basis matrix W and the coefficient matrix H are obtained in the reduced-dimensional space. Finally, the obtained matrices are reconstructed by the inverse short-time Fourier transform in the subspace, yielding the separated signals. The WK values of the separated signals are shown in Table 1.

It can be seen from Table 1 that the WK values of Group 6 and Group 8 are relatively high, which indicates that the feature information in these two groups of signals is rich and describes the source signals well. The normalized envelope spectra of the separated signals are shown in Figure 6. The two characteristic components (67 Hz and 162 Hz) are clearly separated by the proposed method, and their respective harmonic components are distinct. Therefore, it can be concluded that the proposed method can effectively separate the source signals from the mixture, and the characteristic frequencies can be extracted from the envelope spectrum, which verifies the effectiveness of the proposed method.
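For reference, a minimal sketch of the WK selection criterion of Equation (11), assuming the correlation-weighted kurtosis form given above:

```python
import numpy as np
from scipy.stats import kurtosis, pearsonr

def weighted_kurtosis(y_rec, x_orig):
    """WK = |C| * K: correlation with the original signal times kurtosis (Eq. (11),
    assuming the correlation-weighted form; K is the plain 4th-moment ratio)."""
    C, _ = pearsonr(y_rec, x_orig)
    K = kurtosis(y_rec, fisher=False)   # normalized 4th-order central moment
    return abs(C) * K

def select_components(components, x_orig, n_keep=2):
    """Rank reconstructed components (from the inverse STFT) by WK, keep the best."""
    wk = [weighted_kurtosis(c, x_orig) for c in components]
    order = np.argsort(wk)[::-1]
    return [components[i] for i in order[:n_keep]], wk

# Toy demo: a noisy copy of a reference signal ranks above pure noise
rng = np.random.default_rng(0)
x = np.sign(np.sin(2 * np.pi * 67 * np.arange(0, 0.5, 1e-4)))
print(weighted_kurtosis(x + 0.1 * rng.normal(size=x.size), x))  # high WK
print(weighted_kurtosis(rng.normal(size=x.size), x))            # near zero
```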
Experimental Verification and Discussion
In order to further validate the effectiveness of the proposed method, measured compound fault signals of a roller bearing (N204) are used as the research object. Defects were machined artificially, using the electrical discharge machining method, on the outer ring and rolling elements of the bearing. The vibration signals in the vertical and horizontal directions were collected by an acceleration sensor (608A11). The platform of the simulation experiment and the faulty bearing are shown in Figure 7. The motor speed was set to 1300 rpm and 900 rpm, respectively, and the sampling frequency was 100 kHz (100k sample points collected per second). The sensor was set to collect data for 10 s. The fault passing frequencies of the rolling bearing can be calculated from its structural parameters (Table 2). The theoretical characteristic frequencies are shown in Table 3.

The signals collected at 1300 rpm are used for analysis, and the data are taken in randomly selected 0.5 s segments. The waveform and the normalized envelope spectrum of the signals are shown in Figure 8. The impulse component can be seen clearly in the time-domain waveform, which indicates that the bearing has malfunctioned. The periodicity, however, is not obvious, and useful state information cannot be obtained. In the envelope spectrum, the defect feature of the outer race can be identified approximately, but the defect of the roller is submerged by the noise component
According to the proposed method, the original signal is subjected to the short-time Fourier transform to obtain a feature matrix M; the resulting time-frequency distribution is shown in Figure 9. The modulation and clustering of the original signals can be seen clearly in the time-frequency distribution. The square value of the matrix M is taken as the processing matrix of the multi-constraint NMF algorithm, which decomposes it into the base matrix W and the coefficient matrix H. Finally, the obtained matrices are recombined in the subspace and reconstructed by the inverse short-time Fourier transform, yielding the separated signals. The WK values of the separated signals are shown in Table 4.
It can be seen from Table 4 that the WK values of Group 2 and Group 7 are relatively high, which indicates that the feature information in these two groups of signals is rich and describes the source signals better. The normalized envelope spectra of the separated signals are shown in Figure 10.
Two leading constituents are clearly obtained by the proposed approach, and they accord with the characteristic frequencies of the outer race and the roller. Meanwhile, their higher harmonic components are presented plainly. Furthermore, the feature frequency of the cage (8 Hz) and its higher-frequency components appear in Figure 10b, and the sideband structure is prominent, which is consistent with the roller failure. Therefore, the results indicate the effectiveness of the proposed approach, which can realize the separation of multi-source signals and the fault diagnosis of bearings.
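The processing chain just described can be prototyped compactly. The sketch below is a minimal stand-in: scikit-learn's stock NMF replaces the paper's multi-constraint variant, and the weighted-kurtosis (WK) form shown, kurtosis weighted by correlation with the mixture, is an assumption, since the exact WK definition is not reproduced in this excerpt.

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.stats import kurtosis, pearsonr
from sklearn.decomposition import NMF

def separate_sources(x, fs, rank=8, nperseg=1024):
    """STFT -> NMF on squared magnitude -> per-component masking -> ISTFT."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    V = np.abs(Z) ** 2                       # non-negative processing matrix
    model = NMF(n_components=rank, init="nndsvda", max_iter=500)
    W = model.fit_transform(V)               # base matrix  (freq x rank)
    H = model.components_                    # coeff matrix (rank x time)
    approx = W @ H + 1e-12
    signals = []
    for k in range(rank):
        mask = np.outer(W[:, k], H[k]) / approx    # Wiener-style soft mask
        _, xk = istft(mask * Z, fs=fs, nperseg=nperseg)
        signals.append(xk[: len(x)])
    return signals

def weighted_kurtosis(xk, x):
    """Assumed WK form: kurtosis weighted by correlation with the mixture."""
    r, _ = pearsonr(xk, x[: len(xk)])
    return abs(r) * kurtosis(xk, fisher=False)

# usage sketch: keep the components with the highest WK for envelope analysis
# comps = separate_sources(x, fs=100_000)
# wk = [weighted_kurtosis(c, x) for c in comps]
```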
Similarly, 0.5 s time segments are taken randomly from the data collected at 900 rpm. The waveform and the normalized envelope spectrum of the signals are shown in Figure 11. According to the proposed method, the time-frequency distribution is shown in Figure 12, and the WK values of the separated signals are shown in Table 5. The separated signals with high WK values are selected for envelope spectrum analysis to extract the fault features of the bearing, and their normalized envelope spectra are shown in Figure 13.
Again, two leading constituents obtained by the proposed approach accord with the characteristic frequencies of the outer race and the roller, and their higher harmonic components are presented plainly. Furthermore, the feature frequency of the cage (6 Hz) and its higher-frequency components appear in Figure 13b, and the sideband structure is prominent, consistent with the roller failure. Therefore, the results support the effectiveness of the proposed approach in the field of compound fault diagnosis of bearings.
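The envelope spectra in Figures 8 through 13 are a standard demodulation step. A minimal sketch, assuming the usual Hilbert-transform amplitude envelope:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Normalized envelope spectrum via the Hilbert transform."""
    env = np.abs(hilbert(x))         # amplitude envelope
    env -= env.mean()                # drop the DC component
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), d=1 / fs)
    return freqs, spec / spec.max()  # normalize to the largest peak

# freqs, spec = envelope_spectrum(segment, fs=100_000)
# fault peaks are then read off near the frequencies listed in Table 3
```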
Comparison with Traditional Method
To demonstrate the advantages of the proposed method for multi-source signal separation, the traditional non-negative matrix factorization algorithms with β-divergence and KL-divergence are compared individually. The data at 1300 rpm are selected for the illustration. The normalized envelope spectra of the separated signals are shown in Figures 14 and 15.
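Both baselines can be reproduced with stock scikit-learn, whose NMF implementation exposes the β-divergence family directly. This is a sketch of the comparison setup only, without the locality and determinant constraints of the proposed method:

```python
from sklearn.decomposition import NMF

def factorize(V, rank, beta_loss):
    # 'mu' (multiplicative updates) is required for non-Frobenius losses
    model = NMF(n_components=rank, solver="mu",
                beta_loss=beta_loss, init="nndsvda", max_iter=500)
    W = model.fit_transform(V)
    return W, model.components_

# baselines from this section (traditional objective functions):
# W_fro, H_fro = factorize(V, 8, "frobenius")          # beta = 2
# W_kl,  H_kl  = factorize(V, 8, "kullback-leibler")   # beta = 1
```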
It can be seen from Figures 14 and 15 that the multi-source signals are not separated effectively by the traditional non-negative matrix factorization algorithm based on β-divergence or KL-divergence. The fault feature of the outer race is only approximately extracted, and the fault feature of the rolling element is submerged in environmental noise, which fails to describe the fault source signals accurately. Comparing the traditional algorithms with the proposed algorithm, it can be seen that, because the multi-constraint NMF algorithm enhances the local features of fault components, the multi-source signals can be separated and the fault features extracted.
Conclusions
In this paper, a novel blind source separation method under a single channel based on the multi-constraint NMF is proposed. The main research content and corresponding conclusions are as follows: (1) The performance of several common window functions is compared for compound fault signals; the Sine-bell window is selected as the processing method, and its window length is selected iteratively. (2) Constraints with β-divergence and a determinant term are introduced into the objective function of the traditional NMF algorithm, which can enhance local feature information and reduce redundant components during the decomposition. The iterative update rules for the multi-constraint NMF algorithm have been derived, and the convergence and practicality of the algorithm have been demonstrated in experiments. (3) The parameter Weighted Kurtosis (WK) is constructed as a criterion for filtering the reconstructed signals, and it has been proven to separate redundant signals effectively. (4) The simulated and experimental results indicate the effectiveness of the proposed approach, which realizes the separation of multi-source signals and extracts fault features. Meanwhile, compared with NMF algorithms using the traditional objective function, the proposed method is more applicable for compound fault diagnosis.
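The multi-constraint objective itself is not reproduced in this excerpt. One plausible shape, assuming a minimum-volume-style determinant penalty on the base matrix (λ and δ are illustrative regularization constants, not values from the paper):

```latex
\min_{W \ge 0,\; H \ge 0}\; D_{\beta}\!\left(V \,\middle\|\, WH\right)
  \;+\; \lambda \,\log\det\!\left(W^{\top}W + \delta I\right),
\qquad
D_{\beta}(V \,\|\, \hat{V}) = \sum_{ij}
  \frac{v_{ij}^{\beta} + (\beta-1)\,\hat{v}_{ij}^{\beta}
        - \beta\, v_{ij}\,\hat{v}_{ij}^{\beta-1}}{\beta(\beta-1)}
```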
Figure 1. The model of the NMF algorithm. Each group of observed signals contains different features (mixed features, single features, or redundant information), represented by green squares and red triangles. Each column of the matrix W (m × r) contains a feature of the observed signal separated by the algorithm, which can be reconstructed to the original signal by multiplying with the matrix H (r × n); this reflects the idea of representing the whole based on parts.
Algorithm 2: Signal Separation Method Based on Multi-constraint NMF. Step 1. The short-time Fourier transform (STFT) is performed to obtain a feature matrix with local information. Step 2. Take the square value of the feature matrix; the multi-constraint NMF algorithm is used to reduce the dimension and obtain the base matrix W and the coefficient matrix H. Step 3. The matrices W and H are recombined in the subspace, and the recombined signals with feature components in the time domain are obtained by the inverse short-time Fourier transform (ISTFT). Step 4. Calculate the WK values of the recombined signals. Step 5. The separated signals with high WK values are selected for envelope spectrum analysis to extract the fault features of bearings.
Figure 3. The flowchart of the proposed method.
Figure 5. Time-frequency distribution of the simulated signal.
Figure 7. The experimental platform and fault bearing of the simulation experiment: (a) experimental platform; (b) fault bearing.
Figure 8. The signal of compound faults at 1300 rpm: (a) time-domain waveform; (b) envelope spectrum.
Figure 9. Time-frequency distribution of the collected signal at 1300 rpm.
Figure 10. Envelope spectra of separated signals with the proposed method at 1300 rpm: (a) envelope spectrum of the outer-race fault; (b) envelope spectrum of the roller fault.
Figure 11. The signal of compound faults at 900 rpm: (a) time-domain waveform; (b) envelope spectrum.
Figure 12. Time-frequency distribution of the collected signal at 900 rpm.
Figure 13. Envelope spectra of separated signals with the proposed method at 900 rpm: (a) envelope spectrum of the outer-race fault; (b) envelope spectrum of the roller fault.
Figure 15. Envelope spectra of separated signals with the KL-divergence method: (a) envelope spectrum of f1; (b) envelope spectrum of f2.
Table 1. WK of the simulated signal (the WK values of Group 6 and Group 8 are relatively high).
Table 4. WK of the reconstructed signal at 1300 rpm.
Table 5. WK of the reconstructed signal at 900 rpm.
Hyperglycaemic Environment: Contribution to the Anaemia Associated with Diabetes Mellitus in Rats Experimentally Induced with Alloxan
Background. Diabetes mellitus, characterized by hyperglycaemia, presents with various complications, amongst which anaemia is common, particularly in those with overt nephropathy or renal impairment. The present study examined the contribution of the hyperglycaemic environment in diabetic rats to the anaemia associated with diabetes mellitus. Method. Sixty male albino rats weighing 175–250 g were selected for this study and divided equally into control and test groups. Hyperglycaemia was induced with 170 mg kgbwt−1 alloxan intraperitoneally in the test group while the control group received sterile normal saline. Blood samples obtained from the control and test rats were assayed for packed cell volume (PCV), haemoglobin (Hb), red blood cell count (RBC), reticulocyte count, glucose, plasma haemoglobin, potassium, and bilirubin. Result. Significant reduction (P < 0.01) in PCV (24.40 ± 3.87 versus 40.45 ± 3.93) and haemoglobin (7.81 ± 1.45 versus 13.39 ± 0.40), with significant increase (P < 0.01) in reticulocyte count (12.4 ± 1.87 versus 3.69 ± 0.47), plasma haemoglobin (67.50 ± 10.85 versus 34.20 ± 3.83), and potassium (7.04 ± 0.75 versus 4.52 ± 0.63), was obtained in the test group, while plasma bilirubin showed a nonsignificant increase (0.41 ± 0.04 versus 0.24 ± 0.06). Conclusion. The increased plasma haemoglobin and potassium levels indicate an intravascular haemolytic event, while the nonsignificant increase in bilirubin points to extravascular haemolysis. Both play contributory roles in the anaemia associated with diabetes mellitus.
Introduction
Diabetes mellitus is a disorder of impaired carbohydrate metabolism resulting from a relative or an absolute deficiency of the hormone insulin. It is documented to have a global prevalence, ranking among the top causes of death in the Western world [1]. Irrespective of classification, diabetes mellitus generally presents with hyperglycaemia. Hyperglycaemia refers to blood sugar greater than the upper reference limit for age, sex, and environmental and physiological condition [2]. In the hyperglycaemic state, glucose supply to metabolizing cells is usually impaired, but not to the red blood cell. The glucose transporter on the red cell membrane, glucose permease, is non-insulin-dependent; hence an excessively high concentration of red cell intracellular glucose in the hyperglycaemic state is inevitable [3]. Studies have shown that accumulation of intracellular glucose may increase peroxidation of the red cell membrane, predisposing to cell membrane defects [4]. This may influence deformability [5], as observed in the red cells of patients with diabetic retinopathy [6,7], and may also contribute to reduced blood flow in the capillaries and microcirculation, as hypothesized by other research workers [8–10]. It has also been reported that effective erythropoietin synthesis may be impaired following pathologic conditions of the kidneys, contributing to the anaemia observed in diabetes mellitus [11]. These factors and several others play a role in the anaemia associated with diabetes; the effect of the hyperglycaemic environment on red cell survival is therefore investigated in this study.
Materials and Methods
The experimental study was conducted at the Mercyland Campus of Ladoke Akintola University of Technology (LAUTECH), Osogbo.
Sixty (60) white male albino rats weighing 175–250 g were acclimated for 14 days in the animal house of the Mercyland Campus of Ladoke Akintola University, Osogbo. The selected animals were housed in wire-mesh, well-aerated cages at normal atmospheric temperature (25 ± 5 °C) and a normal 12-hour light/dark cycle. They had free access to water and were supplied daily with a standard diet of known composition ad libitum. All animal procedures were in accordance with the standard recommendations for the care and use of laboratory animals [12].
2.1. Chemicals, Reagents, and Equipment. Alloxan monohydrate was purchased from Sigma-Aldrich Chemicals Co. (St. Louis, MO, USA), protected from direct light exposure, and stored at 2–4 °C. All other chemicals, including stains (Leishman), were of analytical grade and obtained from licensed laboratory reagent suppliers. Machines and equipment used were properly calibrated and quality-controlled before the respective analyses.
Induction of Diabetes. Rats were weighed and blood samples collected from the tail vein for baseline plasma glucose estimation (glucose oxidase method) using the Randox glucose kit (Randox Laboratories Ltd., BT29 QY, United Kingdom). Subsequently, the animals were divided equally into two (2) groups: group 1 (control) received sterile normal saline, while group 2 (test) received 170 mg kgbwt−1 alloxan intraperitoneally.
Experimental Design. On days 3, 6, and 9 after injection, rats were reweighed and glucose estimation was done in the two (2) groups described above. On day 10, twenty-four (24) rats had very high plasma glucose levels greater than 250 mg/dL and were retained in group 2, while four (4) poorly responsive rats were excluded. Animals were sacrificed by exposure to chloroform within a closed system and blood samples were collected into appropriate specimen bottles for the various investigations. The following investigations were carried out in the course of the study: haematocrit (HCT), haemoglobin (Hb), and red blood cell count (RBC), extracted from a complete blood count analysis using a SYSMEX Automated Hematology Analyzer (KX-21N, Sysmex Corporation, Chuo-ku, Kobe 651-0073, Japan); reticulocyte count, from peripheral blood incubated with new methylene blue at 37 °C, smeared, and estimated manually; and serum total bilirubin, plasma haemoglobin, and plasma potassium.
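As a worked example of the dosing arithmetic, the sketch below converts the 170 mg kgbwt−1 dose into per-animal injection volumes; the stock concentration is an assumption for illustration, as it is not stated in the paper.

```python
# Injection volume for a 170 mg/kg i.p. alloxan dose.
# The stock concentration is assumed for illustration (not stated in the paper).
dose_mg_per_kg = 170
stock_mg_per_ml = 50           # assumed freshly prepared alloxan in saline
for weight_g in (175, 250):    # the study's rat weight range
    dose_mg = dose_mg_per_kg * weight_g / 1000
    print(f"{weight_g} g rat: {dose_mg:.1f} mg -> {dose_mg / stock_mg_per_ml:.2f} mL")
# 175 g rat: 29.8 mg -> 0.60 mL ; 250 g rat: 42.5 mg -> 0.85 mL
```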
Results
A total of fifty-four (54) rats, 30 controls and 24 (80%) alloxan-induced hyperglycaemic test rats, were used for this study. Two (6.6%) of the test rats died on days 2 and 3 after induction. In Table 1 we summarize the effect of alloxan administration on plasma glucose level. On day 9 after induction, significant hyperglycaemia (≥250 mg/dL) was observed in 24 (80%) rats. Four (13.3%) test rats showed no significant increase in plasma glucose after alloxan induction; there was no significant difference between the baseline glucose and the glucose concentration after alloxan induction in these rats. The average plasma glucose level on day 9 after induction was significantly higher (P < 0.01) than in the control (267 mg/dL versus 80 mg/dL). Since significant hyperglycaemia was not established in four (13.3%) of the rats, they were excluded from further studies. Data comparing the average mean values and standard deviations between parameters of the test and control groups are summarized in Tables 2 and 3. In Table 2, we compare plasma haemoglobin, plasma potassium, and total bilirubin concentrations between the two groups. There was a significant increase (P < 0.01) in plasma haemoglobin and potassium concentration in hyperglycaemic rats; total bilirubin, however, was not significantly increased between the two groups, although the average mean was increased in the test group. Table 3 shows the relationship between the haematocrit, haemoglobin, total red blood cell count, and reticulocyte count in the two groups. We observed a statistically significant reduction (P < 0.01) in haematocrit, haemoglobin, and red blood cell count among the hyperglycaemic rats; reticulocyte count was statistically higher in this group also. The red cell parameters (HCT, Hb, and RBC) were higher and stable in the control sets, and the reticulocyte count remained within normal limits.
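The significance claims can be checked directly from the reported summary statistics. A minimal sketch, assuming Welch's two-sample t-test (the authors' exact test is not stated) and the group sizes above (24 test, 30 control), using the PCV values as an example:

```python
from scipy.stats import ttest_ind_from_stats

# PCV: test 24.40 +/- 3.87 (n=24) vs control 40.45 +/- 3.93 (n=30)
res = ttest_ind_from_stats(mean1=24.40, std1=3.87, nobs1=24,
                           mean2=40.45, std2=3.93, nobs2=30,
                           equal_var=False)   # Welch's t-test
print(res.statistic, res.pvalue)              # p << 0.01, consistent with Table 3
```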
Discussion
Induction of diabetes experimentally by alloxan (2,4,5,6-tetraoxypyrimidine; 2,4,5,6-pyrimidinetetrone) remains one of the most effective methods of establishing experimental diabetes. It is a well-known diabetogenic agent and has been widely reported to generate stable hyperglycaemia for prolonged periods [15]. In our study, as summarized in Table 1, there was progressive induction of hyperglycaemia following alloxan administration and a stable hyperglycaemic state in 24 (80%) of the alloxan-induced rats. This is in consonance with several research studies that induced hyperglycaemia using alloxan [13,16]. Misra and Aiman [17] in their study observed alloxan-induced diabetes in 60% of rats using the same dosage as in our study; however, they reported a dose-dependent mortality in 40% of the rats in this group. They hypothesized that susceptibility to the diabetogenic and toxic effects of alloxan differs among animals of the same species. Alloxan has a narrow diabetogenic range of 160–180 mg kgbwt−1 [13]; induction with a lower dose may therefore autorevert the hyperglycaemic state following regeneration of the pancreatic beta cells [18], while a higher dose may be cytotoxic, damaging not only the pancreatic cells but other important organs [19]. Hyperglycaemia was established on the sixth day of our study, showing that the onset of alloxan action may be delayed [18]. Optimization of the diabetogenic agent depends on the dose range, route of administration, rate of injection, and the age and species of the experimental animal used [13,17–20]. A study on the pharmacokinetic and pharmacodynamic profile of alloxan hypothesized an unpredictable diabetes-inducing effect except when alloxan is administered by rapid intravenous injection [18]. Hyperglycaemic response and stability were monitored in the animals throughout the experimental process to rule out autoreversion (Table 1).
The significant increase in plasma potassium and haemoglobin in the test group, as depicted in Table 2, suggests episodes of intravascular red cell destruction (haemolysis within the peripheral circulation). This may be attributable to fragmentation of the red blood cells in the peripheral circulation as a result of the glucose-permease-enabled accumulation of red cell intracellular glucose and the generation of reactive substances, distorting the well-programmed structural and functional character of the cell [3]. In addition, some red cells that withstand breakage in the circulation lose deformability on reaching the spleen and are phagocytosed by the reticuloendothelial macrophages, releasing bilirubin [21]. We posit that this must have informed the nonsignificant increase in total bilirubin seen in this study.
The red blood cells are clinically important haematologic cells, uniquely identified as among the earliest cells affected in diabetes [22], before the development of other diabetic complications. Carroll and colleagues recorded that the red cells play an important role in the onset and development of several diabetic complications [23]. The mechanism underlying red cell destruction in hyperglycaemia is complex. Normal erythrocytes are biconcave cells, measuring about 8 μm in diameter, with an average volume of 90 fL and a surface area of 140 μm². According to Mohandas and Gallagher, the red blood cell has a membrane which is highly elastic, responds rapidly to applied fluid stress, and is stronger than steel in terms of structural resistance [24]. Despite this unique feature, a slight alteration in structural composition, a small increase in surface area, haemoglobin hyperviscosity, and autooxidation, amongst others, pose a challenge to the oxygen-transporting cell and result in cell lysis [25,26]. In the process of performing their oxygen transport function, RBCs are exposed to high levels of endogenous and exogenous oxidative metabolites [27]. These accumulating reactive substances potentiate complex oxidative processes with severely damaging consequences for the cell membrane, structure, and function. However, to optimize their exclusive role as well as to survive the rigors of circulation, these highly specialized blood cells have evolved an extensive array of enzymatic and nonenzymatic antioxidant systems, including membrane oxidoreductases, cellular antioxidants such as catalase and superoxide dismutase (SOD), and enzymes that continuously produce reducing agents through the glutathione (GSH) system [28]. In the hyperglycaemic state, generation of reactive oxidative substances is markedly increased, creating a redox imbalance within the red cell environment and limiting the cell's antioxidative potential [22,29]. Tiwari and Ndisang reported that glucose-mediated increase in reactive oxygen species is one of the biochemical changes associated with type 1 diabetes (enhanced hyperglycaemia-mediated oxidative stress) [30]; other biochemical reactions associated with hyperglycaemia are diacylglycerol production and subsequent activation of the protein kinase C pathway, flux through the polyol metabolic pathway, secretion of cytokines, and modification of proteins and lipids that become nonenzymatically glycated, forming Schiff bases and Amadori products with resultant irreversible generation and accumulation of glycated end products [31–35]. The red cells, demonstrating unregulated glucose uptake, are in this state exposed to high glucose concentrations both intracellularly and within the vascular environment [35]; this increases glucose oxidation and accumulates glucose metabolites, including NADPH, which promotes susceptibility to lipid peroxidation, membrane damage, and intravascular cell death [4]. One of the greatest challenges to the well-equipped red cell antioxidant system, as documented by Mohanty and colleagues, is the increased autoxidation of haemoglobin (Hb) bound to the membrane in the hyperglycaemic state, which is relatively inaccessible to the antioxidant system [36]. Besides haemoglobin, ROS also critically affect other proteins in the red cell, since they are easy targets of ROS, chiefly spectrin, ankyrin, actin, and protein 4.1 [37].
Oxidation of biomolecules at amino acid active sites can also trigger rapid deactivation of enzymes and shut down the antioxidant system [22]. RBCs thus become highly susceptible to oxidative damage from accumulated reactive substance generation. A number of in vitro/in vivo studies have shown that several RBC parameters are negatively affected by increased oxidative stress, as observed in diabetes [38–41]. One of these is an assessment of heme degradation products (HDP) to determine the red cell oxidative status, which was increased in older RBCs as they tend towards senescence [36]; this was also observed in the RBCs of diabetics [42], suggesting reduced membrane deformability [23]. In addition, oxidative stress also inhibits the Ca-ATPase responsible for regulating the intracellular concentration of calcium [43,44]. With increased intracellular calcium, the Gardos channel is activated, causing leakage of intracellular potassium; this alters cation homeostasis, resulting in cell shrinking and lysis [45], as described in our study. Besides the red cells are the vascular endothelial cells, with a high amount of glucose transporter and likewise unrestricted access to glucose inflow [35]. Vascular complications in diabetes are associated with the formation of cross-links between key molecules in the basement membrane of the extracellular matrix (ECM) and eruption of basement membrane lesions; this results in thickening of the blood vessels, subjecting the already weakened red blood cells to fragmentation and contributing to premature destruction of the red cell in circulation [46]. Hence, red cell fragmentation and intravascular haemolysis are common events associated with damaged blood vessels, especially within the microvascular environment (microangiopathy). Despite the undoubted fact that hyperglycaemia assaults all the tissues in the body, it is established that diabetic complications are observed in a subset of cell types: capillary endothelial cells in the retina, mesangial cells in the renal glomerulus, and neurons and Schwann cells in peripheral nerves. Brownlee explained that, in the hyperglycaemic state, most cells reduce transport of glucose into the cell so that their internal glucose concentration stays constant. However, the cells damaged by hyperglycaemia, including the red blood cells, are those that cannot do this efficiently, because their glucose transport rate does not decline rapidly, leading to high glucose inside the cell [35]. In view of this, we propose that oxidative stress, distorted biochemical processes, impaired deformability, cell membrane weakness, fragmentation, and intravascular and extravascular destruction, as observed in our study, are the features characterizing the onset of the anaemia associated with type 1 diabetes. Anaemia-stimulated hypoxia has been shown to increase hypoxia-inducible factor 1, which promotes synthesis of erythropoietin, inducing reticulocytosis [47]. The bone marrow's responsiveness to the haemolysis, through significant reticulocytosis, indicates that alloxan toxicity has no destructive effect on the bone marrow at the dosage used. It is further established that the anaemia observed in early diabetes is contributed to by intravascular and extravascular haemolysis, while the anaemia of chronic, long-standing diabetes is caused by renal pathology [48].
In conclusion, we infer from our study that red cell destruction due to the hyperglycaemic environment is predominantly intravascular, with a minor contribution from the extravascular environment, and that the presenting anaemia is of a responsive type, differing from the nonresponsive chronic anaemia associated with diabetic nephropathy documented by other workers.
Improving Students' Writing Participation and Achievement in an Edpuzzle-Assisted Flipped Classroom
Received Sept 26, 2020; Revised Dec 8, 2020; Accepted Dec 30, 2020
Writing is often considered a dull but challenging skill to learn. Hence, a learning innovation beyond the conventional methods is needed to improve students' participation and achievement. Flipped Classroom, a reverse teaching strategy, was selected to overcome this problem because it involves the use of technology as a learning medium that fits the characteristics of millennial students. This strategy was combined with Edpuzzle, a learning medium which provides content from renowned education channels that can be customized and used freely by teachers. The combination of the Flipped Classroom strategy and Edpuzzle proved successful in improving students' participation in learning activities (by 30.5 percentage points) as well as their achievement in writing (by 17 percentage points). Therefore, teachers are encouraged to implement and adapt this practice in their classes while considering the competence to be mastered, as well as students' needs and characteristics.
Introduction
Students' participation, both physical and psychological, should occur during learning activities because it can aid students in constructing their knowledge. When students are actively learning, they are actively building an understanding of the problems that they face in the learning process (Carr et al., 2015; Handelsman et al., 2007; Sardiman, 2011). Moreover, students who actively participate in the learning process tend to be actively involved in discussions or group work, which can hone their higher-order thinking skills (Freeman et al., 2014). This idea is also regulated by the Standar Proses in Permendikbud Nomor 22 Tahun 2016 (Permendikbud, 2016), which states that "the learning processes in an educational institution is held in an interactive, inspirational, fun, and challenging way which motivates students to actively participate and provides space for them to develop initiative, creativity, and independence tailored to the talents, interests, and physical and psychological development of the students". Thus, students' participation is an essential element that teachers should generate in their students for the success of the learning process.
In the English subject, students' achievement in writing is important because writing is an essential tool in both academic and social life (DeVoss et al., 2010). Writing is a complex skill that requires students to search for detailed information, which means they often encounter difficulties, especially those who study English as a foreign language, as in Indonesia. Moreover, students may find writing activities boring and uninteresting because they are often associated with activities limited to the use of pen and paper only (DeVoss et al., 2010; Zakiya, 2020). These difficulties are also experienced by the students of XI OTR 1 of SMKN 10 Malang, where the average success rate in writing English texts was 62% and learning participation was below 70%. Therefore, learning activities that engage students' participation (student-centered learning) are necessary to enhance their writing outcomes. This can be done by selecting an appropriate teaching strategy, namely the Flipped Classroom strategy. Flipped Classroom is a relatively new strategy among educators in Indonesia, so it can also be viewed as an interesting new variation in learning.
Indahwati (2020) states that teachers are expected to continuously improve their learning models to overcome problems that occur in the classroom. Additionally, the Flipped Classroom strategy requires the use of technology as a learning medium, so it is considered in line with the characteristics of millennial students in the technological revolution era. The use of technology can attract students' attention and curiosity. The benefits of the Flipped Classroom strategy in facilitating learning activities and improving students' achievement have also been proven in studies conducted by Zakiya (2020), Soltanpour & Valizadeh (2018), Afriyalsanti et al. (2016), and Engin (2014). To support the technology-integrated learning process using the Flipped Classroom strategy, this study utilized a web-based interactive learning medium called Edpuzzle. It was chosen after considering the effectiveness of Edpuzzle in improving students' writing ability. Several studies have found that Edpuzzle can activate students' prior knowledge to construct their knowledge and is more effective in improving students' English text writing skills than other conventional media (Julinar & Yusuf, 2019; Yesyika, 2017). Moreover, Rahmiati & Emaliana (2019) agree that online learning is an interesting learning resource because it provides convenience and flexibility to the students.
Based on the explanation above, the problems that become the focus of this study are 1) how are students' participation and achievement in learning writing after the implementation of the Flipped Classroom strategy using Edpuzzle, and 2) how is the Flipped Classroom strategy using Edpuzzle implemented. Accordingly, the objectives of this study are 1) to describe the improvement of students' participation and achievement in learning writing after the implementation of the Flipped Classroom strategy using Edpuzzle, and 2) to describe the implementation of the Flipped Classroom strategy using Edpuzzle. Additionally, this study provides benefits in the form of 1) detailed information about how the implementation of the Flipped Classroom strategy using Edpuzzle can improve students' participation and achievement in learning writing, and 2) detailed information about the steps and techniques for implementing the Flipped Classroom strategy using Edpuzzle.
Flipped Classroom
According to Bergmann & Sams (2012, in Engin, 2014), students who are taught using the Flipped Classroom strategy learn the basics of the subject matter by collecting and searching for information outside the classroom before the actual classroom activities (at school) begin. Wiginton (2013) adds that Flipped Learning involves the use of technology by moving the learning that usually happens in a conventional classroom to outside the classroom, or anywhere else. The Flipped Classroom strategy has attracted the attention of teachers and researchers because of its benefits, such as its ability to provide opportunities to learn independently (autonomous learning) both inside and outside the classroom.
It also focuses on deeper, student-centered activities in the classroom that can sharpen students' higher-order thinking skills through discussions, questions and answers, presentations, and so on, with the teacher acting as a facilitator. In general, the strategy proceeds as follows: 1) the teacher provides students with online learning videos about the material to be learned and assigns students to observe the video and collect information related to its content, 2) the students write questions related to the content of the video or other information they have collected, 3) the students submit and discuss these questions in class, and 4) the teacher clarifies answers and provides feedback for the students.
Edpuzzle
Edpuzzle is an online resource that allows the use of video clips to support the learning process. In Edpuzzle, teachers can search for and use content provided on leading education channels such as YouTube, Khan Academy, TED Talks, National Geographic, and Vimeo. Teachers can also upload videos of their own and tie video content to specific learning objectives that can be integrated into the teacher's Learning Management System (LMS). Each video can be customized by cutting it and embedding sound recordings, audio comments, multiple-choice questions, entries, notes, comments, written messages, and additional references. Besides, teachers can see students' learning activities in watching and working on video assignments, and find out the time spent by the students in completing assignments, the percentage of assignment completion, students' scores, and the results of evaluation analysis, all in real time (Edpuzzle.com).
A study conducted by Julinar & Yusuf (2019) shows that there are several reasons why Edpuzzle is very popular with students. In that study, students taught using Edpuzzle could study anywhere, repeat material at any time, and, most importantly, get initial information about the material to be studied, so that they had more confidence and motivation, which ignited their curiosity and made them actively participate in brainstorming sessions. This shows that Edpuzzle can activate students' prior knowledge, which is necessary for the initial stage of knowledge construction. Also, Yesyika's (2017) study of the effectiveness of Edpuzzle in improving the English writing skills of junior high school students shows that Edpuzzle is more effective than other conventional media.
Methods
This study was conducted using the Classroom Action Research (CAR) design. 34 students from XI OTR 1 of SMKN 10 Malang were selected as the subjects of this study. In this study, the researcher designed the lesson plan, prepared the video in Edpuzzle, set the criteria of success, observed the learning process, and conducted the reflection. Those processes were designed and conducted by referring to the Standar Kompetensi Bahasa Inggris Sekolah Menengah Kejuruan (SMK) (Depdiknas, 2017). Classroom Action Research is done in several cycles, each of which is followed by another cycle if the criteria of success are not achieved (Latief, 2017). Therefore, if the first cycle did not generate a satisfactory result, the researcher would improve the lesson plan and conduct the following cycle until the criteria of success were achieved. Each cycle in a Classroom Action Research consists of planning, implementing, observing, and reflecting.
The criteria of success for the implementation of the Flipped Classroom strategy using Edpuzzle are: there is an increase in the number of students who achieve the passing grade, the average of students' writing scores is no less than 76, the percentage of students' participation in learning writing reaches 70%, and a positive attitude is shown by the students towards the implementation of the Flipped Classroom strategy using Edpuzzle. In this study, the researcher created an assessment rubric to measure students' achievement in learning writing and an observation checklist to record the findings on students' writing participation. The assessment rubric covers grammar, vocabulary, content, and mechanics. The observation checklist, on the other hand, records the students' frequency in performing the following activities: watching the Edpuzzle video at home, asking questions related to the material, having classroom discussions, and finding solutions to the problems given by the teacher. The observation checklist also covers students' attitude in consulting on and presenting their work.
Implementing
At the stimulation stage, the teacher shared the online learning videos with the students on Edpuzzle about the material to be learned from home. In this stage, the teacher assigned students to observe the video and collect information related to its content at home. In this study, the video presented to the students was an initial introduction to Procedure Text together with several questions to stimulate students' prior knowledge, as shown in the screenshot below:
Figure 1. A screenshot of an online learning video in Edpuzzle
Next, the question/problem identification stage was conducted in class. The teacher engaged students to ask about the parts of the video that they did not understand and to write the questions on the whiteboard. Then, the teacher invited students to discuss what functions and information could be obtained from the video that they had watched in Edpuzzle to answer those questions. In the data collection stage, the teacher guided students to use Edpuzzle in discussing the answers to the questions and then provided clarification and feedback on the answers given by the students. Furthermore, at the data processing stage, the teacher assigned students to draft a procedure text according to the material that they had got from the video in Edpuzzle and the teacher's explanation. Students could play back the video if they wanted to review information about the material. The next stage was the verification stage, in which the students presented the assignments that they had prepared in front of the class and the teacher gave feedback to the students who were presenting. Finally, at the generalization stage, students finalized their work based on the feedback from the teacher and the other students.
Observing
The observation of the learning process in the classroom was done by the researcher using the observation checklist. The researcher directly observed the learning process in the conventional classroom. Meanwhile, the observation of the learning process using Edpuzzle was done by examining the time spent by the students in completing assignments, the percentage of assignment completion, students' scores, and the results of evaluation analysis in Edpuzzle.
Reflecting
The reflection refers to the activity of analyzing the implementation of the Flipped Classroom strategy using Edpuzzle.
The data were obtained from observing students' writing process throughout the cycle. The reflection includes the discussion of how the strategy was able to solve the problem and the factors that might have caused the strategy to succeed. Finally, the result of the data analysis was compared with the criteria of success.
Findings and Discussion
The results of the data analysis showed that students achieved better in their writing. This is reflected in the students' scores, which showed an increase after the application of the Edpuzzle-assisted Flipped Classroom strategy compared to the scores they obtained through the conventional learning process. The average of students' writing scores after the application of the Edpuzzle-assisted Flipped Classroom strategy was 78. This indicates that, on average, students passed the passing grade (Kriteria Ketuntasan Minimal/KKM), which is 76. Moreover, the number of students who achieved the passing grade also increased, from 62% when taught using a conventional strategy to 79% when taught using the Edpuzzle-assisted Flipped Classroom strategy.
The observation of students' participation showed that the students actively participated during the learning process. This can be seen from the learning activities that they did at home, such as watching the Edpuzzle video, asking questions related to the material, having classroom discussions, and finding solutions to the problems given by the teacher. The students also showed enthusiasm in doing and presenting the final assignment from the teacher. Besides, the results of the students' participation observation checklist showed an increase, from 42% of students who were active during the conventional learning process to 72.5% of students who actively participated during the entire Flipped Classroom learning process, as illustrated in Figure 2.
Meanwhile, students showed a positive attitude towards the Flipped Classroom teaching strategy and the use of the Edpuzzle learning medium. The questionnaire on students' attitude showed that 70% of students enjoyed and liked the Flipped Classroom teaching strategy and the use of Edpuzzle, 71% of them admitted that they found it easier to learn the material through this teaching strategy and medium, and 67% of them revealed that they had more fun while learning to write procedure text through this strategy and learning medium. Furthermore, 76% of students felt they mastered the ability to write procedure text better through this strategy and learning medium, while 64% of them thought that the Flipped Classroom strategy and Edpuzzle should continue to be used in learning other types of text. Students' attitude towards the Flipped Classroom strategy and the use of Edpuzzle can be seen in Figure 3.
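As a quick check that the criteria of success were met, and that the gains quoted in the abstract are percentage-point differences, the figures above can be verified directly:

```python
# Criteria of success vs. observed outcomes (figures from this study).
avg_score, passing_grade = 78, 76
pass_rate_before, pass_rate_after = 62, 79            # % of students
participation_before, participation_after = 42, 72.5  # % of students

print("average >= KKM:", avg_score >= passing_grade)                       # True
print("participation >= 70%:", participation_after >= 70)                  # True
print("achievement gain:", pass_rate_after - pass_rate_before)             # 17 points
print("participation gain:", participation_after - participation_before)   # 30.5 points
```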
Several factors enabled the implementation of the Flipped Classroom teaching strategy using Edpuzzle to improve students' achievement and participation in learning writing. First, technology has an essential role in the learning process: it not only provides students with new opportunities to learn in a new environment, but also engages them to be more active in class. Brown et al. (2011) concluded that technology provides vital opportunities to learn real-life activities and problem-solving skills. Second, the use of short videos in Edpuzzle helped students to sustain their attention and comprehend the material better. Studies have shown that videos lasting 7 minutes or shorter retain students' attention better (Guo et al., 2014; Kim et al., 2014, in Mischel, 2018). Third, students could learn writing at their own pace and according to their own learning preferences. Indahwati's (2020) study, which also focused on autonomous learning, yielded a similar result: she found a significant increase in students' scores and participation after the implementation of an autonomous learning strategy.
Conclusion and Suggestion
Based on the results shown in Figure 2 and Figure 3, it can be concluded that the criteria of success have been achieved. The number of students who achieved the passing grade was 79%, the average of students' writing scores was 78, and students' participation in learning writing reached 72.5%. Moreover, students also showed a positive attitude towards the use of the Flipped Classroom strategy and Edpuzzle in teaching writing. Thus, it can be concluded that the implementation of the Flipped Classroom strategy using Edpuzzle can improve students' participation and achievement in learning writing. Considering the learning outcomes and the participation and enthusiasm of students in learning writing through the Flipped Classroom strategy using Edpuzzle, it is highly recommended that teachers try and develop it according to their field of study. Teachers can also use different materials and match them to students' learning interests and preferences so that better learning outcomes can be achieved and the learning activities can also be improved. Edpuzzle can be used in teaching other English skills, and there are endless possibilities in creating the videos, since the teacher is free to include any video files as the teaching material.
Figure 3. Students' attitude towards Flipped Classroom and Edpuzzle (surveyed aspects: preferable; easy; allows fun learning; helps material comprehension; desire for continued learning).
Using Wolbachia to Eliminate Dengue: Will the Virus Fight Back?
ABSTRACT Recent field trials have demonstrated that dengue incidence can be substantially reduced by introgressing strains of the endosymbiotic bacterium Wolbachia into Aedes aegypti mosquito populations. This strategy relies on Wolbachia reducing the susceptibility of Ae. aegypti to disseminated infection by positive-sense RNA viruses like dengue. However, RNA viruses are well known to adapt to antiviral pressures. Here, we review the viral infection stages where selection for Wolbachia-resistant virus variants could occur. We also consider the genetic constraints imposed on viruses that alternate between vertebrate and invertebrate hosts, and the likely selection pressures to which dengue virus might adapt in order to be effectively transmitted by Ae. aegypti that carry Wolbachia. While there are hurdles to dengue viruses developing resistance to Wolbachia, we suggest that long-term surveillance for resistant viruses should be an integral component of Wolbachia-introgression biocontrol programs.
IMPACTS OF EVOLUTION ON WOLBACHIA AS A BIOCONTROL TOOL
The ability of Wolbachia to provide long-term protection against DENV could be undermined by genome evolution of wMel, Ae. aegypti, and/or DENV. Evolution of wMel tracks slower than the mitochondrial genome of its natural host, D. melanogaster (27,28), and sequencing of wMel from Ae. aegypti collected 2 to 8 years postrelease in Queensland, Australia, only rarely detected genetic polymorphisms (29). These studies suggest that the wMel genome is quite stable in Ae. aegypti, which will presumably aid the continuation of its antiviral properties in this host. Plausibly, evolution of the Ae. aegypti genome could attenuate wMel-mediated viral inhibition by adapting to the endosymbiont over time. Ford et al. selectively bred wMel-infected mosquitoes that established either high or low levels of viral RNA after DENV infection. They found that the low and high DENV levels were linked to genomic variation in Ae. aegypti (30). However, the mosquito phenotypes that were less resistant to viral infection were also less fit, suggesting they would be unlikely to be selected in the field. The stability of the Wolbachia-Ae. aegypti association has been demonstrated in Queensland (19,24) and Malaysia (31), where wMel and wAlbB, respectively, were introgressed into the Ae. aegypti population. Wolbachia has remained at a high frequency in these mosquito populations for up to a decade and has retained its antiviral properties (31,32). Together, these studies indicate that the Wolbachia-Ae. aegypti relationship is unlikely to evolve rapidly in the field in a manner that quickly undermines the public health benefits of the Wolbachia introgression method.
In contrast to Wolbachia and Ae. aegypti, RNA viruses like DENV have much faster mutation rates. Viruses that accumulate mutations in the genome (variants) that allow replication in Wolbachia-carrying mosquitoes may be rapidly selected. These variants could be maintained in a Wolbachia-Ae. aegypti population provided they can replicate well within the human host. Thus, whether DENV will remain susceptible to the antiviral state created by wMel and wAlbB infection in Ae. aegypti remains a key question to be addressed (33,34). In this review, we examine the risk and potential mechanisms by which DENV resistance against Wolbachia might evolve and discuss how viral resistance to Wolbachia could be identified and managed operationally.
SELECTION AND EMERGENCE OF WOLBACHIA-RESISTANT VIRUS IN MOSQUITOES
The urban transmission cycle sees DENV circulate between human and mosquito hosts. Mosquitoes become infected with DENV when the insect takes an infectious blood meal from a viremic person. Since Wolbachia resides within mosquitoes, selective pressure for the virus population to overcome Wolbachia's antiviral properties will only be present in this part of the transmission cycle. While the emergence of viral resistance to antiviral therapeutics in humans is a relatively common phenomenon (35)(36)(37), selection pressures applied to DENV by Wolbachia are likely to differ in many ways. For instance, while antiviral drugs have a defined mode of action, the mode of action of Wolbachia appears to be broad and may be indirect (38). In addition, while therapeutics are administered at optimized concentrations and have well-defined pharmacological properties (39), Wolbachia abundance (density) cannot be easily controlled and varies between Wolbachia strains, individual mosquitoes, and host tissues (40)(41)(42)(43)(44)(45). Control of the levels of DENV inhibition within specific Ae. aegypti tissues appears to be complex and is not associated with Wolbachia density alone (40,42,46). In this section we postulate how Wolbachia-resistant DENV variants may emerge, based on our current understanding of DENV infection, dissemination, and transmission in mosquitoes.

wMel and wAlbB Wolbachia strains provide incomplete protection against DENV. The wMel and wAlbB Wolbachia strains used in field releases have been rigorously tested in laboratory studies to determine their impacts on DENV infection dynamics in Ae. aegypti. Broadly speaking, these strains provide partial protection against fulminant DENV infection compared to mosquitoes without Wolbachia (46). Most important to the effectiveness of these strains in the field is their ability both to reduce the proportion of Ae. aegypti with infectious DENV in their saliva (22,23) and to lengthen the extrinsic incubation period (the time taken for mosquito saliva to become infectious following virus uptake in a blood meal), thus reducing the number of days in a mosquito's life span in which it can infect people (22,47). Nevertheless, Wolbachia-mediated viral inhibition is incomplete, such that a proportion of mosquitoes become infectious with DENV. For example, after feeding on blood from viremic dengue patients, infection was detected in the abdomen (53 to 61%) and saliva (6 to 12%) of wMel-carrying mosquitoes (22,23). Even at a population level, it has been estimated that introgression of wMel would not indefinitely eliminate DENV in high-transmission settings (6). Also of note, DENV-1 is marginally less inhibited by wMel than serotypes 2, 3, and 4 (22,23). Plausibly, a smoldering pattern of DENV replication and transmission could provide the opportunity for Wolbachia-resistant viruses to emerge and be selected (48).

PROCESS OF WOLBACHIA-RESISTANT VIRUS SELECTION
Within mosquito tissues, both wMel-carrying and wMel-free cells can be observed (42), and these cells are likely to possess different antiviral states. At the cellular level, Nainu et al. determined the antiviral effects of wMel to be cell-autonomous (i.e., viral protection is limited to Wolbachia-infected cells) (49).
JW18 Drosophila cells with wMel were unable to protect Wolbachia-free JW18 cells from infection by Drosophila C virus (DCV; Dicistroviridae, cripavirus) or Sindbis virus (SINV; Togaviridae, alphavirus) when cocultured in trans-wells separated by a porous membrane (49). Similarly, it seems that antiviral Wolbachia strains show a "superinfection exclusion-like phenotype," whereby cells that harbor Wolbachia prevent productive viral infection (50,51), and DENV-Wolbachia coinfected cells are rarely visualized in mosquito tissues and cell culture (50,52). These studies suggest that Wolbachia-free cells within mosquito tissues that can support productive virus infection may be the site where Wolbachia-resistant virus types emerge, followed by their selection in Wolbachia-carrying cells. After ingesting a blood meal from a viremic person, DENV replicates in the Ae. aegypti midgut. The virus must then traverse the midgut barrier, enter the hemolymph, and infect other tissues, reaching the salivary glands after ~7 days. Once the virus enters the mosquito's saliva it can be passed to a new host when the next blood meal is taken. In blood-fed mosquitoes, only a small number of infectious units are thought to seed infection in the mosquito midgut (53,54). This reduction in virus population size, known as a population bottleneck, decreases the genetic diversity of the infective virus population (55,56). This event may cause low-frequency Wolbachia-resistant DENV variants already present in the incoming blood meal to be filtered out (Fig. 1, step 1). Replication of DENV in the midgut generates viral variants because the virus lacks proofreading capacity. These variants may be unable to disseminate beyond the midgut if they have reduced competitive fitness (57) or are susceptible to immune mediators within the midgut and hemolymph (58) (Fig. 1, step 2). If a fit Wolbachia-resistant DENV variant is generated in the midgut, this virus would possess a selective advantage over wild-type viruses in mosquitoes that carry Wolbachia. Selection may occur if the variant could similarly infect both Wolbachia-carrying and Wolbachia-free cells, or if a variant evolves to specifically target Wolbachia-free cells. Mechanistically, DENV could specifically target Wolbachia-free cells by adapting its affinity for viral entry receptors (59) to those that are differentially expressed between Wolbachia-free and -carrying cells. Lu et al. showed that wAlbB infection modulates the expression of the DENV attachment factors dystroglycan and tubulin in Aag2 cells (60). Another study showed that expression of the cell surface insulin receptor is modulated by wMel infection, reducing the susceptibility of mosquito cells to DENV and ZIKV (Zika virus) infection (61). While the insulin receptor is not a known entry receptor for DENV, this study illustrates that Wolbachia has the potential to modulate expression of cell membrane proteins and thereby alter the permissiveness of these cells to viral infection. Preferential replication of Wolbachia-resistant DENV compared to wild-type virus would ultimately establish these fit viral variants in the salivary glands (Fig. 1, step 3). Onward transmission of Wolbachia-resistant DENV variants would be limited if they are unfit in the human host (Fig. 1, step 4). This scenario would be considered a fitness trade-off, where fitness increases in one host (i.e., the mosquito) are counterbalanced by fitness losses in the second host (i.e., humans).
Alternatively, if the variant can establish infection in the human host, onward transmission may occur. Evolutionary processes that impact maintenance of DENV variants. Fitness trade-offs and population bottlenecks result in purifying selection, an evolutionary feature of DENV (62,63). In purifying selection, synonymous mutations, which do not cause amino acid changes, are more likely to be maintained than nonsynonymous mutations. Purifying selection purges deleterious variants from the transmission cycle, many of which are caused by nonsynonymous mutations since these mutations can impact protein stability, function, and viral replication (64). Arguably, nonsynonymous mutations in viral proteins might be more successful than synonymous changes in escaping the selective pressures imposed by Wolbachia. But these variants must still support efficient viral replication. While purifying selection may slow the emergence of Wolbachia-resistant variants, it may not eliminate them. Wolbachia-resistant variants could accumulate over time, eventually becoming dominant in transmission cycles. Considering mosquito populations are large and their susceptibility to DENV infection can fluctuate, continued monitoring for virus evolution in Wolbachia-carrying mosquitoes will be important in regions where Wolbachia-carrying Ae. aegypti have been established. Certainly, compared to antiviral resistance events described for viruses that circulate in a single host, the sequential evolutionary speed bumps that DENV populations encounter are likely to delay Wolbachia-resistant viruses from emerging in transmission cycles.

Fig. 1. DENV variants escape the midgut population bottleneck (1) and are able to disseminate to distal tissues (2). Variants that are more resistant to the antiviral properties of Wolbachia may be selected, allowing the virus to replicate in Wolbachia-infected and -uninfected cells. These variants may possess a replicative advantage in disseminated sites of the mosquito with high Wolbachia density, such as the salivary gland (3). DENV variants that replicate efficiently in the mosquito might not always be infectious for humans (4), such that if a Wolbachia-resistant variant did infect a human, it may replicate poorly or be outcompeted by other variants that are better adapted for replication in humans.

WOLBACHIA IMPACTS ON THE SUBCELLULAR DENV INFECTIOUS CYCLE
Wolbachia is a complex organism thought to inhibit the infectious cycle of DENV within mosquito cells that carry the bacterium by numerous mechanisms (38). In the following section, we explore some of the proposed inhibitory mechanisms with the aim of speculating how DENV could evolve to bypass these on an intracellular level. To determine the stage(s) of the DENV life cycle that are impacted by Wolbachia, the progression of viral infection has been tracked in insect cell lines artificially infected with Wolbachia. Consistently, it has been shown that viral replication is significantly reduced in mosquito or Drosophila cells when antiviral Wolbachia strains are present, and it is widely agreed that DENV (as well as other related flaviviruses and unrelated alphaviruses) is likely to be inhibited after virus entry, at an early stage in RNA replication, or perhaps at translation of the incoming viral RNA template (51,(65)(66)(67). It should be noted that, for practical reasons, many of the studies characterizing the impacts of Wolbachia at the cellular level have been performed using cell culture models.
In whole mosquitoes, these effects may vary between tissues, as the virus encounters different Wolbachia densities, as well as cell type-specific effects during infection and dissemination. Overcoming Wolbachia-induced host effects that contribute to viral inhibition. Both DENV and Wolbachia are known to alter their host environment. Wolbachia is present in the mosquito throughout its life cycle, and it is therefore likely that some of the Wolbachia-induced host changes interfere with essential stages of virus infection. Identifying how DENV is restricted will help us to determine how viral resistance may emerge against Wolbachia. Relevant host cell modifications induced by Wolbachia can be grouped into three main categories: host cell structural modifications, altered nutrient homeostasis, and induction of host immune/stress responses. Lindsey et al. provide a comprehensive review discussing the various ways Wolbachia may induce these changes and how they may impact viral pathogens (38). While it is possible that DENV may adapt to overcome a specific antiviral factor that drives these modifications (either Wolbachia- or host-cell derived), we have kept our discussion broad, since it is not known which viral/antiviral factor interaction(s) is responsible for inhibiting DENV. Additionally, viral inhibition is probably induced by the collective contribution of several Wolbachia-induced host modifications (38). As such, several points in the DENV life cycle may be simultaneously under selective pressure when Wolbachia is present. While it is unlikely that a single mutation in the viral genome would allow complete viral resistance to emerge, it is possible that particular mutations may allow the virus to overcome one or some of these effects, reducing the overall impact of Wolbachia in inhibiting viral transmission. Here, we will focus on three subcellular modifications that are likely to be critical for Wolbachia to induce its antiviral effect, and consider whether viruses could adapt to overcome these pressures. (i) Altered lipid homeostasis. wMel and wAlbB infection of Ae. aegypti imparts minor costs on host fitness (41,68,69). Genomic studies of multiple Wolbachia strains show it must source a variety of amino acids and lipid complexes from its host to complement its own limited metabolic pathways (70). Several groups have examined the hypothesis that Wolbachia may alter the lipid profile of host cells, disrupting the requirements for productive DENV infection. Koh et al. (71) examined the lipid profile in whole wMel-Ae. aegypti and DENV-infected Wolbachia-free Ae. aegypti (intrathoracically injected with DENV-3). They reported that DENV infection of mosquitoes induced a lipid profile distinct from mosquitoes carrying wMel, suggesting that DENV-3 and Wolbachia are not in direct competition for lipids. In mosquitoes coinfected with DENV-3 and wMel, they found that DENV modulation of host lipids dominates the changes normally induced by Wolbachia. However, it is important to note that intrathoracic DENV infections are known to overwhelm the effects of Wolbachia and may not represent the virus-Wolbachia relationship in a natural infection (68). Furthermore, analysis of the lipidome in whole mosquitoes may mask smaller, tissue-specific lipid changes induced by Wolbachia. Manokaran et al. (72) also attempted to define the lipid changes that occur when wMel and/or DENV is present in Ae. aegypti.
Using the Aag2 Ae. aegypti-derived cell line, they identified acyl-carnitines (a class of lipids involved in energy production) as specifically upregulated by DENV and ZIKV, but downregulated in the presence of wMel, including in wMel-Aag2 cells following viral infection. This may suggest that wMel and DENV are in fact competing for some lipids. The acyl-carnitine inhibitor etomoxir reduced DENV levels in Ae. aegypti without wMel, supporting an in vivo role for this lipid. It is possible that this shift in acyl-carnitines occurs in only a subset of mosquito cell types, which could explain why it was not observed by Koh et al. Other studies have also shown that supplementing or chemically modulating host lipid profiles in mosquito cell lines or Drosophila that carry various antiviral Wolbachia strains reduces the antiviral effectiveness of Wolbachia (73)(74)(75). This suggests that, regardless of whether viruses are competing with Wolbachia for the same lipids, lipids are likely to contribute in some way to the antiviral state imposed by Wolbachia. Flaviviruses are highly dependent on cholesterol and other lipids for virion entry and exit, and for formation of modified endoplasmic reticulum (ER) membranes for viral RNA replication (vesicle packets) (76)(77)(78). DENV infection also causes an accumulation of acyl-carnitines in the midgut of Ae. aegypti, suggesting the virus may divert energy to better support its own replication (79). Perhaps Wolbachia-modulated lipid levels change the cholesterol content of cellular membranes to impair intracellular trafficking or formation of membrane-associated replication complexes, or reduce energy availability for DENV replication (Fig. 2). Further work is needed to determine if these hypotheses hold true and whether DENV can adapt to overcome these cellular changes. (ii) Disruption of intracellular membranes. Studies examining wMel in a Drosophila-derived cell line have shown that Wolbachia is intimately associated with host cell membranes. wMel is contained within and around ER and Golgi-derived vesicles, causing regions of these organelles to swell (80)(81)(82). Given that specific remodeling of these organelles is required by DENV for replication and maturation, it seems likely that their disruption by Wolbachia could impair the establishment of viral infection. Work from Bhattacharya et al. has shown that the small amount of virus produced from insect cells carrying the wMel strain of Wolbachia has reduced infectivity and/or replication capacity in mammalian cells (51,67). This would be consistent with disrupted ER/Golgi structures, which are strictly required for viral RNA replication and the production and maturation of envelope proteins for flaviviruses (Fig. 2). In this scenario, perhaps disruption of viral RNA replication events could lead to the production of defective interfering viral particles (viruses that contain substantial deletions in their genomes), and/or perturbed ER/Golgi organelles may not allow the correct maturation and processing of viral envelope proteins, i.e., events which could reduce the infectivity of any viral particles produced. Notably, while alphaviruses replicate and form virions in quite distinct regions of the cell compared to flaviviruses, alphaviruses are still dependent on their replication complexes forming in association with ER membranes, and on trafficking and maturation of their envelope proteins through the ER and Golgi secretory pathway (83). Thus, disruption of these organelles could potentially impact the two virus families similarly.
It is yet to be determined whether wMel or other antiviral Wolbachia strains similarly occupy these organelles in vivo in Ae. aegypti, but it is certainly a compelling hypothesis for a mechanism that may contribute to Wolbachia's antiviral activity. If Wolbachia is colonizing regions of the ER and Golgi, preventing typical establishment of DENV replication complexes and virus-specific remodeling events at these organelles, then perhaps the virus could adapt to replicate in regions unaffected by Wolbachia, or else could adapt to bud from the plasma membrane like alphaviruses. Given the intricate association of DENV with these organelles, from viral replication to virion formation and budding, it seems that these adjustments would require an enormous number of compensatory mutations arising across interacting viral proteins before functional virus would emerge. (iii) Changes to the host cell cytoskeleton. Other studies in Drosophila have revealed that Wolbachia utilizes microtubules and actin to support its localization, particularly in the Drosophila oocyte. This may allow the endosymbiont to persist throughout Drosophila development and to pass from generation to generation (84)(85)(86). Furthermore, Wolbachia has been shown to secrete the actin bundler protein WalE1. Overexpression of WalE1 in transgenic flies leads to an increase in Wolbachia titer, suggesting Wolbachia may manipulate actin to modulate its own replication (87). For DENV, each aspect of the virus life cycle, including entry, intracellular transport, replication, and egress, is intimately tied to the host cell cytoskeleton. For example, DENV entry is dependent on actin filament integrity (88,89), while organelle remodeling and formation of vesicle packets are associated with changes in the cytoskeleton structure, including reorganization of the intermediate filament vimentin, which is critical for DENV replication (90,91). A situation where Wolbachia modulates the cytoskeleton to disrupt DENV trafficking into cells and/or formation of vesicle packets would be consistent with Wolbachia restricting DENV at a stage prior to RNA replication (Fig. 2) (51,65,66).

Fig. 2. (1) Virus uptake occurs through clathrin-mediated endocytosis, and the viral genome is delivered following fusion of the viral and mature-endosomal membranes. (2) Replication of viral RNA (red) is restricted in Wolbachia-carrying cells, and so is vesicle packet formation on ER membranes. This could be due to disturbance of ER and Golgi apparatus membranes through (3) occupation/disruption by Wolbachia (green). (4) Altered cellular lipid content, e.g., increased cholesterol storage (yellow) or reduced acyl-carnitines, may restrict trafficking of membrane-bound vesicles and/or lower energy resources for virus production. Similarly, Wolbachia-induced alterations of the host cell cytoskeleton (5) may interfere with trafficking of endosomes and/or ER and Golgi vesicles required for movement of incoming virions and the maturation of daughter virions.

If Wolbachia disrupts DENV entry via endocytosis, could DENV entry adapt to occur in a pH-independent manner, at the cell surface? There have been reports that alphaviruses, including SINV, may be able to enter both by receptor-mediated endocytosis and at the plasma membrane (92,93). This would require mutations to accumulate in the viral envelope protein that allow fusion activation (conformational changes in the envelope proteins that drive the merging of viral and host cell membranes) to occur at both neutral and acidic pH.
In fact, pH-independent entry has been described in laboratory experiments for flaviviruses and related hepaciviruses. Endosomal fusion activation events for these viruses are crucially controlled by specific histidine residues within the viral fusion protein (94)(95)(96). Boo et al. demonstrated that substitution of histidine with positively charged arginine enhanced entry of hepatitis C virus (hepacivirus) at neutral pH (94).

VIRUS FAMILIES THAT ARE RESISTANT TO WOLBACHIA
Perhaps another way we can consider how viral resistance may arise against Wolbachia is to examine the life cycle of viruses that are not inhibited by this endosymbiont. There are several reports that negative-sense RNA viruses, including bunyaviruses, are not inhibited by Wolbachia. The insect-specific virus Phasi Charoen-like virus (Bunyaviridae) can replicate effectively, both as a persistent infection and following an acute challenge, in the Ae. aegypti-derived cell line Aag2 coinfected with the wMel or wMelPop Wolbachia strains (97,98). Bunyaviruses typically have three negative-sense RNA segments that are bound to multiple copies of the viral polymerase (L) and nucleoprotein (N), encased in a lipid bilayer. Similar to flavi- and alphaviruses, bunyaviruses are internalized via clathrin-mediated endocytosis, and transcription and translation are closely coupled, occurring in association with the rough ER (see reference 99 for a review of bunyavirus replication). However, the replication strategy for these viruses differs substantially from flavi- and alphaviruses, since the incoming viral RNA must be transcribed to a positive-sense RNA (generating either an mRNA for translation or a positive-sense replicative intermediate), with the replicative intermediate copied again to generate the negative-sense progeny viral RNAs. Interestingly, these progeny viral RNAs may associate with newly formed L and N proteins in a structure called the viral tube before budding through the Golgi, where the virus collects its membrane and envelope proteins (100). Perhaps this distinct RNA replication and assembly strategy, whereby shorter viral RNAs are protected by L and N proteins at each stage, enables bunyaviruses to persist in the presence of Wolbachia.

INVESTIGATING EVOLUTION OF WOLBACHIA-RESISTANT VIRUSES
Further studies into the evolution of DENV in the presence of Wolbachia may direct us toward the mechanisms that underlie viral inhibition by indicating the regions of the genome that are under selective pressure. This in turn may allow us to predict the likelihood of these mutations arising in the field. To do this, we can use a laboratory setting to push conditions to favor viral sequence diversity. By continually passaging DENV in an invertebrate host with Wolbachia (whole insects or cell culture), we can remove the purifying selection usually associated with host alternation in order to broaden the repertoire of viral sequences being maintained over time. Such studies have been reported by two groups. One study passaged the RNA virus DCV through whole D. melanogaster with a native wMelCS infection (a strain closely related to wMel) over 10 passages (101). The other study passaged DENV-3 ten times in Ae. aegypti-derived Aag2 cells artificially infected with wMel (102). In both studies, the viruses replicated over time when consistently challenged by Wolbachia. However, these viruses grew to substantially lower titers and had no replicative advantage over those passaged in Wolbachia-free cells.
Notably, no studies have yet examined viral passaging in the presence of wAlbB. While these are very artificial evolutionary experiments, they show that, in the laboratory, RNA viruses do not develop fit viral variants with resistance to wMel within a short time frame.

DETECTION AND MANAGEMENT OF WOLBACHIA-RESISTANT DENV
If a fit DENV variant that is able to replicate in Wolbachia-carrying mosquitoes were to establish itself in a transmission cycle, how would it be identified and how would we mitigate the impact? In regions such as Yogyakarta, Indonesia, where local transmission of DENV has ceased in areas where Wolbachia has been introgressed into Ae. aegypti populations (26), viral resistance could be suspected if persistent local DENV transmission chains were reported in areas of Wolbachia establishment. Since loss of Wolbachia-mediated virus inhibition could occur due to changes in the virus, Wolbachia, or the Ae. aegypti host, it would be essential to first determine the underlying cause(s) of the transmission events. Before assuming that a virus has evolved resistance to Wolbachia, it would be prudent to ensure Wolbachia has not been substantially reduced in density or frequency within the mosquito population, e.g., due to exposure to very high temperatures (103). It would also be important to rule out adaptation of the mosquito host or Wolbachia, which may allow the mosquito population to become permissive to DENV infection (30). This could be done by challenging wild-caught Wolbachia-carrying Ae. aegypti with a blood meal spiked with laboratory viruses previously shown to be inhibited by that Wolbachia strain. To determine if viral resistance is the underlying cause of the transmission events, laboratory Wolbachia-carrying Ae. aegypti colonies could be infected with circulating virus isolates from the region. Measuring the replication/transmission of these viruses in laboratory-reared mosquitoes, alongside previously published Wolbachia-sensitive laboratory viral strains, would determine whether the DENV genotypes circulating in the community were better able to overcome the inhibitory effects of Wolbachia. Sequencing of the circulating DENV isolates from both human and mosquito hosts over the course of an outbreak, and comparison with recent historical isolates, may provide insight into the genetic changes that may have led to viral resistance. Viral resistance against an introgressed Wolbachia strain could be managed using various strategies. One option is to not alter the existing mosquito population, as it is unlikely that Wolbachia-carrying mosquitoes would be more susceptible to DENV than wild-type mosquitoes. Initially, it is likely that only one DENV genotype would be resistant to the antiviral properties of wMel or wAlbB, and Wolbachia may still protect against all other genotypes/serotypes. Over time, the resistant genotype would likely become dominant, and in this scenario supplementary interventions may be of benefit. Releases of mosquitoes that carry a reproductively incompatible Wolbachia strain could be performed to remove the existing strain or to replace it, as long as viral resistance does not extend to all Wolbachia strains. Management of viral resistance could also be achieved through the use of complementary interventions, such as vaccines or vector control strategies that are based on gene drive and/or population suppression.
While many of these complementary methods are still undergoing development and evaluation, initial reports indicate they show potential for future implementation (14,104,105).

CONCLUDING REMARKS
With a body of evidence now demonstrating that Wolbachia-Ae. aegypti introgression methods can substantially reduce the burden of dengue in areas of endemicity, it is expected that application of this technology will undergo a major expansion in coming years (17,19,24,26). The intention is that this will lead to long-term control or local elimination of human-pathogenic arboviruses. Achieving long-term suppression in the field will depend upon the evolutionary stability of the tripartite Wolbachia, Ae. aegypti, and DENV interaction. Wolbachia and Ae. aegypti evolve slowly compared to DENV, and Wolbachia-carrying mosquitoes collected years after release have so far retained their antiviral profile. Yet the rapid mutation rate of RNA viruses suggests it is inevitable that viruses like DENV will eventually adapt to Wolbachia's selective pressure and become resistant to the intervention. The question is, how long will this take? There is no precedent for an antiviral intervention like Wolbachia, and we cannot be certain how viruses will adapt upon continued exposure to this endosymbiont. In the field, DENV will repeatedly face the selective pressures imposed by Wolbachia, but the genetic diversity generated and maintained by the virus will be limited by the need for the virus to infect a range of mosquito tissues while also maintaining competence in the human host. In addition, since the mode of action of Wolbachia appears broad, it is most likely that multiple mutations across the viral genome will be necessary to allow the virus to adapt to this unique cellular landscape. While we have focused on factors that may affect the development of viral resistance to Wolbachia-introgression methods, these considerations are also highly relevant to any gene drive/replacement technology where the virus and host will coexist in a long-term evolutionary relationship. Finally, as Wolbachia-based biocontrol methods increase in scope and longevity, monitoring for the emergence of viral resistance to Wolbachia should remain a critical component of these programs.

ACKNOWLEDGMENTS
This work was supported by National Health and Medical Research Council, Australia, Ideas Grant 1182432 (J.E.F.), Program Grant 1132412 (C.P.S.), and Investigator Grant 1173928 (C.P.S.). We thank Patrick Lane (ScEYEnce Studios) for graphical enhancement of the figures.
A versatile quantum walk resonator with bright classical light

In a Quantum Walk (QW) the "walker" follows all possible paths at once through the principle of quantum superposition, differentiating itself from classical random walks where one random path is taken at a time. This facilitates the searching of problem solution spaces faster than with classical random walks, and holds promise for advances in dynamical quantum simulation, biological process modelling and quantum computation. Here we employ a versatile and scalable resonator configuration to realise quantum walks with bright classical light. We experimentally demonstrate the versatility of our approach by implementing a variety of QWs, all with the same experimental platform, while the use of a resonator allows for an arbitrary number of steps without scaling the number of optics. This paves the way for future QW implementations with spatial modes of light in free-space that are both versatile and scalable.

A discrete-time QW is built from a coin operator, acting on the walker's internal coin state at each position and flipping the coin, and a shift operator that propagates the walker right (left) at each position according to the internal heads (tails) coin state. Concatenating these operations generates the step operator; for the example of a Hadamard walk, applying it to a walker with an initial state such as $|\psi_0\rangle = |0\rangle \otimes |H\rangle$ then results in one full step. Repeated implementation of the step operator changes the state of the walker according to the number of steps (or implementations) n,

$|\psi_n\rangle = \hat{U}^n|\psi_0\rangle = \sum_{x}\left(c_{H,x}\,|x\rangle\otimes|H\rangle + c_{T,-x}\,|{-x}\rangle\otimes|T\rangle\right),$   (S4)

where $c_{H,x}$ and $c_{T,-x}$ are complex amplitudes indicating the probability of the walker occupying each position and $d = 2n + 1$ is the dimension of the space occupied by the walker. With each step, the walker moves to adjacent positions on the 1D line. Subsequently, when occupying two consecutive positions before the step, overlap in positional occupation occurs for shared movements. The complex amplitudes of the walker thus interfere to generate a different probability distribution over the position space than the classical random walk [1]. Measurement of the QW superposition collapses the superposition, forcing the walker to localize at a particular position x with the associated probability

$P_{x,n} = |\langle x|\psi_n\rangle|^2.$   (S5)

Figure S1 shows the probability distribution P(x) for the walker after taking n = 100 steps for symmetrical and asymmetrical initial states with a Hadamard coin. Here it can be seen that the probability of finding the walker is greatest near the ends of the distribution, as the walker destructively interferes with itself at the center. Moreover, by changing the phase of the initial state, the interference may be altered to generate a distribution where the greatest probability is weighted more to one direction of the position space, as seen by the larger spike in probability to the left in (b). Action of the coin operator additionally causes the coin and position states to become entangled, as may be seen in the nonseparable form of Eq. S4. It is these dynamics which cause the QW to obtain its distinct characteristics, such that it may be exploited for simulation and computation applications with up to ballistic speedups.

Fig S1. Graph of the probability distribution for a 1-dimensional Hadamard QW after 100 successive movements or steps for a (a) symmetrical and (b) asymmetrical input state.
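As a check on the distributions of Fig. S1, the discrete-time Hadamard walk described above is straightforward to simulate directly. The Python sketch below is our own illustration (not the authors' code): it applies the coin and shift operators for n steps and returns P(x) of Eq. S5, reproducing the ballistic double-peaked distribution and its dependence on the initial coin state.

```python
import numpy as np

def hadamard_walk(n_steps, coin0):
    """Simulate a 1D discrete-time Hadamard QW and return P(x).

    Amplitudes are stored as amp[position, coin], with coin index 0 = heads
    (shift right) and 1 = tails (shift left); positions span -n_steps..n_steps.
    """
    d = 2 * n_steps + 1                            # dimension of position space
    amp = np.zeros((d, 2), dtype=complex)
    amp[n_steps] = coin0                           # walker starts at x = 0
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin
    for _ in range(n_steps):
        amp = amp @ H.T                            # toss the coin at every position
        shifted = np.zeros_like(amp)
        shifted[1:, 0] = amp[:-1, 0]               # heads amplitude moves right
        shifted[:-1, 1] = amp[1:, 1]               # tails amplitude moves left
        amp = shifted
    return (np.abs(amp) ** 2).sum(axis=1)          # Eq. S5 at every position

# Symmetric initial coin state (|H> + i|T>)/sqrt(2) versus |H> alone:
P_sym = hadamard_walk(100, np.array([1, 1j]) / np.sqrt(2))
P_asym = hadamard_walk(100, np.array([1, 0], dtype=complex))
print(P_sym.argmax() - 100, P_asym.argmax() - 100)  # peaks sit near +/- n/sqrt(2)
```

Running the sketch shows the probability concentrated near x = ±n/√2 for the symmetric state, and skewed toward one side for the bare |H⟩ input, matching the behaviour described for Fig. S1.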
Methods and materials

2.1 Detailed experimental setup

A schematic of the actual experimental setup is given as Fig. S2. Here a pulsed laser (Spectra-Physics Quanta-Ray DCR-11) at a wavelength of 1064 nm generated a single input light pulse with 0 OAM (the initial position of the walker). As we did not have a gated detector for this wavelength, a non-linear SHG crystal converted the wavelength to 532 nm through frequency doubling to remain within the detection range of the ICCD (Andor iStar). This conversion contributed to the high losses of the system, and it also prohibited amplification within the cavity due to the lack of suitable gain media at this wavelength. Structuring a Gaussian intensity distribution of appropriate beam size was accomplished through a spatial filter, which led to further significant losses; together these losses limited the maximum number of steps we could achieve. Propagation of the pulse through a HWP served to prepare the input polarization state symmetry (e.g., diagonal polarization for a symmetrical Hadamard QW), after which it was injected into the resonator (3 m perimeter) by a 50:50 non-polarizing beam-splitter. The concatenation of the QP (q = 0.5) and WP initialized the QW and advanced it by one step with each consecutive round trip. The output pulse from the beam-splitter was subsequently imaged from the QP plane to the mode sorter. Alignment of the mode sorter was attained by constructing an adjoining OAM mode generation setup (see Supplementary Information). Here a 532 nm wavelength diode laser was expanded with a 10× objective lens and collimated through an f = 300 mm lens onto an SLM, where phase and amplitude modulation was utilized to generate superpositions of LG beams. A 4-f system was built to isolate and image the 1st diffraction order onto the mode sorter. The SLM-generated mode was then combined and aligned with the output mode from the resonator with a 50:50 BS. By passing test OAM superpositions from the SLM through the MS, optimal alignment of the elements was achieved. The Fourier plane of the MS configuration was directed and imaged onto the ICCD plane with another 4-f system. Placement of a pop-up mirror before the MS and within the FP imaging system allowed the QP plane image to be re-imaged onto the ICCD with a lens placed between the pop-up mirrors. The second lens in the MS imaging system then served a dual purpose as the second imaging lens in the 4-f re-imaging system of the QP plane. This allowed the output beam structure for each pulse to be individually captured for each round trip, simplifying the alignment process. Subsequent capture of each round-trip pulse was achieved with an Andor iStar ICCD camera with a temporal resolution on the 10 ns scale. Synchronization between the initial laser pulse and the recording window of the camera was attained with a Stanford delay generator working in combination with the iStar on-board digital delay generator.

2.2 The q-plate

2.2.1 Action of the q-plate

The q-plate (QP) used in our experiment was a patterned liquid-crystal static element which imparted geometric phase to the light field transmitted through it. Here the charge (q) of the plate dictates the OAM value generated, while the polarization of the incoming beam controls the handedness of the OAM generated [2]. It follows that polarization could thus be used as a control for the laddering of OAM to higher or lower values in either handedness (positive or negative OAM).
Operation of the QP in the circular polarisation basis is given by the Jones matrix [3,4]

$\hat{Q} = i\begin{pmatrix} 0 & e^{-i2q\phi} \\ e^{i2q\phi} & 0 \end{pmatrix}.$   (S6)

Incident right circular polarisation (RCP), $|R\rangle = [1;0]$, then gains the phase $e^{i2q\phi}$, resulting in an OAM of $\ell = 2q$ per photon. The polarization is subsequently flipped to left circular polarisation (LCP) in the process. Similarly, LCP, $|L\rangle = [0;1]$, acquires the phase $e^{-i2q\phi}$, resulting in a negative OAM of $\ell = -2q$. The factor of i multiplying the phase terms is global and thus may be ignored. It follows that the QP operation may be condensed into the following selection rules:

$\hat{Q}: \; |R\rangle|\ell\rangle \rightarrow |L\rangle|\ell + 2q\rangle, \qquad |L\rangle|\ell\rangle \rightarrow |R\rangle|\ell - 2q\rangle.$   (S7)

Additionally, this effective twisting of the light beam produced by geometric phase has further implications for the physical interpretation, whereby the CP polarization may also be seen in terms of spin angular momentum (SAM). Here, when RCP is incident on the QP, OAM of 2q per photon is generated, and the flip in CP corresponds to a flip in SAM from +1 to −1 per photon (in units of ℏ). It is well known that transfer of SAM and OAM can occur between light and certain matter [4]. Here, SAM interaction occurs in optically anisotropic media such as birefringent materials, and OAM in transparent inhomogeneous, isotropic media [2]. The combination of a thin birefringent (liquid-crystal) plate with an inhomogeneous optical axis in the QP subsequently results in the element coupling these two forms of angular momentum, such that a flip in the SAM may be seen to generate OAM, making the QP a spin-to-orbital angular momentum converter (STOC), where the symmetry of the optical-axis patterning affects the conversion values [4]. Characterisation of the q = 0.5 QP used was then carried out. Figure S3 illustrates the experimental setup implemented to achieve this. A horizontally polarized HeNe laser (wavelength 633 nm) was shone through a QWP before being incident on the QP. A polarization grating (PG) was placed before a Spiricon SPU620 camera, which acted to spatially separate the left and right CP components of the QP-generated beam. Variation of the incoming SAM or CP onto the QP was obtained by rotating the QWP fast axis. It follows that superpositions of SAM with various weightings were incident onto the QP based on the QWP angle. These input weightings are shown in Fig. S4 (a) through projection into the RCP and LCP states. The calculated and measured outcomes of passing these CP superpositions through the QP are given in Fig. S4 (b), also projected into the CP basis. Comparison of these two figures shows that the LCP and RCP input weightings are inverted after passing through the QP. For instance, at 45°, RCP generated by the QWP is detected as LCP after passing through the QP. Similarly, at 135°, the generated LCP is converted to RCP after the QP. At 105°, the CP state with majority weighting is changed from LCP after the QWP to RCP after the QP. It follows that the QP acts to invert the SAM of the incident beam. Further observation of the spatial modes of the beams can be seen from the insets. Here the Gaussian profile of the input beam is evident in Fig. S4 (a) with the false colour map. The spatial profile of the beam after the QP shows the doughnut distribution with a central intensity null, characteristic of OAM-carrying beams. These modes consequently indicate that OAM is generated by the QP along with the reversal of CP for both incoming LCP and RCP as well as superpositions thereof, as expected from the selection rules in Eq. S7.
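The selection rules of Eq. S7 are simple enough to evaluate symbolically. The following short sketch (our own illustration; global phases, including the factor of i, are dropped) applies the q-plate to states written as dictionaries of (polarization, OAM) amplitudes.

```python
import numpy as np

def q_plate(state, q=0.5):
    """Apply the Eq. S7 selection rules to a state stored as
    {(polarization, OAM): amplitude}, with polarization 'R' or 'L'."""
    out = {}
    for (pol, ell), amp in state.items():
        if pol == 'R':
            key = ('L', ell + 2 * q)   # RCP -> LCP, OAM raised by 2q
        else:
            key = ('R', ell - 2 * q)   # LCP -> RCP, OAM lowered by 2q
        out[key] = out.get(key, 0) + amp
    return out

# Horizontal input |H> = (|R> + |L>)/sqrt(2) with zero OAM (cf. Eq. S8):
h_in = {('R', 0): 1 / np.sqrt(2), ('L', 0): 1 / np.sqrt(2)}
print(q_plate(h_in))            # equal weights on ('L', 1.0) and ('R', -1.0)

# A second bare pass undoes the laddering, because the polarization flip
# reverses the OAM step; in the resonator a wave plate (the coin) acts
# between passes so that the OAM ladder keeps extending, step by step.
print(q_plate(q_plate(h_in)))   # back to ('R', 0.0) and ('L', 0.0)
```

The second print highlights why the QP alone cannot drive the walk: without the wave-plate coin between round trips, consecutive passes simply invert one another.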
Moreover, by altering the face through which the beam was incident from front to back, the directional consistency of the element was determined. The experimental outcome is given in Fig. S4 (c), where the QP side of incidence was reversed in the setup, causing the incoming beam to traverse the 'back' of the element. Comparison of Fig. S4 (b) and (c) shows that the experimentally measured projections are identical with the QP reversed, enacting the same SAM inversion on the incoming beam. Therefore, it may be concluded that the QP operation is directionally invariant in performance.

2.2.2 A classical entanglement generator

When the input to the QP is a linear polarisation state, say horizontal, then the output may be expressed as

$\hat{Q}\,|H\rangle|\ell=0\rangle = \frac{1}{\sqrt{2}}\left(|R\rangle|\ell=-1\rangle + |L\rangle|\ell=+1\rangle\right).$   (S8)

From the states formed, the spatial mode described by OAM = −1 is paired to RCP while OAM = +1 is paired to LCP. As a result, these OAM and polarization degrees of freedom in the beam form a non-separable relation such that neither can be factored out. Experimentally, the generated mode was examined by replacing the PG shown in the experimental setup of Fig. S3 with a linear polarizer, projecting the generated mode into the linear polarisation basis. Here, the non-separability of the spatial mode can be seen, where lobes are detected that rotate with the polarizer orientation. Specifically, in the leftmost projected distribution, the vertical orientation of the lobes coincides with the vertical orientation of the polarizer. Rotating the polariser through anti-diagonal, horizontal and diagonal orientations, as depicted by the arrows above these images, shows the lobes assuming the same directionality. It follows that the projective measurements yield the polarization distribution overlaid on the image of the generated beam. The pairing of the OAM and SAM modes in this instance resulted in an azimuthal vector vortex beam being generated. Accordingly, the QP may be seen as a "classical entanglement" generator, such that orthogonal modes may be both paired and laddered with this element.

2.3 The mode sorter

2.3.1 Orbital angular momentum detection

Orbital angular momentum mode sorting relies on the application of a geometric coordinate transformation. The technique takes advantage of the circular geometry associated with OAM such that a geometrical mapping translates circular to rectangular geometry [5], as illustrated in Fig. S6 (a). The resultant unwrapping of the phase distribution causes OAM to be transformed into transverse momentum with a linear phase gradient [6], as demonstrated in the (u, v) coordinate space of the figure. Physically, this (x, y) → (u, v) transformation is achievable through application of a phase distribution, described in Eq. S9,

$\phi_1(x,y) = \frac{d}{\lambda f}\left[\,y\tan^{-1}\!\left(\frac{y}{x}\right) - x\ln\!\left(\frac{\sqrt{x^2+y^2}}{b}\right) + x\,\right],$   (S9)

with its visualization shown in Fig. S6 (b). Here d is the fixed unwrapped beam length, b affects the location in the (u, v) plane, λ is the wavelength, and f is the transforming lens focal length. Associated phase distortions in the 'unwrapped' beam from optical path length variation require correction by a second phase distribution, described in Eq. S10 [5,6] and shown in Fig. S6 (c),

$\phi_2(u,v) = -\frac{b\,d}{\lambda f}\,\exp\!\left(-\frac{2\pi u}{d}\right)\cos\!\left(\frac{2\pi v}{d}\right).$   (S10)

Fig. S6. (a) Illustration of a conformal mapping which 'unwraps' an OAM = 2 mode to a transverse phase gradient, together with the phase distributions performing the (b) mapping transformation and (c) correction for path length differences.

After the correction, the unwrapped phase distribution transforms OAM into transverse momentum as the beam propagates.
A transverse phase gradient of $e^{i\ell\tan^{-1}(y/x)} = e^{i\ell\,2\pi v/d}$ across the beam length is then generated [5][6][7]. The unwrapped beam in the Fourier plane (FP) of a lens forms a diffraction-limited elongated spot [6]. As the unwrapped mode contains a phase gradient limited to the length d, each OAM mode results in a transverse phase gradient that is an integer multiple of the others (shown in Fig. S7 (a)). The lens then forms the spot at a gradient-related position in the focal plane due to its Fourier-transforming action. The spot position (t) is then OAM-dependent [5]. Moreover, the intensity of the spot indicates the 'amount' of any OAM mode present. The mode sorter technique employed with refractive elements allows for efficient detection of a large range of OAM modes and the associated weightings, enabling detection of low-intensity sources in comparison to other techniques such as the SLM modal decomposition mentioned earlier. A distinct disadvantage occurs with this technique, however, when considering cross-talk between adjacent modes. Due to the finite unwrapped beam size, the width of the spot created in the FP is diffraction-limited, resulting in a constant overlap between modes [5,6]. It was subsequently demonstrated that a maximum of 80% may be achieved in the required position with good alignment, while the other 20% spreads into the adjacent mode positions [7]. Increasing the unwrapped beam size may appear to remedy the situation, as it would increase the separation distance between the spots. This, however, also decreases the phase gradient, resulting in the spot width-to-spacing ratio remaining similar [6]. Consequently, the overlap between modes is an intrinsic property of the system. A solution was suggested by Berkhout et al. [5], though, whereby the unwrapped beam length may be increased by simply copying the unwrapped modes and placing them next to each other. As the linear gradient ranges over periods of 0 to 2π for every OAM value, placing copies alongside one another does not disrupt the unwrapped beam configuration, allowing this technique to decrease the spot width without altering the spacing [6,8]. The copied beams then add an extra N_c rotations to this 0 to 2π periodic gradient. Further investigation and subsequent implementation was then carried out by including two additional phase transformations on SLMs after the mode sorter optics [6,8]. Here the first transformation copied the phase-corrected unwrapped beam according to a fan-out phase term that produced N_c = 2N + 1 copies placed alongside each other based on the angular separation ω in the x-direction, causing the element to be labeled a fan-out element (FOE) [6]. In this phase term, γ_m and α_m serve as phase and intensity parameters, respectively, for the different orders (m). The second transformation subsequently served as a correction for this fan-out operation. Ruffato et al. recently combined the log-polar coordinate and fan-out functions, as well as the respective phase corrections, to condense the operations into two diffractive elements [9]. Here the copying element was combined with the unwrapper so that the beam could be simultaneously copied N_c times and unwrapped, with the copies alongside each other, before encountering the second element [10]. Subsequently, the correction terms for the path length difference from the unwrapper are combined with the corrections for joining the copied beams, such that a single, once-off dual correction is carried out on the beam.
It follows that more accuracy should be expected with this method, in addition to the convenience of a more compact system, where the alignment is restricted to half the elements of the scheme implemented by Mirhosseini et al. [6]. The working principle behind the elements by Ruffato et al. is depicted in Fig. S8.

Mode sorter performance

To characterize the expected performance in detecting the QW, analysis of both the refractive and diffractive mode sorting elements used in the experiment was carried out. Table S3 gives the fabrication parameters of the tested sorters. The tested diffractive sorters varied in the number of copies that were created, generating 1 and 3 copies respectively. As the 1-copy and refractive sorters both generate a single unwrapped beam, the expected difference in performance between them is limited to the range of OAM modes that can be accurately sorted. This is due to the refractive sorter being able to receive beams of larger radii, compensating for the increased deviation of the angle of incidence of the beam rays directed to the sorter as the OAM increases. A larger range of OAM values should thus be sorted accurately [7]. As a result, evaluation of the refractive sorter was restricted to the comparable OAM ranges, while the 1- and 3-copy sorters were further characterized, allowing a more accurate comparison due to their parameter similarities. Figure S9 experimentally illustrates the transformation performed by the mode sorter on LG modes ranging from −3 to 3 OAM. Here the top row shows the modes generated by the SLM and directed through the sorter. The OAM per photon is given above the respective spatial modes. In the row below, the elongated spots formed in the FP of the sorting lens are shown. Cross-hairs in the images mark the position of the 0 OAM mode. Consequently, the OAM-dependent sorting power of the elements may be clearly seen, where the negative OAM spots are formed to the left of the cross-hairs and the positive OAM spots to the right. Additionally, the position moves incrementally in the OAM handedness direction, based on the OAM value.

Spot positions

Quantitative evaluation of the spot positions was carried out whereby experimental measurements of the spots formed for varying OAM were compared to calculated values determined from Eq. S11, $t(\ell) = \ell\,\lambda f/d$, allowing both the accuracy and consistency of each sorter to be anticipated. Considering Fig. S10 (a), the experimental shift in spacing agrees well with the calculated value obtained from the design parameters, with a difference of 30.1 µm − 29.6 µm = 0.5 µm. Additionally, the positions form a straight line, as evidenced by the high R²-value of 0.9995. This indicates a consistent positional shift, which should lead to defined boundaries between the experimentally formed OAM spots. As a result, the detected intensities at those positions should be an accurate reflection of the modes and weightings in the beams being sorted. Based on the spacing calculations, the expected separation distance for the 1-copy mode sorter was 59.6 µm, compared to the average experimental value of 62.5 µm in Fig. S10 (b). This deviation is notable, with a 62.5 µm − 59.6 µm = 2.9 µm difference, as it accumulates with the number of modes, as evidenced in Fig. S10 (b) for the higher OAM modes. A straight line, however, is still formed by the experimental shift in position with OAM.
This may be explained by the length of the unwrapped beam, which may have been slightly shorter in the experimental implementation than the quoted parameter. The performance of the 1-copy sorter shall thus be evaluated with the experimental positional shift, where adequate performance may be expected given a similarly high R²-value of 0.9994 compared to the refractive sorter. Accordingly, the sorter performs adequately. For the 3-copy mode sorter, comparison of the calculated and measured values yielded a 126.9 µm − 126.6 µm = 0.3 µm difference, indicating excellent agreement. Additionally, the R²-value of 1 gives rise to good expectations in terms of detecting the correct OAM values.

Alignment range and cross-talk

By binning rectangular areas on the CCD along the direction of the spot movement, the intensity was integrated to determine both the presence and weighting of different OAM modes (a sketch of this binning procedure is given below). Here the bin positions were determined by Eq. S11, with appropriate adjustment in the 1-copy case. Accordingly, determination of the viable range of OAM modes for which the MS may be used was achieved by binning for a range of OAM modes across the CCD image captured for each generated mode. OAM modes from −30 to 30 for the refractive sorter and −15 to 15 for the diffractive sorters were then sent separately through the MS and the detected modes read out. The results are given in Fig. S11 for the different sorters. Here the presence of cross-talk is indicated by the off-diagonal elements. It may be seen that the minimum cross-talk achievable was 25% for the most aligned spot detected (as shown by the range of the color map) for both the refractive and 1-copy sorters in Fig. S11 (a) and (b), respectively. It may also be observed that near 0 OAM, the refractive sorter generated a larger amount of cross-talk than the diffractive mode sorter elements for the best alignments that were achievable. Figure S11 (c) yields the characteristic density plot for analysing the 3-copy mode sorter. Here the minimum cross-talk achievable may be seen to be substantially reduced, with only 10% being detected in the incorrect modes. Movement of the strongest detected modes away from the diagonal indicates that the OAM mode detected is no longer correct. As a result, the viable sorting range may be established by observing the number of modes where the maximum intensity remains within the diagonal. It follows from the discussion on the effect of the skew angle of the incident beam that deviation in the detected modes is expected to occur as the OAM value increases. This should occur later in the refractive sorter case, however. The deviation is observed as a decrease in intensity of the diagonal values and a spreading in the off-diagonal terms as the generated OAM values digress from the central 0 OAM value. In Fig. S11 (a), the range of modes for which correct OAM modes are detected is [−20, 20] with the refractive sorter. From Fig. S11 (b), with the 1-copy sorter, about 15 modes fall within a range of acceptable accuracy where the intensity detected in the correct mode remains above 60%. A greater range may still be used for the mode sorter, provided correction terms are used for the non-existent modes being detected as well as for the loss of weighting in the correctly detected modes. However, this is also limited, as past 20 modes the defining spot disintegrates into a fringe array for which the positions are not indicative of the modes present.
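The binning procedure referenced above can be sketched as follows. This is our own illustration: the spacing and cross-hair column in the example are assumed values, whereas in the experiment they follow from Eq. S11 and the alignment of the 0 OAM spot.

```python
import numpy as np

def modal_weights(image, ells, spacing_px, center_px):
    """Estimate relative OAM weightings from a Fourier-plane camera image.

    A rectangular bin of width `spacing_px` (one spot spacing) is centred on
    the expected spot position of each OAM value `ell` and integrated over
    all rows, mimicking the binning used for the cross-talk matrices.
    """
    weights = {}
    for ell in ells:
        lo = int(round(center_px + ell * spacing_px - spacing_px / 2))
        hi = lo + int(round(spacing_px))
        weights[ell] = float(image[:, max(lo, 0):max(hi, 0)].sum())
    total = sum(weights.values())
    return {ell: w / total for ell, w in weights.items()}

# Toy example: a noisy image with one bright 'spot' at the ell = +2 position,
# using an assumed spacing of 12 px and the ell = 0 cross-hair at column 128.
rng = np.random.default_rng(0)
img = rng.random((64, 256)) * 0.05
img[:, 128 + 2 * 12 - 3:128 + 2 * 12 + 3] += 1.0
w = modal_weights(img, range(-7, 8), spacing_px=12, center_px=128)
print(max(w, key=w.get))  # -> 2
```

Because the bins tile the focal plane edge to edge, any diffraction-limited overlap between neighbouring spots registers directly as the off-diagonal cross-talk discussed above.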
The range of viable modes with detected weightings greater than 60% for the correct mode increased to 23 in the 3-copy case. Additionally, the spread of the cross-talk between modes was reduced due to the additional number of unwrapped beams. A stronger alongside-diagonal may be seen, however, in comparison to Fig. S11 (b). This may be attributed to the greater misalignment between the elements, as the sensitivity of the element increased with the number of beams copied. Subsequently, additional fringes were caused directly adjacent to the spot, yielding a stronger presence of erroneous detection of adjacent OAM modes. It follows that greater accuracy of the detected modes was found with the diffractive sorters; however, the range of OAM modes was diminished by the small radius of the element. As a result, the refractive sorter remained consistent over almost twice the OAM range in comparison to the diffractive elements, illustrating this point for a beam size of 6 mm. Increasing the incoming beam size will subsequently also increase the sorting range.

Spot resolution

By superimposing sets of alternating OAM spots, the resolution of adjacent spots was investigated. This was done for the 1-copy sorter in Fig. S12 (a). Here superpositions of even and odd OAM modes in the interval [−7, 7] were separately sent through the sorter and imaged. Superimposing these modes clearly shows how the modes overlap (also shown with the 2D profiles below the spots). A clear overlap may consequently be seen, which will lead to additional cross-talk through the detection of OAM modes not present. Additionally, a set of adjacent modes with OAM = [−1, −8] was sent through the mode sorter, with the resulting distribution shown in Fig. S12 (b). The convolution resulted in a distortion of the intensity spectrum associated with the OAM present, as well as eliminating the ability to visually distinguish a spot's position. The latter may be evidenced by the appearance of only 6 spot 'tails' at the bottom of the convolution when 8 OAM modes are present. The 3-copy mode sorter, however, effectively separated and defined the spots for adjacent OAM modes, as illustrated in Fig. S12 (c). Here a superposition of adjacent OAM modes [−7, 7] was generated, with the corresponding detected spots shown in the figure. A 2D profile is shown below the spots, clearly illustrating the reduction in overlap and increased spot resolution.

Weighted detection

Accurate detection of mode weightings was also evaluated, with the results given in Fig. S13 (a) and (b) for the 1- and 3-copy sorters, respectively. Here a distinct superposition of OAM modes was multiplexed by the SLM and sent through the sorters. The comparative accuracy of the multiplexed and detected OAM modes was determined through their similarity, defined in terms of W_th(ℓ), the theoretical or multiplexed weighting associated with the OAM mode ℓ, and W_exp(ℓ), the detected equivalent of the mode. Observation of Fig. S13 (a) and (b) indicates that the multiplexed and detected weightings resemble each other, showing that either of the sorters would be suitable for detection; however, a significant increase in the accuracy of modal detection occurs as the copy number increases. More specifically, the similarity of 79.1% for the 1-copy sorter is increased by 22% when adding two copies. Consequently, both the refractive and diffractive sorters exhibited advantages and disadvantages associated with their implementation.
Specifically, the refractive mode sorter was more robust, allowing a greater range of beam sizes to be sorted easily and maintaining a larger range of OAM values for larger input beam sizes. The accuracy of the measured weightings with a 1-copy sorter such as the refractive one, however, is adversely affected, with 79.1% accuracy to be expected under reasonable alignment. Here the 3-copy diffractive sorter is more advantageous, with a large increase in accuracy given fair alignment. In addition, the measured cross-talk was substantially smaller, with a reduction in the power erroneously detected in incorrect OAM modes. Implementation of this sorter, however, came with a higher sensitivity to misalignment, resulting in a more difficult detection system as well as a significant restriction on the size of the beam that can be sorted. Furthermore, the variation of sortable beam sizes is small: only a deviation of 0.300 mm is possible, compared to the 8 mm range achievable with the refractive sorter. Consequently, for more robust requirements the refractive sorter is favorable at the expense of system accuracy; conversely, greater accuracy is achieved with the 3-copy diffractive sorter at the expense of range and some stability of the detection system.

Beam profile and spatial filtering

Eliminating aberrations and unwanted modes contained in the beam initially generated by the laser required the construction of a spatial filter. Figure S14 (a) shows the transverse output profile generated by the Spectra-Physics DCR-11 laser. The transverse distribution makes evident the presence of aberrations and additional modes, in addition to a large beam width of 3 mm. A spatial filter was implemented with a 50 µm pinhole placed in the Fourier plane of a 4-f system with lenses of focal lengths f1 = 750 mm and f2 = 200 mm. The pinhole was three times larger than the calculated Gaussian beam width in the Fourier plane, so that the extraneous modes were filtered out spatially. The focal length ratio (f2/f1) between the lenses additionally formed a demagnifying telescope, reducing the beam diameter to 2 mm.

Fig. S14. False color map images of the near-field pulsed laser output beam (a) before spatial filtering and (b) after spatial filtering. The spatially filtered beam is smaller than the original as it was demagnified in the spatial filtering process.

The spatial filter output beam is shown in Fig. S14 (b), where a Gaussian intensity distribution can be seen along with the appropriate demagnification. In addition, the reduced size allowed for a more stable walk, with the beam dimensions remaining below the size of the optics.

Pulse overlap and adjusting the prediction

The time gap t between successive output pulses from the resonator is related to the resonator length L through t = L/c, where c is the speed of light in air. The resonator round-trip time must be longer than or equal to the temporal pulse width to avoid overlap of the circulating pulse and thus overlap of the QW steps. However, experimental considerations as well as stability and alignment factors imposed an upper limit of 3 m on the resonator perimeter. It was subsequently designed in a 0.3 m × 1.7 m rectangular configuration.
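The spatial-filter arithmetic above can be sketched as follows. The wavelength is not stated in the text, so the value below is an assumption, and the pinhole rule of thumb (three times the focal-plane beam width) is taken directly from the description above.

```python
import math

def focal_plane_waist(lam, f, w0):
    """1/e^2 Gaussian waist at the focus of a lens: w_f = lam*f/(pi*w0)."""
    return lam * f / (math.pi * w0)

lam = 532e-9           # assumed wavelength [m] -- not given in the text
f1, f2 = 0.750, 0.200  # lens focal lengths [m], from the text
w0 = 1.5e-3            # input 1/e^2 radius [m] (3 mm beam width)

w_f = focal_plane_waist(lam, f1, w0)
pinhole = 3 * (2 * w_f)   # pinhole ~3x the focal-plane beam width
demag = f2 / f1           # telescope demagnification ratio
print(f"focal waist {w_f*1e6:.1f} um, pinhole {pinhole*1e6:.0f} um, "
      f"demagnification {demag:.2f}")
```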
Measurement of the temporal pulse width of the laser was carried out by placing a Thorlabs DET210 photodiode before the ICCD and averaging over 10 intensity pulses on a Tektronix TDS2024B 1 GHz oscilloscope. The corresponding temporal parameters were then estimated by fitting the pulse to a Gaussian function of the form

f(x) = a exp(-(x - b)^2 / (2c^2)) + k,   (S14)

where a determines the height of the pulse function, x is the position along the time axis, which is offset by b, c indicates the width of the pulse, and k offsets the function along the y-axis. The parameters extracted by fitting Eq. S14 are summarized in Table S2. An SSE (sum of squared errors) of 0.002573 and an R-squared value of 0.974 from the fit both indicate that the function and its parameters adequately reflect the measured pulse for the walker. The parameters may subsequently be used both to give a mathematical description of the pulse and to determine the full-width at half maximum (FWHM) and the full-width at one-tenth maximum (FWTM) of the pulse.

Table S2. Parameters of the Gaussian fit to the average pulse measured from the Spectra-Physics laser.
Parameter | Fitted value
a | 0.06045
b | 0.106 x 10^-9
c | 6.107 x 10^-9
k | 0.0040

The FWHM is then given by FWHM = 2 sqrt(2 ln 2) × c = 14.3 ns, and the FWTM by FWTM = 2 sqrt(2 ln 10) × c = 26.6 ns. As the parameter b indicates the position of the peak in time, setting b = 0 centres the function at the origin, making it easier to work with. Likewise, the offset k is irrelevant here, hence k = 0. It follows that the intensity distribution of the pulse may be characterized by the Gaussian function G(t) = a exp(-t^2/(2c^2)).   (S15)

Fig. S15. Illustration of pulse overlap and intensity effects for each pulse emitted from the resonator, i.e. for each step in the quantum walk.

It follows that the FWHM of 14.3 ns is close to the 10 ns resonator round-trip time. A comprehensive prediction of the mode spectrum must, however, consider the FWTM value, which is almost 3 times larger than the 10 ns expected from the resonator design, indicating a significant overlap. This was corrected for by modelling the overlap of the OAM mode spectra. Two factors were considered in the experimental simulation of the step distribution expected for each round trip: (1) how the pulses overlap, and (2) the intensity decrease occurring each round trip (RT) due to partial transmission through the BS. More specifically, with a FWTM of about 30 ns and a round-trip time of 10 ns, the output pulse will contain contributions from the previous pulse as well as the next pulse. This is illustrated in Fig. S15. As the measurements were taken every 10 ns for each respective step, the overlap was minimized, as indicated by the blue dotted box. The second factor is the decrease of the internal resonator intensity, which leads to a difference in the intensities of the overlapping components; this is again illustrated in Fig. S15. Consequently, the correction factor for the 'previous' pulse overlap contribution is larger than that of the 'future' pulse. A general formula in terms of a transmission percentage T and the corresponding reflection R = (1 - T) of the pulse is easily derived and is given in Eq. S16. The temporal shape of the pulse additionally affects the overlap between the pulses; this was approximated by the best-fit curve for the average temporal pulse as determined in Eq. S15. These intensity and temporal-width factors were used to augment the simulated QW.
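The fit and the width formulas can be reproduced directly. The sketch below assumes only the Gaussian form of Eq. S14 and the fitted value of c from Table S2; the commented-out curve_fit call shows how the oscilloscope trace (arrays t, v, not reproduced here) would be fitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, a, b, c, k):
    """Eq. S14: offset Gaussian used to model the average pulse."""
    return a * np.exp(-(t - b) ** 2 / (2 * c ** 2)) + k

# With t (time base, s) and v (averaged voltage) from the oscilloscope:
# popt, _ = curve_fit(gauss, t, v, p0=[0.06, 0.1e-9, 6e-9, 0.004])
# a, b, c, k = popt   # initial guesses taken from Table S2

c = 6.107e-9                              # fitted width from Table S2
fwhm = 2 * np.sqrt(2 * np.log(2)) * c     # ~14.3 ns
fwtm = 2 * np.sqrt(2 * np.log(10)) * c    # ~26-27 ns
print(f"FWHM = {fwhm*1e9:.1f} ns, FWTM = {fwtm*1e9:.1f} ns")
```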
An adjusted diagram of the pulse overlap is presented in Fig. S16 (a) and (b). Figure S16 (a) indicates the extent of the pulse overlap, while (b) illustrates the effects of the different weightings for a 50:50 beamsplitter. The dotted lines indicate the section of the pulse that was gated by the Andor camera for the nth step in the quantum walk series. This gated section of the pulse was determined from the complete pulse width PW (40 ns, based on the modelled pulse) and the gate width GW (i.e. the time section of the pulse captured), together with the Andor gating time. The correction to the quantum walk probability distribution for the 1st pulse (n = 0) was derived accordingly, and the general model for the nth step is given by Eq. S18, where QW_P(n) is the quantum walk probability distribution of the nth step. For comparison, Fig. S17 shows the normalized expected and corrected Hadamard probability distributions for the 6th pulse (n = 5). The corrected distribution displayed is for the case of a 50:50 beamsplitter acting as the resonator output window.

Fig. S17. Step 5 of the OAM QW distribution with (Corr. QW) and without (Calc. QW) the experimental corrections, for step n = 5, i.e. detected pulse 6.

Comparing the calculated distribution to the corrected one, the same diverging trend occurs. The maxima still appear near the edges of the distribution, with the highest weightings occurring at the same positions. It follows that the expected difference arising from the overlapping modes within the cavity is a broadening of the peaks as well as the presence of probabilities in the adjacent positions. Consequently, the generated QWs still retain the characteristic qualities associated with pure QWs.

Pulse synchronization and gating

An Andor iStar ICCD camera (734 series) with a photocathode optical shutter allowed gating times on the 10 ns scale, and thus isolation of the correct round trip. The opening and closing of the photocathode were determined by monitoring, with an oscilloscope, the electronic signals sent to enact the operation in the camera. As indicated in Fig. S18, the opening is marked by a negative pulse and the closing by a positive pulse.

Fig. S18. Schematic illustrating the procedure necessary to correctly capture an output pulse for a single step, or round trip, in the quantum walk.

The camera and laser pulse emission were synchronised with a digital delay generator so as to locate the desired output pulse (step). A specific time delay was then placed between the pulse emission and the opening of the photocathode. The delay chosen was equivalent to the time taken for the electronic signals to travel to the respective components, plus the time for the pulse to reach the resonator, circulate the resonator a desired number of times, travel through the sorter and reach the camera. To determine the delay, a photodiode was placed at the same distance from the output as the camera. Arrival of the fired pulse triggered the oscilloscope, which was also monitoring the photocathode trigger signal in a second channel. Adjustment of the calculated time delay altered the temporal position of the first detected pulse (step 0), as depicted in Fig. S19. After determination of the initial pulse delay, isolation of the desired step or pulse output was achieved by adding an additional delay of Nt, where t is the resonator circulation time and N the number of steps.
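The structure of the overlap model (Eq. S18) can be sketched as follows. The coefficient values are placeholders: the actual coefficients follow from the pulse shape (Eq. S15) and the transmission formula (Eq. S16), which are not fully reproduced in the text, so both the tail fractions and the function names here are assumptions.

```python
import numpy as np

def overlap_coeffs(T=0.5, tails=(0.10, 0.25)):
    """Illustrative coefficients (c_{n-2}, ..., c_{n+2}) for Eq. S18.
    `tails` are ASSUMED fractional Gaussian-tail overlaps of the 2nd- and
    1st-neighbour pulses inside the gate window. Earlier pulses are
    brighter (only a fraction T leaves the cavity per round trip), so the
    'previous' contributions are weighted up by 1/T per step and the
    'future' ones down by T per step."""
    t2, t1 = tails
    c = np.array([t2 / T**2, t1 / T, 1.0, t1 * T, t2 * T**2])
    return c / c.sum()

def corrected_distribution(qw, n, c):
    """QW_corr(n) = sum_k c_k QW_P(n+k), k = -2..2, per Eq. S18.
    `qw` maps a step index to its (zero-padded) probability vector."""
    return sum(ck * qw(n + k) for ck, k in zip(c, range(-2, 3)))
```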
Determination of the appropriate delay required for synchronization of the camera and laser pulse is demonstrated in Fig. S19. The graphs along the lower row of the figure are signals recorded by an oscilloscope monitoring the photocathode triggering signals as well as the arrival of the emitted laser pulse due to a trigger pulse from the digital delay generator, on the same nanosecond time scale (x-axis); the y-axis represents the associated voltages. The blue profile is the laser pulse detected by the photodiode, placed the same distance as the camera from the output, and the red profile shows the trigger signals operating the opening (first large dip) and closing (first large spike) of the photocathode. Transparent green overlays on the respective graphs emphasize the period during which the camera is recording the pulse intensity, and thus the section of the pulse captured. As indicated, the gate width (capturing time) was set to 10 ns.

Fig. S19. Experimental illustration of the synchronization required between the Andor camera and the pulsed laser for accurate capture of the output pulse from the QW resonator. Delay time settings were decreased from (a) to (d).

The images above each of the oscilloscope graphs are the transverse intensity distributions captured in the respective 10 ns gated windows, where the delay between the photocathode and laser triggers from the digital delay generator was systematically decreased from (a) through to (d) for a three-lobe spatial profile. It can thus be seen from (a)-(b) how the pulse moves into the capturing window of the camera, before the maximum pulse intensity is captured in (c) and the pulse moves past the window again in (d). The variation in intensity of the captured modes clearly shows the synchronization effect of the time-delay parameter, with (c) the desired setting for ideal analysis of an output pulse. Application of the appropriate synchronization time delay, as determined with the method illustrated in Fig. S19 for the first pulse in the resonator setup, allowed for the capture of the desired QW step or output pulse. Subsequent addition of a step-related time delay constant, Nt, to the initial delay allowed for the capture of the intensity distribution related to that step (N) or output pulse. This is illustrated in Fig. S20 for the experimental setup. Here successive output pulse intensity distributions for N = [0, 4] were captured with a 10 ns gate width in the mode sorter Fourier plane. A clear evolution in the distribution may be seen across the steps, indicating a spread in the intensity distribution and thus the successful capture of additional output pulses according to the associated delay settings. It follows that this method is effective in attaining the OAM spectra of each QW step, allowing real-time observation of the walk.

Fig. S20. Example evolution of the output intensity distributions, captured every consecutive 10 ns from the first output pulse, marking each round trip made by the circulating light beam in the QW resonator.

Experimental data correction

Following the noted overlap occurring in the experimental setup and the associated derivation of the correction to the theory, the measured experimental results were expected to have an adjusted distribution as indicated by Eq. S18. Accordingly, the simulated distribution was altered as illustrated by Fig. S17. Comparison between the directly measured experimental distribution and the altered theory thus allowed the QW distribution to be evaluated. The results in this form are shown by the gray bar graphs (to the left) in Fig. S21 for the symmetric Hadamard QW case, Fig. S22 for the asymmetric Hadamard QW, Fig. S23 for the QW whose symmetry was changed with the QWP fast-axis orientation, Fig. S24 for the identity-coin QW and Fig. S25 for the NOT-coin QW.
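The delay bookkeeping for isolating step N is simple arithmetic, sketched below; the initial delay t0 is the experimentally tuned value found with the photodiode method, so the number used here is a placeholder.

```python
t0 = 123.0    # initial delay for step 0 [ns] -- hypothetical value
t_rt = 10.0   # resonator round-trip time [ns], from the text

def gate_delay(N, t0=t0, t_rt=t_rt):
    """Photocathode gate delay that isolates QW step N: t0 + N*t_rt."""
    return t0 + N * t_rt

print([gate_delay(N) for N in range(5)])  # steps N = 0..4, as in Fig. S20
```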
However, in order to appropriately evaluate the QW distributions with respect to what is traditionally expected, it was also possible to reverse the overlap and correct the measured distribution so that it reflected the characteristic QW commonly seen. This was accomplished by taking the QW distribution measured for the nth step and applying the reverse of Eq. S18, such that QW_P(n)_meas could be retrieved, or de-convoluted, from the measured distribution. To generate the experimental correction, consider abbreviating the terms specified in Table S3 by the variables listed there. Equation S18 then becomes

QW_PCorr(n) = c_(n-2) QW_P(n-2) + c_(n-1) QW_P(n-1) + c_n QW_P(n) + c_(n+1) QW_P(n+1) + c_(n+2) QW_P(n+2),   (S19)

where QW_PCorr(n) is the probability distribution measured for the nth step as a result of the overlapping pulses in the resonator. Now, in order to correlate the measured distribution with the expected one, both should be normalized by dividing each distribution by its sum, i.e. S = sum of QW_PCorr(n) and S_Exp = sum of QW_P(n)_meas. Letting QW_P(n)_NORMmeas = QW_P(n)_meas / S_Exp, it follows that

QW_P(n)_NORMmeas = [c_(n-2) QW_P(n-2) + c_(n-1) QW_P(n-1) + c_n QW_P(n) + c_(n+1) QW_P(n+1) + c_(n+2) QW_P(n+2)] / S.   (S20)

As we are now modifying the experimentally measured data, QW_P(n) becomes the corrected experimental data. Subsequently,

QW_P(n)_meas = [QW_P(n)_NORMmeas × S - c_(n-2) QW_P(n-2) - c_(n-1) QW_P(n-1) - c_(n+1) QW_P(n+1) - c_(n+2) QW_P(n+2)] / c_n.   (S21)

After applying the correction, any spurious negative values were treated as background errors and set to 0, as a negative probability is not physically possible. The resulting distributions were then normalized and are presented in the inset graphs to the right in Fig. S21 to Fig. S25. Here the blue bars indicate the traditional distributions expected for these types of QWs, with no overlap adjustment shown. This may be clearly seen where the uncorrected measured values occupy adjacent OAM states or positions, while the corrected results occupy alternate OAM states or positions. The double-sided arrows in each case indicate the interchangeable corrections between the simulated and measured probability distributions and the subsequent matching correlations between the experimental and simulated distributions.
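The de-convolution of Eq. S21, including the clipping of negative values and the final renormalization described above, can be sketched as follows; the function and argument names are our own.

```python
import numpy as np

def deconvolve_step(norm_meas, S, qw, c, n):
    """Invert Eq. S18 at step n (Eq. S21). `c` holds the coefficients
    (c_{n-2}, c_{n-1}, c_n, c_{n+1}, c_{n+2}); `qw(k)` returns the
    probability vector of step k (a zero vector if unavailable);
    `norm_meas` is QW_P(n)_NORMmeas and S the normalization sum."""
    offsets = (-2, -1, 0, 1, 2)
    others = sum(ck * qw(n + k) for ck, k in zip(c, offsets) if k != 0)
    rec = (norm_meas * S - others) / c[offsets.index(0)]
    rec = np.clip(rec, 0.0, None)   # negative probabilities -> 0
    return rec / rec.sum()          # renormalize, as described above
```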
Detection of artificial pulmonary lung nodules in ultralow-dose CT using an ex vivo lung phantom Objectives To assess the image quality of 3 different ultralow-dose CT protocols for pulmonary nodule depiction in a ventilated ex vivo system. Materials and methods Four porcine lungs were inflated inside a dedicated chest phantom and prepared with n = 195 artificial nodules (0.5–1 mL). The artificial chest wall was filled with water to simulate the absorption of a human chest. Images were acquired with a 2 × 192-row detector CT using a low-dose reference protocol (tube voltage of 120 kV) and 3 different ULD protocols (effective doses: 1 mSv and 0.1 mSv, respectively). A different tube voltage was used for each ULD protocol: 70 kV, 100 kV with tin filter (100 kV Sn) and 150 kV with tin filter (150 kV Sn). Nodule delineation was assessed by two observers (scores 1–5, 1 = unsure, 5 = high confidence). Results The diameter of the 195 detected artificial nodules ranged from 0.9–21.5 mm (mean 7.84 mm ± 5.31). The best ULD scores were achieved using the 100 kV Sn and 70 kV ULD protocols (4.14 and 4.06, respectively). The two protocols were not significantly different (p = 0.244). The mean score of 3.78 in the ULD 150 kV Sn protocol was significantly lower compared to the 100 kV Sn ULD protocol (p = 0.008). Conclusion The results of this experiment, conducted in a realistic setting, show the feasibility of ultralow-dose CT for the detection of pulmonary nodules.

Introduction

Lung cancer is the leading cause of cancer deaths in men, and the second leading cause of cancer deaths in women after breast cancer [1,2]. Curative treatment in the form of surgical resection offers the best chance of survival. Therefore, detection of early-stage cancer by computed tomography imaging is indispensable for successful treatment. The analysis conducted within a large multicenter trial (National Lung Screening Trial [3,4]) showed a 20% reduction in the mortality rate of a high-risk population undergoing annual lung cancer screening with low-dose computed tomography (LDCT). However, debates about an assumed radiation-associated risk of cancer development from ionizing radiation continue to limit the widespread application of LDCT screening. Due to repetitive CT (computed tomography) examinations, patients who receive an annual LDCT (effective dose of 5.2 mGy, or approximately 1 mSv) from the age of 50-75 years showed an additional risk of 1.8% (95% CI 0.5-5.5) for lung cancer development [5]. Thus, following the ALARA (as low as reasonably achievable) principle, the radiation dose of each computed tomography study should be reduced to a level that is "as low as reasonably achievable". Various strategies have been developed for lowering the radiation dose of CT without influencing the signal-to-noise ratio, including lowering the tube voltage and/or the tube current, noise reduction filters, iterative reconstruction, selective in-plane shielding and automated exposure control [6-10]. Recently, ultralow-dose scans with doses close to those of conventional chest radiography (approximately 0.3 mSv) [1,2,11,12] have become possible on third-generation dual-source CT scanners by using a tin filter (TF) mounted in front of both X-ray tubes. This so-called spectral shaping of the high-kVp beam leads to a more efficient X-ray beam, which results in a reduction of the radiation dose [13].
Furthermore, a new generation of IR was developed for third-generation dual-source CT, so-called advanced modeled iterative reconstruction (ADMIRE), which allows for a decrease of the radiation exposure by retrospectively compensating for increased image noise. Most previous studies on the detection of pulmonary nodules in ULD CT were performed in anthropomorphic chest phantoms or on second-generation dual-source CT with outdated IR versions [7,14-16]. To our knowledge, little data is available regarding the image quality and the diagnostic confidence of ULD chest CT with ADMIRE reconstructions for the detection of pulmonary nodules in a realistic phantom model. Consequently, we created artificial nodules of clinically relevant sizes in an ex vivo lung phantom containing porcine lung explants. This experimental setup allows for repeated scanning using different combinations of tube potential and tube current-time product in ULD chest CT. The purpose of this study was to evaluate the feasibility of ULD CT as an early-stage lung cancer detection method, using ULD CT protocols with 3 different tube potentials for the detection of pulmonary nodules in an ex vivo lung phantom.

Ex vivo lung phantom

For this study, a double-walled chest phantom (Artichest, PROdesign GmbH, Heiligkreuzsteinach, Germany) was employed as previously described [17,18]. The system comprises two copolymer containers with a 2-5 cm space between the inner and outer shell and an artificial diaphragm, both of which were filled with pure water to simulate the attenuation of the chest wall and upper abdomen of an overweight patient with a body weight of about 100 kg, as previously described [19]. Four freshly excised porcine lungs (including the heart) were subsequently placed in the bottom casing of the phantom container and connected to the room atmosphere via a 7.5 mm tracheal tube (Portex; SIMS Portex Ltd., Hythe, Kent, UK) through a dedicated outlet. In order to inflate the lungs for nodule placement, the tracheal tube was connected to a resuscitation bag. After the nodule insertion, the phantom casing was hermetically sealed and the lungs were passively inflated by continuous evacuation of the artificial pleural space to −2 to −3 × 10³ Pa. All heart-lung explants of mature pigs were obtained from a local slaughterhouse (Muenchner Schlachthof Betriebs GmbH, Zenettistraße 10, 80337 Munich, Germany) with special attention to intact pleura. Institutional Review Board approval or animal research ethics committee approval was not required because no human being participated in this study and no animal was euthanized for the particular purpose of this study. No anesthesia, euthanasia, or any kind of animal sacrifice was part of this study.

Preparation of artificial lung nodules

Lung nodules were simulated using agar gel (Roth Chemie GmbH, Karlsruhe, Germany), prepared from a 3% agar-water solution. During continuous inflation via the resuscitation bag, 20-30 injections of 0.5-1.0 mL hand-warm agar gel were carried out using a 5 mL syringe (BD Discardit II, Becton Dickinson, Heidelberg, Germany) and a 20 G cannula (Sterican, B. Braun Melsungen AG, Melsungen, Germany). Injections were distributed over the whole lung at depths of 3-6 cm, resulting in artificial nodules with a mean density of 25 Hounsfield units (HU) at 100 kV.

CT scan settings

All multi-detector computed tomography acquisitions were performed on a third-generation dual-source CT machine (SOMATOM Force, Siemens Healthcare, Erlangen, Germany).
Each prepared lung was scanned using identical parameters: collimation 0.6 mm, number of slices 192, gantry rotation time 0.25-0.5 s, pitch 0.5-1.2, scan time 3-10 s. A standard dose protocol was carried out as reference at a tube voltage of 120 kV, resulting in a CTDIvol of 1.8 mGy (approx. 1 mSv effective radiation dose). The ULD protocols were acquired at tube voltages of 70 kV, 100 kV with tin filter (100 kV Sn) and 150 kV with tin filter (150 kV Sn). To obtain a defined radiation dose level with a DLP of 7.5 mGy·cm (approx. 0.1 mSv simulated effective radiation dose), reference tube current-time products (mAs) were adjusted by the Siemens automatic exposure control system (CARE Dose 4D) (Table 1). Images of every protocol, including the reference protocol at 120 kV, were reconstructed using advanced modeled iterative reconstruction (ADMIRE) at level 3, performed with a commercially available algorithm (ADMIRE, Siemens Healthcare, Erlangen, Germany). ADMIRE is a model-based iterative algorithm; level 3 refers to the number of iteration cycles and complies with standard image quality. The reconstruction resulted in data sets with a slice thickness of 0.75 mm, a slice increment of 0.6 mm, a sharp tissue kernel (Br69), a matrix of 512 x 512 pixels and a field of view of 330 mm.

Evaluation of image quality

Two blinded readers (radiologists with 2 and 10 years of experience, respectively) evaluated the diagnostic confidence for each nodule on a modified 5-point Likert scale: 1 = non-diagnostic quality, strong artifacts, insufficient for diagnostic purposes; 2 = severe blurring with uncertainty about the evaluation; 3 = moderate blurring with restricted assessment; 4 = slight blurring with unrestricted diagnostic image evaluation possible; 5 = excellent image quality, no artifacts. The Siemens Syngo CT Oncology software tool (Syngo MultiModality Workspace VE36A, Siemens Healthcare, Erlangen, Germany) was employed to measure the maximum diameter of the nodules and to evaluate the image quality in coronal, axial and sagittal multiplanar reformations (MPR). For exact evaluation of the artificial lung nodules, the CT scans were separated into three parts (upper, middle and lower) to depict differences in image quality and artifacts caused by characteristics of the lung phantom, e.g. the artificial diaphragm.

Statistical analysis

All data were recorded in a dedicated database.

Characteristics of artificial nodules

Overall, 214 artificial nodules were created. All nodules with good demarcation, round shape, and a solid character (n = 195) were judged to be typical of small solid nodules such as metastases or lung cancer. Lesions with poorly defined boundaries, a part-solid aspect, or distinct draining of the mixture into a bronchus, blood vessel, or the injection pathway (n = 19) were excluded from evaluation. The mean diameter of the artificial nodules was 7.84 mm ± 5.31 (range 0.9-21.5 mm).

Assessment of diagnostic confidence

Representative images for the reference protocol and the different ULD CT protocols are displayed in the corresponding figure. The subjective impression of the image quality was excellent in images of the reference CT protocol and of all three ULD CT protocols as well. In the reference protocol, every artificial nodule was rated with an excellent score of 5 due to excellent image quality and absence of artifacts. With a score of 4, the ULD CT protocol with 100 kV Sn showed the best image quality.
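The "approx. 0.1 mSv" figure quoted above follows from the standard effective-dose estimate E = DLP × k. The chest conversion coefficient k ≈ 0.014 mSv/(mGy·cm) is a commonly tabulated value and an assumption here, not one quoted by the text.

```python
def effective_dose_mSv(dlp_mGycm, k=0.014):
    """Effective dose from dose-length product: E = DLP * k.
    k ~ 0.014 mSv/(mGy*cm) is the usual chest coefficient (assumed)."""
    return dlp_mGycm * k

print(effective_dose_mSv(7.5))  # 0.105 mSv, i.e. ~0.1 mSv for the ULD protocols
```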
Close to the artificial diaphragm, some of the nodules were rated with a score of 2 or 3 in the 70 kV ULD protocol due to beam-hardening artifacts. The overall diagnostic confidence of pulmonary nodule detection was rated best by the two blinded readers in the ULD CT protocol with 100 kV Sn, with a score of 4.14, and in the 70 kV ULD protocol, with a score of 4.06 (Fig 3, Total). The lowest mean score of 3.78 was obtained with 150 kV Sn, owing to the lowest rating of 1 that was assigned to 52 artificial nodules with strong artifacts and severe blurring of the nodule edges. Among the ULD CT protocols, the confidence of nodule detection was not significantly higher in images acquired at a tube voltage of 100 kV Sn compared with 70 kV (p = 0.244). Compared to the ULD CT protocol with 100 kV Sn, the images acquired with 150 kV Sn showed a significantly lower diagnostic confidence of nodule detection (p = 0.008). Images acquired with the 70 kV ULD CT protocol also showed higher scores than the 150 kV Sn ULD CT protocol, but this did not reach statistical significance (p = 0.08). The median score for ULD protocol 1 (70 kV) was 4.5, for ULD protocol 2 (100 kV Sn) 5.0, and for ULD protocol 3 (150 kV Sn) 4.0; the reference protocol with 120 kV had a median of 5 as well. When separating the lung phantom into three parts (upper, middle and lower third), the best scores for confidence of nodule detection were still achieved using the ULD CT protocols with tube voltages of 100 kV Sn and 70 kV (Fig 3, Upper, Middle and Lower Part). Again, the two protocols did not show a significant difference (upper third: p = 0.439, middle third: p = 0.402, lower third: p = 0.107). As before, the lowest score for diagnostic confidence was reached for the ULD CT protocol with 150 kV Sn, irrespective of the location of the nodules in the lung phantom. The difference in the scores for diagnostic confidence between the 70 kV and 150 kV Sn ULD protocols was significant in the upper part of the lung phantom (p < 0.001) but not in the middle or lower part (p = 0.066 and p = 0.141, respectively). The 100 kV Sn ULD protocol showed significant differences in scores relative to the 150 kV Sn protocol in the upper and lower thirds of the lung phantom (p = 0.016, corrected with Bonferroni's method, and p < 0.001, respectively). In the middle part, the difference between 100 kV Sn and 150 kV Sn did not reach statistical significance (p = 0.021).

Interrater reliability and agreement for diagnostic confidence

Overall, good to excellent interrater reliability for the diagnostic confidence in detection of artificial pulmonary nodules in the ULD CT protocols was achieved, with Cohen's weighted kappa ranging from 0.696 to 0.882 (Table 2). The best interrater reliability for diagnostic confidence over the whole lung phantom was achieved in the 100 kV Sn ULD CT protocol with a Cohen's weighted kappa of 0.798, the worst in the 70 kV ULD CT protocol with 0.775. In the 70 kV and 150 kV Sn ULD CT protocols, exact agreement between the two observers was reached for 87.7% of nodules, whereas in the 100 kV Sn ULD CT protocol the interobserver agreement was 89.7%. For the upper part of the lung phantom, the best interrater reliability was found in the 70 kV ULD CT protocol with a Cohen's weighted kappa of 0.757, the worst in the 100 kV Sn ULD CT protocol with 0.696.
In the middle part of the lung phantom, interrater reliability was best in the 70 kV ULD CT protocol, with a Cohen's weighted kappa of 0.882, and worst in the 150 kV Sn ULD CT protocol, with 0.823. In the lower part of the lung phantom, Cohen's weighted kappa, and thus interrater reliability, was best in the 150 kV Sn ULD CT protocol (0.795) and worst in the 70 kV ULD CT protocol (0.710). For the upper, middle and lower parts of the phantom, interrater agreement was best in the 100 kV Sn ULD CT protocol, with 89.8%, 89.9% and 89.7%, respectively. In the 70 kV ULD CT protocol, interobserver agreement was best in the middle part of the phantom with 87.9% and worst in the lower part with 87.7% (87.8% in the upper part). For the 150 kV Sn ULD CT protocol, interobserver agreement was best in the middle part of the phantom with 87.9%, worst in the upper part with 87.6%, and 87.8% in the lower part.

Discussion

In this study, we assessed the value and diagnostic confidence of ultralow-dose CT for the detection of artificial pulmonary lung nodules, acquired on a third-generation dual-source CT using CARE Dose 4D and advanced modeled iterative reconstruction (ADMIRE). For this purpose, a ventilated ex vivo lung phantom containing artificial solid nodules of various sizes at random distribution was scanned at an effective dose of 1/10th of the low-dose value. Our results indicate that image quality remains at a high level with ultralow-dose scan protocols, and that the diagnostic confidence of artificial pulmonary nodule detection, as well as the interobserver reliability and agreement, were best when using a protocol with a 100 kV tube voltage with tin filtration and the newest generation of iterative reconstruction (ADMIRE). To the best of our knowledge, this is the first study to compare the diagnostic confidence of pulmonary nodule detection across ULD CT scan protocols in a realistic ex vivo lung phantom simulating an overweight patient. The experimental setup allowed identical anatomical conditions to be scanned repeatedly at multiple exposure settings, from standard to ultralow-dose, and with different tube voltages. In previous studies [15,16], the nodule density, lung parenchyma density and noise in the employed chest phantom were similar to those in images of heavy smokers participating in lung cancer screening, which results in a more realistic setting compared to other anthropomorphic chest phantoms. In those studies, the ex vivo lung phantom was scanned on a second-generation dual-source CT with a low-dose protocol and reconstructed by FBP and second-generation IR (SAFIRE) [15,16]. Alkadhi et al. evaluated the image quality and sensitivity of ULD CT using third-generation dual-source CT with tube voltages similar to our study (70 kV, 100 kV Sn and 150 kV Sn) [12]. As opposed to our study setting, they used an anthropomorphic chest phantom simulating an intermediate-sized adult. A common criticism of low-dose studies on phantoms is that they represent an ideal patient; our study instead simulated an overweight patient of about 100 kg for a more realistic experimental setting. In accordance with the study of Alkadhi et al. [12], the diagnostic confidence of lung nodule detection in our study was best in images acquired with the ULD scan protocol at a tube voltage of 100 kV with tin filter.
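The interrater statistics reported above can be computed as follows. The weighting scheme for Cohen's kappa (linear vs. quadratic) is not stated in the text, so linear weights are assumed, and the score lists are hypothetical placeholders for the per-nodule Likert ratings.

```python
from sklearn.metrics import cohen_kappa_score

reader1 = [5, 4, 4, 3, 5, 2, 4]   # hypothetical Likert scores (1-5)
reader2 = [5, 4, 3, 3, 5, 2, 5]

kappa = cohen_kappa_score(reader1, reader2, weights="linear")
exact = sum(a == b for a, b in zip(reader1, reader2)) / len(reader1)
print(f"weighted kappa = {kappa:.3f}, exact agreement = {exact:.1%}")
```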
Unlike Alkadhi et al., we found the lowest diagnostic confidence of artificial lung nodule detection in the ULD protocol with 150 kV Sn, whereas their study rated the ULD scan protocol with 70 kV as of limited value for non-enhanced chest CT. These differences can be explained by the fact that a more realistic lung phantom was used in our study compared to the anthropomorphic chest phantom used by Alkadhi et al. Despite the increased attenuation due to the double chest wall, the quality of images acquired with a 70 and 100 kV spectrum was much better than with the beam-hardened 150 kV (tin filter) spectrum. The electron density of lung parenchyma is probably so low that highly energetic, beam-hardened photons pass through the lung parenchyma, which results in a dramatic loss of contrast. In addition, Compton effects due to the more energetic photons further decrease the quality and the contrast between lung parenchyma and artificial nodule. Martini et al. tested a ULD CT protocol with 100 kV and tin filter on third-generation dual-source CT with an anthropomorphic chest phantom that simulated normal-weight and obese patients, respectively [20,21]. However, they only assessed the sensitivity of nodule detection and the general image quality and did not compare different ULD CT protocols with each other. Another prior lung phantom study showed the feasibility of ultralow-dose CT in lung cancer screening [15], with nearly the same high sensitivity in lung nodule detection compared to the standard dose. However, those images were acquired on a second-generation dual-source CT scanner without an integrated circuit detector or spectral shaping, and only with a tube voltage of 80 kV. The anthropomorphic chest phantom used there presented the characteristics of a 70-kg male individual and lacked realistic lung parenchyma. Nevertheless, an anthropomorphic chest phantom has some advantages, such as reproducible experimental conditions. The disadvantages of the ex vivo lung phantom used in our study are, on the one hand, the sensitivity of the system, because it has to be sealed and stable to create a vacuum for lung inflation; this can only be realized with a double wall, at the expense of attenuation. On the other hand, the double wall cannot exactly simulate a thoracic wall with ribs, and the experimental setting depends on the quality and condition of the inflated pig lung. Recently, Sui et al. [14] reported a high confidence for evaluating lung nodules in ultralow-dose CT in patients at 0.13 mSv, with a tube voltage of 80 kV and a tube current-time product of 4 mAs, reconstructed by second-generation IR (SAFIRE) and FBP, respectively. The use of SAFIRE allows for a reduction of radiation dose by approximately 65% without loss of diagnostic information in low-dose chest CT [22]. The third-generation dual-source CT machine used in our study included the newest iterative reconstruction technique by Siemens, called ADMIRE. This technique includes statistical data modeling in the raw-data domain and combines it with model-based noise detection in the image domain using an iterative approach. ADMIRE shows an image noise reduction of 50% compared to SAFIRE in ultralow-dose chest CT [12]. Additionally, by adding a tin filter, the shape of the applied energy spectrum is significantly modified and less efficient parts of the energy spectrum are removed.
Combining the newest third-generation iterative reconstruction technique with spectral shaping of the high-kV beam allows for dose reduction and leads to ultralow-dose CT protocols with dose levels close to those of conventional chest X-ray, as in our study, without decreasing image quality and diagnostic confidence, despite a chest phantom presenting the characteristics of a 100-kg male individual. Our study has some limitations. First, we have to acknowledge the inherent limitations of a phantom study. Although the attenuation of a porcine lung is close to that of a human lung and the density of the artificial lung nodules is similar to that of real malignant lung nodules, a phantom can never substitute for a real patient with an individual body constitution that influences the effective dose. In addition, our ex vivo lung phantom lacked physiological motion (cardiac or residual respiratory motion). However, repetitive scanning with various CT protocols precludes application in humans for ethical reasons, and the gold standard is missing in vivo. Second, no other tube voltages such as 80, 110 or 130 kV were applied, and there was no comparison of images reconstructed at different ADMIRE strength levels. Furthermore, we used different pitch levels for each ULD CT protocol, which could influence image quality, though this is negligible for a multi-line CT machine (192 detector rows). Finally, we examined ultralow-dose protocols and reconstruction techniques for solid, but not for ground-glass or part-solid nodules as in other studies [7,21]. In conclusion, our study suggests that detecting pulmonary nodules in ultralow-dose chest CT is feasible, and that image quality and diagnostic confidence were excellent in a dedicated protocol with 100 kV with spectral shaping and third-generation iterative reconstruction techniques. Future studies in real patients will have to assess the feasibility of ULD CT screening protocols with the intention of further reducing the effective radiation dose for patients.

Supporting information

S1 Dataset. Nodule sizes and scores of diagnostic confidence of each nodule in the reference protocol and the three different ULD CT protocols. (XLSX)
Unrest at the Nevados de Chillán volcanic complex: a failed or yet-to-unfold magmatic eruption? New eruptive activity at volcanoes that have long been quiescent poses a significant challenge to hazard assessment, as it requires assessing how the situation may develop. Such incipient activity is often poorly characterised, as most quiescent volcanoes are poorly monitored, especially with respect to gas geochemistry. Here, we report gas composition and flux measurements from a new vent at the onset of eruptive activity at the Nevados de Chillán volcanic complex (Chile) in January-February 2016. The molar proportions of H2O, CO2, SO2, H2S and H2 gases are found to be 98.4, 0.97, 0.11, 0.01 and 0.5 mol % respectively. The mean SO2 flux recorded in early February 2016 during periods of eruptive discharge amounts to 0.4–0.6 kg s−1. We show that magmatic gases were involved in this activity, associated with a sequence of eruptions. Tephra ejected by the first blast of 8 January are dominated by lithic fragments of dacitic composition. By contrast, the tephra ejected from a subsequent eruption contains both lithic fragments of dense dacite and a fresher, sparsely vesicular material of basaltic andesite composition. By October 2017, the ejected tephra was again dominated by dense dacitic lithic material. Together with seismic and ground deformation evidence, these observations suggest that a small intrusion of basaltic to andesitic magma at shallow level led to the explosive activity. Our serendipitous survey, right at the onset of eruptive activity, provides a valuable window into the processes of reawakening of a dormant volcano.

Introduction

Nevados de Chillán, a large volcanic complex built in the Southern Volcanic Zone of the Chilean Andes, is formed along a 12 km northwest-trending ridge (Figure 1). It is considered one of the most hazardous volcanoes in Chile due to the proximity of the resort towns of Las Trancas and Termas de Chillán, approximately 10 and 5 km away from the active crater respectively, with permanent populations of 1600 rising to 30,000 during the high season. Whilst Holocene activity is represented by widespread pyroclastic flow and tephra fall deposits around the volcano, lahars associated with snow melt are considered to be the greatest potential hazard [Orozco et al. 2016]. In particular, the Las Trancas valley is covered with repeated sequences of lahar deposits separated by paleosols and intercalated with centimetre-thick pyroclastic flow deposits [Carrasco and Andrés 2012]. The latest hazard map shows high lahar threats extending more than 50 km away from the complex [Orozco et al. 2016]. Of the subcomplexes forming Nevados de Chillán, the Las Termas subcomplex has been the most active. This subcomplex comprises four cones: Nuevo, Arrau, Viejo and Chillán (from north to south; Figure 1). Viejo is the oldest stratocone; it was active from ca 9.3 ka to 2270 BP and is composed of interstratified lavas and pyroclastic units that include prominent densely welded andesite and dacite agglutinate layers [Dixon et al. 1999]. Chillán cone is located south-west of Viejo and partially overlaps it. It is a dacitic cone dominated by lavas intercalated with pyroclastic deposits, and its last eruption took place in 1883 [Brüggen 1948], possibly associated with the collapse of the southern flank of the edifice [Naranjo et al. 2008]. Nuevo and Arrau are dacitic lava cones that formed on top of the older volcán Democrático. Nuevo formed from 1906 to 1943, while Arrau formed from 1973 to 1986 [Deruelle 1977; Dixon et al. 1999; Naranjo et al. 1994].
From August to September 2003, a series of low-magnitude explosive events generated gas and ash columns 400 to 500 m high, leaving a 64 m double crater in the saddle between the Nuevo and Arrau cones [Naranjo and Lara 2004]. On 29 January 2009, the Volcanic Ash Advisory Centre of Buenos Aires reported a small ash column rising 500 m above the volcanic complex. This eruption could not be confirmed by subsequent field observations, which found instead that an unnoticed eruption must have occurred between January and August 2008, producing a new lava field termed Volcán Sebastián, 1 km northeast of the Arrau cone [Naranjo and Moreno 2009].

The 2015-present unrest

In December 2015, the Observatorio Volcanológico de Los Andes del Sur (OVDAS), part of the Servicio Nacional de Geología y Minería (SERNAGEOMIN), changed the warning level from green to yellow following an increase in seismicity observed during the preceding month. On 8 January 2016 at 17:56 local time, an eruption occurred at Nevados de Chillán, producing a small column of ash. On 13 January 2016, the authors observed a new vent, 30 m across (Figure 2A). On 29 January 2016, a second vent, 25-30 m wide, opened, followed by a third in early February 2016, along a NNE trend (Figure 2B-C). The proximity of these vents precluded unambiguous identification of which was responsible for the frequent small eruptions. On 9 May, 8 August and 1 September 2016, three comparatively larger eruptions (in terms of plume height) occurred, producing ash clouds up to 2000 m above the vent and resulting in a widening of the 8 January vent (Figure 2D). A fourth vent opened in October 2016 (Figure 2E), eventually merging with the first and third in March 2017 to produce a large crater more than 100 m across. This is the likely source of the activity occurring at the time of writing (December 2017). From March 2016, incandescence has been sporadically reported during night-time eruptions, and sustained weak incandescence was reported between 25 and 31 March 2017 [SERNAGEOMIN 2017]. All eruptions have been characterised by columns of gases and ash of low height (<2 km), with most of the erupted material deposited within 1 km of the crater.

In-situ gas measurements

Gas composition data were obtained on 13 January 2016, five days after the first eruption. The data were collected using a portable "Multi-GAS" instrument [Shinohara 2005] deployed a few metres downwind of the first vent, which was actively degassing at the time (S36°52'1.64"; W71°22'40.18"; Figure 2A). The instrument incorporated SO2, H2S and H2 electrochemical sensors. The SO2 and H2 sensors have calibration ranges of 0-200 ppmv, while the H2S sensor has a calibration range of 0-100 ppmv. A non-dispersive infrared sensor was used for CO2, calibrated for 0-10,000 ppmv with an accuracy of ±2%. A relative humidity (Rh) sensor (Galltec) was used to measure H2O, providing a measuring range of 0-100% Rh with an accuracy of ±2%. The conversion from relative humidity to water mixing ratio was made following Buck [1981], in which the absolute water mixing ratio H2O (in ppmv) is obtained from the temperature T in °C, the relative humidity Rh in %, and the atmospheric pressure P in mbar. The gas temperature used in this conversion is measured in real time by the Multi-GAS; the pressure is also measured by the Multi-GAS and the average over the measurement period is used.
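The exact form of the Buck conversion used by the authors is not reproduced in the text above, so the following sketch uses one common variant of Buck's (1981) saturation vapour pressure fit over water; the coefficients are therefore an assumption.

```python
import math

def h2o_ppmv(T_c, rh_pct, p_mbar):
    """Relative humidity -> absolute H2O mixing ratio [ppmv].
    Saturation vapour pressure from Buck (1981), over water:
    e_s = 6.1121 * exp(17.502*T / (240.97 + T)) [mbar] (assumed variant)."""
    e_s = 6.1121 * math.exp(17.502 * T_c / (240.97 + T_c))
    e = (rh_pct / 100.0) * e_s        # actual vapour pressure [mbar]
    return 1e6 * e / p_mbar           # mixing ratio [ppmv]

print(h2o_ppmv(10.0, 50.0, 850.0))    # example near-vent conditions
```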
All sensors were housed inside a weatherproof box, with the ambient air sampled via Teflon tubing connected to a HEPA filter fed through an inlet in the box and circulated via a miniature 12 V rotary pump through the sensors. An on-board data-logger captured measurements at a rate of 1 Hz. The complete system was powered by a small (6 Ah) 12 V LiPo battery. Similar systems have now been deployed at many volcanoes, and the system used here is the same as that reported in Moussallam et al. [2016]. All sensors were calibrated in the laboratory at INGV Palermo (in October 2015) with target gases of known amount. The differences in response time of the different sensors were corrected by finding the lag times from correlation analysis of the various time series (see Moussallam et al. [2014] for sensor response times). Post-processing was performed using Ratiocalc [Tamburello 2015]. Below we report H2O, H2 and CO2 mixing ratios after correction for mean ambient air mixing ratios (measured by the Multi-GAS directly prior to entering the plume). The measured H2S mixing ratio is corrected for a laboratory-determined cross-sensitivity to SO2 gas (amounting to 16%). The ultraviolet cameras were coupled to Pentax B2528-UV lenses with a focal length of 25 mm (FOV 24°), and 10 nm full width at half maximum (FWHM) bandpass filters were placed immediately in front of each lens: one filter was centred at 310 nm (Asahi Spectra XBPA310), where SO2 absorbs, and the other at 330 nm (XBPA330), outside the SO2 absorption region [Kantzas et al. 2010; Mori and Burton 2006]. Image acquisition and processing were achieved using Vulcamera [Tamburello et al. 2011]. Every image acquired is saved as a 24-bit Portable Network Graphics (png) file with lossless compression. A set of four SO2 calibration cells (94, 189, 475 and 982 ppm·m) was used to calibrate the apparent absorbance [Kantzas et al. 2010]. Two parallel sections in our data series, perpendicular to the plume transport direction, were used to derive the plume speed (ranging from 3 to 7 m s−1, with an average of 6 m s−1). The data processing was carried out following the protocols outlined in Kantzas et al. [2010].

Tephra analysis

Two tephra samples were collected on 13 January 2016 at S36°52'8.1097"; W71°22'38.1298" and S36°52'24.6853"; W71°22'41.7828". Both samples were collected on snow that was clean before the 8 January eruption and hence originate from ash fall produced by the first eruption. The first sample consists predominantly of fine ash (<63 µm), while the second consists predominantly of coarse ash (<2 mm). A third tephra sample was collected on 3 February 2016 at S36°53.291'; W071°23.606', from a patch of snow that was clean on 13 January 2016; this sample hence originates from ash fall from an eruption that occurred between 14 January and 3 February 2016. A fourth tephra sample was collected on 11 October 2017 at S36°54.808'; W071°29.574', directly from ash fall in the village of Las Trancas. Given that the samples were collected opportunistically, and in very small amounts, no attempt to quantify grain sizes was made. Tephra samples were analysed for bulk composition by ICP-AES (Table 3); interstitial glasses were analysed by electron microprobe using a defocussed beam of 10 µm. To mitigate Na loss, Na was measured first in the analytical sequence, at reduced count times (10 s on peak; 5 s background) at a fixed peak position. Major elements were standardized against rhyolitic (VG568) and basaltic (VGA-99) glasses and pure oxides of Ti, Mn and Cr [Jarosewich et al. 1980].
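The camera calibration and flux integration described above reduce to a linear fit through the calibration cells and a summed column-density transect. The absorbance values below are hypothetical placeholders, and the 2.66 × 10⁻⁶ kg m⁻² per ppm·m mass conversion for SO2 is a standard constant assumed here rather than quoted by the text.

```python
import numpy as np

# Linear calibration of apparent absorbance against the quoted SO2 cells.
cells_ppmm = np.array([0.0, 94.0, 189.0, 475.0, 982.0])
absorbance = np.array([0.00, 0.021, 0.043, 0.105, 0.214])  # hypothetical

slope = np.polyfit(absorbance, cells_ppmm, 1)[0]  # ppm*m per unit absorbance

def so2_flux_kg_s(column_ppmm, pixel_m, speed_m_s):
    """Integrate an SO2 column-density cross-section [ppm*m] along a
    transect perpendicular to transport and multiply by plume speed."""
    ica = np.sum(column_ppmm) * pixel_m * 2.66e-6  # kg per metre of plume
    return ica * speed_m_s
```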
Analytical spots were chosen to avoid microlites, thus capturing evolved interstitial glasses.

Gas composition

We obtained 12 minutes of high-quality Multi-GAS measurements. The acquisition time was restricted by the unstable nature of the vent area (requiring rope access) and the need to limit time spent in a hazardous area. The Multi-GAS measurements are presented in Figure 4, which shows four scatter plots of CO2, H2O, H2S and H2 vs. SO2 mixing ratios in the plume emitted from the first vent. The strong positive co-variations observed between SO2 and the other detected volatiles confirm a single, common, volcanic origin. The gas/SO2 molar plume ratios were obtained from the gradients of the best-fit regression lines. The scatter plots yield a CO2/SO2 molar ratio of approximately 9; the full set of X/SO2 ratios is given in Table 2.

SO2 flux

The ultraviolet cameras were deployed at distances of 2.5 km (3 February 2016), 2.7 km (31 January 2016), 3 km (13 January 2016), 5 km (2 February 2016) and 8 km (30 January 2016) from the source. Weather conditions during these measurements were generally good, with long periods of clear sky, except on 3 February, which was cloudy. Results obtained over these 5 days of measurements indicate a negligible SO2 flux outside eruptive episodes. For instance, on 13, 30 and 31 January there were no eruptions during the observation periods, and the recordings did not register any SO2 release, despite the generally short distances between the UV cameras and the active crater. Eruptions did occur during the measurement periods on 2 and 3 February; however, due to heavy cloud on 3 February, only the data collected on 2 February are useful. On that day, UV camera measurements were taken downwind of the active crater, after most of the ash in the eruptive plume had been deposited. Figure 5 shows the SO2 emission rate obtained during two eruptions on 2 February. The emissions fluctuate between 0.1 and 1 kg s−1 with a mean value of 0.5 kg s−1. These fluctuations of the SO2 emission rate are associated with observable successive pulses during the eruptive discharge period. High-resolution videos of the eruptions on 2 and 3 February are given in the supplementary material.

Tephra composition

Bulk tephra compositions are given in Table 3. Clear differences can be seen between the bulk composition of the tephra emitted during the first eruption on 8 January and that of the subsequent ash fall (occurring between 13 January and 3 February). In the total alkali-silica diagram, the 8 January tephra are dacitic, while the subsequent tephra are andesitic (Figure 6A). When examined with backscattered electrons (BSE), both samples of the 8 January tephra are similar, consisting of glassy, dense material with 30-40% plagioclase microlites (Figure 6B). In all samples, some of the dacitic fragments are bounded by corroded vesicles containing vapour-phase SiO2 polymorphs (Figure 6D). Microstructural phase identification of these SiO2 crystals is beyond the scope of this work, but many of them show the "fishscale cracking" that is diagnostic of α-cristobalite that has undergone volume contraction during the β-α transition [Horwell et al. 2013]. These same ash fragments also have devitrified groundmass containing SiO2 polymorphs (Figure 6D) that we have not structurally identified, but that are similar to groundmass cristobalite in the holocrystalline cores of effusive silicic lava bodies [Schipper et al. 2015].
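The ratio extraction described above is an ordinary least-squares slope with its standard error, which the table caption below also refers to. A minimal sketch, with inputs assumed to be synchronized, air-corrected mixing-ratio arrays:

```python
import numpy as np

def plume_ratio(x_so2, x_gas):
    """Gas/SO2 molar ratio from the gradient of the best-fit line through
    synchronous Multi-GAS mixing ratios, with the standard error of the
    slope. Inputs: numpy arrays of ppmv values."""
    x_so2, x_gas = np.asarray(x_so2, float), np.asarray(x_gas, float)
    slope, intercept = np.polyfit(x_so2, x_gas, 1)
    resid = x_gas - (slope * x_so2 + intercept)
    se = np.sqrt(np.sum(resid**2) / (len(x_so2) - 2)
                 / np.sum((x_so2 - x_so2.mean())**2))
    return slope, se
```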
The small amount of cristobalite (and of SiO2 assumed to be cristobalite) is all intimately bound to glass and/or other crystal phases, and is therefore unlikely to pose any respiratory hazard at Chillán [Horwell et al. 2012]; rather, it is an indicator that at least some of the dacitic ash fragments are derived from lava bodies that were extensively degassed at low pressure under slow-cooling conditions [Schipper et al. 2017]. Interstitial glass in the dacitic material is rhyolitic (Figure 6A). Similar materials are present in the subsequent ash fall, but with an additional component that is moderately vesicular, with fewer microlites, and with interstitial glass of andesitic composition (Figure 6). No alteration products and/or hydrothermal minerals were observed in any of the ash samples.

Magmatic-gas propelled phreatic eruptions

The composition of the gases emitted from the first 2016 vent at Chillán was measured five days after the first eruption. Subsequent images of the summit area show that the vent was later buried by tephra and that passive outgassing had ceased (Figure 2). Assuming a gas mixture at equilibrium, following Giggenbach [1987] and Giggenbach [1996] and using the thermodynamic data of Stull et al. [1969], the gas-melt equilibrium temperature and oxygen fugacity (fO2) can be calculated from the measured redox-sensitive gas ratios. The value of fH2O used here is 0.98, given that at 1 bar the fugacity of a gas is equal to its partial pressure and that PH2O = (Ptot × nH2O)/ntot = 0.98 bar. The dominance of SO2 over H2S and the high equilibrium temperature strongly support a magmatic origin for the gas emitted by the first 2016 vent. The computed equilibrium temperature of 856°C is much higher than the temperature at which scrubbing of magmatic gases by hydrothermal systems is expected to be significant [Gerlach et al. 2008; Symonds et al. 2001], giving confidence that the reported gas composition has not been affected by secondary processes other than cooling. The equilibrium temperature, whilst a minimum estimate of the magmatic temperature, is consistent with equilibrium with a dacitic to basaltic andesite magma at depth. The clearly magmatic composition of the gas at the very onset of eruptive activity implies that the hundreds of small eruptions that have occurred at Chillán since are not driven by a hydrothermal system but are instead propelled by pressurised magmatic gases.

Table 2. X/SO2 molar and mass ratios measured by Multi-GAS and gas composition of the plume at Chillán volcano. Errors are expressed as the standard error of the regression analysis and subsequent error propagation.

Table 3. ICP-AES bulk composition of tephra samples. The first two samples were collected on 13 January and originate from ash fall produced by the first eruption on 8 January 2016. The third sample was collected on 3 February 2016; the date of the eruption producing this ash fall is unknown but constrained to between 13 January and 3 February 2016. The fourth sample originates from ash fall during an eruption on 11 October 2017. Note the significant difference in composition between the third sample and the others.

A large hydrothermal system with hot springs, fumaroles and hot ground is present at the Nevados de Chillán volcanic complex. The most significant fumaroles are located near the village of Termas de Chillán, where water derived from hot springs is used for the spa of the same name. Other nearby locations with hydrothermal activity include the Valle Hermoso and Aguas Caliente sectors.
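The structure of this geothermometric calculation is two mass-action equations in the two unknowns T and log fO2. The logK(T) fits below are placeholders standing in for the Stull et al. (1969) data, which the text does not reproduce, so the numerical output is illustrative of the method rather than of the authors' result.

```python
import numpy as np
from scipy.optimize import fsolve

def logK_h2s_ox(T):   # H2S + 3/2 O2 = SO2 + H2O -- placeholder fit
    return 27194.0 / T - 3.77

def logK_h2_ox(T):    # H2 + 1/2 O2 = H2O -- placeholder fit
    return 12707.0 / T - 2.55

log_r_so2_h2s = np.log10(0.11 / 0.01)   # measured mol % ratios (text)
log_r_h2_h2o = np.log10(0.5 / 98.4)
log_f_h2o = np.log10(0.98)              # fH2O = 0.98 bar, as above

def system(p):
    T, logfO2 = p
    e1 = log_r_so2_h2s - (logK_h2s_ox(T) - log_f_h2o + 1.5 * logfO2)
    e2 = log_r_h2_h2o - (-logK_h2_ox(T) - 0.5 * logfO2)
    return e1, e2

T, logfO2 = fsolve(system, x0=(1100.0, -12.0))
print(f"T = {T - 273.15:.0f} C, log fO2 = {logfO2:.1f}")
```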
In all locations, fumaroles and hot springs have maximum temperatures around 90°C [Berríos Guerra 2015]. Our observations suggest that this large hydrothermal system was largely bypassed by the recent activity.

A shallow-level magmatic intrusion

At the onset of eruptive activity, a single dacitic component was present in the erupted tephra (Figure 6). This probably originates from the Arrau lava cone, made of glassy block-flow medium-Si dacites [66.6-67.6% SiO2: Dixon et al. 1999]. In the following days, a second, basaltic-andesite component appeared in the erupted tephra. It can be difficult to identify juvenile components in the material from small eruptions, which may incorporate a variety of lithics [Pardo et al. 2014], and the probable origin of this second component cannot be readily ascertained, since basaltic andesites are well represented in the stratigraphy of Nevados de Chillán. However, these are much more common at Cerro Blanco than at the currently active Las Termas subcomplex, and we consider it less likely that these fragments are lithics recycled from the pre-existing volcanic edifice, especially considering that they had disappeared from the mix of ejected material by October 2017. Although the fragments are vesicular and appear unaltered (Figure 6C), they cannot be confirmed as juvenile. What is clear is that the gas emitted during the repeated eruptions originates from exsolution from a melt at high temperature (>850°C). These elevated temperatures are consistent with the incandescence observed during night-time eruptions, reported intermittently by OVDAS-SERNAGEOMIN since March 2016. These observations suggest the presence of magma at a shallow level within the edifice. This is corroborated by seismicity recorded by OVDAS-SERNAGEOMIN, with long-period (LP) events located at depths of less than 5 km. The significant tremor registered in the first few days of the new activity suggests magma or fluid motion at that time. Several periods of increased tremor have been recorded since, many temporally associated with explosions and incandescence (Figure 3). Altogether, these observations suggest that the current eruptive activity was triggered by the rise and shallow emplacement of magma, accompanied by exsolution of volatiles triggering explosions that opened a new vent and ejected fragments of the cone. The activity since might have been sustained by periodic small-scale recharge and/or gas exsolution from the cooling and crystallising magma (Figure 7). The surficial expression of the intrusion, with several craters aligned along a straight NNE line (Figure 2B-C), is consistent with diking. The size of the intrusion is unknown, but we note the lack of evidence for deformation, which has been monitored by GPS and tiltmeter stations from OVDAS-SERNAGEOMIN and using interferometric synthetic aperture radar observations (unpublished data). Given the evidence for a shallow intrusion, this tends to suggest that it has a comparatively small volume.

Scenarios for evolution of the eruption

With nearly two years since the onset of activity and with continued daily to monthly eruptive events at the time of writing (December 2017), it remains pressing to address the evolution of this episode of activity. However, our conceptual model for the intrusion (Figure 7) provides little basis for prognostication of how the episode will unfold. Nevertheless, we can pose three scenarios. The first is a gradual or abrupt end of the unrest, with the intrusion stalling at shallow level without a magmatic eruption.
This "failed magmatic eruption" scenario [Moran et al. 2011] is statistically likely, considering that globally, most recorded periods of phreatic eruptions at volcanoes are not followed by magmatic eruptions [Barberi et al. 1992]. Examples of such "failed eruptions" include the 1979-1982 unrest at Mt Ontake, Japan [Oikawa 2008] and the 2006-2007 unrest at Fourpeaked volcano, USA [Gardine et al. 2011]. The second scenario considers a transition to a magmatic eruption. In this case, the intrusion does reach the surface and lava is extruded either effusively or explosively ( (Table 3) and interstitial glass within grains. [B]-[C] BSE images of tephra fragments showing the dacitic material that dominated the 8 January blast material and 11 October material [B], and the andesitic component that is also present in the February blast material [C]. Components are marked glass (gl), plagioclase (pl), pyroxene (px), and oxides (ox). [D] BSE image of SiO 2 polymorph-bearing dacite from 8 January. Note that ash grain is bounded by a SiO 2 -bearing vesicle, around which the walls are corroded. One SiO 2 grain shows fishscale cracking that is diagnostic of cristobalite. Expanded view with high-contrast colour balance shows devitrified groundmass, in which the darkest grey phase is an unidentified SiO 2 polymorph. clude the 1980 eruption of Mt St Helens, preceded by two months of earthquakes and frequent phreatic explosions [Lipman and Mullineaux 1981] and the 1990-1995 eruption at Unzen volcano, where seismicity and phreatic eruptions escalated over a year, culminating in the extrusion of a lava dome and generation of pyroclastic flows [Nakada et al. 1999]. Between these two possibilities lies a third scenario in which activity could continue at a low level for several years. Such prolonged activity, induced by shallow magmatic intrusions, accompanied by magmatic degassing but with limited expulsion of juvenile tephra or lava, has characterized eruptions of Turrialba Conclusions In 2016, we measured the composition of gases emitted at the onset of the ongoing (at the time of writing in December 2017) eruptive episode at Nevados de Chillán. We also measured the SO 2 flux emission during eruptive discharge and the composition of tephra associated with these explosive events. The main conclusions we derive from this study are: 1. Right from the onset, eruptive activity was driven by magmatic gases, although ejecta were dominated by recycled material from the edifice. 2. A shallow magmatic intrusion is likely to be the trigger for the current unrest and may be periodically recharged.
Long-time large-distance asymptotics of the transverse correlation functions of the XX chain in the spacelike regime

We derive an explicit expression for the leading term in the long-time, large-distance asymptotic expansion of a transverse dynamical two-point function of the XX chain in the spacelike regime. This expression is valid for all nonzero finite temperatures and for all magnetic fields below the saturation threshold. It is obtained here by means of a straightforward term-by-term analysis of a thermal form factor series derived in previous work, and it demonstrates the usefulness of the latter.

Introduction

The XX chain is a spin chain with Hamiltonian [15]

H = J \sum_{j=1}^{L} ( \sigma_j^x \sigma_{j+1}^x + \sigma_j^y \sigma_{j+1}^y ) - \frac{h}{2} \sum_{j=1}^{L} \sigma_j^z ,   (1)

where the σ_j^α, α = x, y, z, are Pauli matrices acting on site j ∈ {1, …, L} of an L-site periodic lattice, σ_0^α = σ_L^α. The parameters J > 0 and h > 0 denote the strengths of the spin-spin interaction and of the applied magnetic field. We restrict the magnetic field to values below the saturation threshold, 0 < h < 4J.

In our recent work [9] we derived a novel form factor series for the transverse dynamical correlation function

\langle \sigma_1^- \sigma_{m+1}^+(t) \rangle_T   (2)

of the XX chain in equilibrium with a heat bath at temperature T. It measures the space-time evolution of a local perturbation relating two points at distance m and temporal separation t. Our series originates from a form factor expansion related to the quantum transfer matrix [7]. It can be resummed into a 'Fredholm determinant representation' consisting of a prefactor times a Fredholm determinant of an integrable integral operator [11]. The latter is different from the Fredholm determinant representation derived by Colomo et al. in [5].

For Fredholm determinants and resolvent kernels of integrable integral operators a general method [6] is available that allows one to analyse their asymptotic dependence on parameters. Starting from the Fredholm determinant representation obtained in [5], the authors of [12] applied this 'nonlinear steepest-descent method' to the late-time, large-distance analysis of (2) at a fixed ratio α = m/(4Jt). They found exponential decay, with an amplitude C, an exponent ν and a correlation length ξ that depend on T, h and α; we refer to this asymptotic form as (3) below. The functional dependence differs according to whether α > 1 or α < 1. The former regime, in which the spatial distance exceeds the temporal separation in units of 4J, is called 'spacelike', while the latter is referred to as the 'timelike' regime.

In [12] the authors considered magnetic fields below the saturation threshold, 0 < h < 4J, and obtained explicit expressions for ν and ξ in both the space- and timelike regimes. Later, the 'constant term' C was obtained for h > 4J in [13]. Although the nonlinear steepest-descent method would allow one to calculate C for 0 < h < 4J as well, it seems that nobody has ever attempted to do so. This may be partially attributed to the cumbersome nature of the required calculations.

In this work we reconsider the late-time, large-distance asymptotic analysis of the two-point function ⟨σ_1^- σ_{m+1}^+(t)⟩_T in the spacelike regime. It turns out that the novel thermal form factor series derived in [9] allows us to obtain the asymptotics, including the constant term C, by a rather elementary term-by-term analysis of the series that avoids the use of any Riemann-Hilbert problem.
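For orientation, the object in (2) can be written out explicitly. The display below is the standard definition of a dynamical two-point function at temperature T, stated here under the conventions above (and in units with ħ = 1) rather than copied from the original:

```latex
\langle \sigma_1^- \sigma_{m+1}^+(t) \rangle_T
  = \frac{\operatorname{tr}\bigl\{ e^{-H/T}\, \sigma_1^-\,
          e^{\mathrm{i}Ht}\, \sigma_{m+1}^+\, e^{-\mathrm{i}Ht} \bigr\}}
         {\operatorname{tr}\, e^{-H/T}} ,
\qquad
\sigma_j^\pm = \tfrac12 \bigl( \sigma_j^x \pm \mathrm{i}\,\sigma_j^y \bigr) .
```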
On the other hand, our thermal form factor series can be resummed into a Fredholm determinant representation as well. As we shall see below, this Fredholm determinant representation is rather different from the one of Its et al. [12] in that the term that provides the leading late-time, large-distance asymptotics in the spacelike regime appears to be pulled out as a prefactor. Our finding strikingly resembles in structure the Borodin-Okounkov-Geronimo-Case formula [2,3,8] for a Toeplitz determinant generated by a symbol satisfying the hypotheses of the Szegö theorem.

We should point out that the late-time, large-distance asymptotics considered here does not commute with the low- and high-temperature asymptotics. At any finite temperature the asymptotic decay of the transverse two-point functions is exponential and given by (3). If, however, the temperature is sent to zero first, the correlation functions vary algebraically [14]. We shall consider this limit for the more general XXZ chain in subsequent work. If we send the temperature to infinity first, the behaviour of the correlation functions in the time direction becomes Gaussian [4,16]. We have recently analysed the latter situation in full generality in [11], which is one of two companion papers of this work. In the other one [10] we evaluate the two-point function numerically, for a wide range of temperatures and space-time separations, directly from the novel Fredholm determinant representation.

Thermal form factor series representation

The starting point of our analysis is the thermal form factor series for the transverse two-point function (2) derived in [9]. It is a series of multiple integrals which is most compactly expressed in terms of certain functions characteristic of the XX chain. These are, in the first place, the momentum p and the energy ε of the single-particle excitations of the Hamiltonian, expressed in terms of the rapidity variable λ. Here we choose the principal branch of the logarithm in the definition of the momentum function p(λ), cutting the complex plane from −iπ/2 to zero modulo iπ. Because of the iπ-periodicity of the momentum, which is shared by all other functions in our form factor series, we may think of these functions as being defined on a cylinder of circumference π, which is equivalent to restricting their arguments to a 'fundamental strip' S.

It is easy to see that ε has precisely two roots λ_F^± in S. These roots are called the Fermi rapidities. The value of the momentum function evaluated at the left Fermi rapidity is the Fermi momentum. Using the Fermi rapidities we can represent the energy function in a convenient form. The energy and momentum functions ε and p are real on the lines x ± iπ/4, x ∈ ℝ.

Figure 1: Sketch of the hole and particle contours C_h and C_p.

The one-particle energy determines a further function z. Most of the functions occurring in the form factor series below are defined as integrals over two simple closed contours C_h and C_p, involving p, ε, z and some hyperbolic functions. The 'hole contour' C_h and the 'particle contour' C_p are sketched in Fig. 1. They are defined in such a way that C_h encloses all roots of e^{−ε(x)/T} − 1 located inside the strip −π/4 < Im x < π/4 ('the holes') as well as the left Fermi rapidity λ_F^−, whereas C_p encloses the roots of e^{−ε(x)/T} − 1 inside the strip π/4 < Im x < 3π/4 ('the particles') as well as the right Fermi rapidity λ_F^+. Given these contours we define two Cauchy transforms, one holomorphic for all x ∈ S \ C_h and the other for all x ∈ S \ C_p.
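Since the rapidity parametrization of p and ε is not reproduced above, a consistency check is easiest in the momentum variable. Assuming the conventional normalization of the Hamiltonian in (1), a Jordan-Wigner transformation gives the one-magnon band written below; this is a sketch under that assumption, not a quotation from the original:

```latex
% One-magnon band of the XX chain in the momentum variable q, assuming the
% Hamiltonian conventions of Eq. (1):
\varepsilon_0(q) = h + 4J \cos q , \qquad -\pi < q \le \pi .
% For 0 < h < 4J the band changes sign at the Fermi points
q_F^{\pm} = \pm \arccos\!\left( -\,\frac{h}{4J} \right),
% while for h \ge 4J it is non-negative everywhere, which reproduces the
% saturation threshold h = 4J quoted in the text.
```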
Since the integrands in (11), (12) are rapidly decaying for λ → ±∞, we may deform the contours accordingly. Another function needed below is the square of a generalized Cauchy determinant, denoted D.

After these preparations we can now recall the form factor series derived in [9]. Using the above notation and performing several more or less obvious simplifications, it can be written as a series of multiple contour integrals (15), with a prefactor F(m) defined in (16). The contour C_h′ is tightly enclosed by C_h.

Asymptotics in the spacelike regime

Theorem. In the spacelike regime m > 4Jt the form factor series (15) is absolutely convergent and determines the late-time, large-distance asymptotics of the transverse dynamical correlation function of the XX chain, with leading term (17) and constant term (18).

In preparation of the proof we introduce short-hand notations, among them the scale τ and the function g, with real and imaginary parts u(λ) = Re g(λ) and v(λ) = Im g(λ). Then the 'wave factors' in (15) take the form

e^{± i(mp(λ) − tε(λ))} = e^{∓ iht ± τ g(λ)} .

We will be interested in the asymptotic behaviour of (15) for large positive τ and fixed α > 1. As we shall see below, it is determined by the poles of the integrands at λ_F^±; the saddle points contribute only to the subleading corrections. This becomes clear when we consider the function g close to the lines ℝ ± iπ/4 and on the lines Re λ = ±R for R > 0 large enough.

(ii) Define the oriented contours C_{h,sd} and C_{p,sd}, depending on parameters R, δ > 0. Then R and δ can be chosen in such a way that u(λ) < 0 for all λ ∈ C_{h,sd} and u(λ) > 0 for all λ ∈ C_{p,sd}, while all hole roots lie inside C_{h,sd} and all particle roots lie inside C_{p,sd}.

Since α > 1, there is a unique ϕ > 0 such that α = cth(2ϕ). Using this parameterization we obtain, for any λ = x + iy ∈ S, an explicit expression for ∂_y u(λ). Here the first term in the square bracket is unbounded from above for x → ±∞, implying that the only roots of ∂_y u(λ) in S are at y = 0, π/2 if |x| is large enough. Taking into account (26), we conclude that u(λ) < 0 on the lines ±R + i(−π/4, π/4), while u(λ) > 0 on ±R + i(π/4, 3π/4), if R > 0 is large enough. The statement about the location of the particle and hole roots follows by straightforward inspection of the integrands in (15).
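The parameterization α = cth(2ϕ) used above inverts in closed form: cth(2ϕ) = α is equivalent to ϕ = artanh(1/α)/2 for α > 1. A small numerical check (the function names are ours, chosen for illustration):

```python
import math

def phi_from_alpha(alpha: float) -> float:
    """Unique phi > 0 with coth(2*phi) = alpha, valid in the spacelike
    regime alpha = m/(4*J*t) > 1."""
    if alpha <= 1.0:
        raise ValueError("spacelike regime requires alpha > 1")
    return 0.5 * math.atanh(1.0 / alpha)

def coth(x: float) -> float:
    return math.cosh(x) / math.sinh(x)

for alpha in (1.01, 1.5, 3.0, 10.0):
    phi = phi_from_alpha(alpha)
    # coth(2*phi) should reproduce alpha up to rounding error
    print(f"alpha = {alpha:5.2f}  phi = {phi:.6f}  coth(2 phi) = {coth(2 * phi):.6f}")
```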
Proof of the Theorem. The function D({x_j}_{j=1}^n, {y_k}_{k=1}^{n−1}) is symmetric separately in all x_j and in all y_k, and it vanishes if x_j = x_k or y_j = y_k for some j ≠ k. Setting

V_h(x) = Φ_p(x) e^{i(mp(x) − tε(x))} / [πi (e^{ε(x)/T} − 1)] ,
V_p(y) = e^{−i(mp(y) − tε(y))} / [πi Φ_h(y) (1 − e^{−ε(y)/T})]   (31)

and using the above lemma, we obtain the contour-deformed representation (32). Here C_{h,sd} and C_{p,sd} are the contours introduced in (22). Notice that we consider res(dx V_h(x), x = λ_F^−) as a functional acting on functions f holomorphic in a disc D_ε(λ_F^−) of sufficiently small radius ε centred about λ_F^−, and similarly for res(dy V_p(y), y = λ_F^+). In particular, equation (32) implies a decomposition (35) in which four series S_j(m, t), j = 1, …, 4, appear.

In order to show the convergence of these series and to estimate their asymptotic behaviour, we have to establish bounds on the individual terms. We start with the functions D({x_j}_{j=1}^n, {y_k}_{k=1}^n) and recall the Hadamard bound for the determinant of an n × n matrix,

| det_{j,k=1,…,n} (a_{jk}) | ≤ n^{n/2} ∏_{j=1}^{n} max_{k=1,…,n} |a_{jk}| .   (38)

Since the contours C_{h,sd} and C_{p,sd} are finite and disjoint, we can use (38) to estimate the determinant factors. As follows from the above lemma, there exist κ, c > 0 bounding the wave factors on the shifted contours. With this we obtain a bound on every individual term in the series S_1, for some constant C_1 > 0. This implies the absolute convergence of the series S_1 and shows that, asymptotically for large τ, the series behaves as in (44). In a similar way one obtains analogous bounds with constants C_j > 0, j = 2, 3, 4. It follows that the series S_j, j = 2, 3, 4, converge absolutely, and their asymptotic behaviour (46) can be determined as well. Inserting (44), (46) into (35) and recalling the explicit form (16) of F(m), we arrive at the statement of the theorem.

The theorem fixes the constant term of the asymptotics in the spacelike regime that remained undetermined in [12]. Note that the factor evaluated at λ_F^− can easily be calculated explicitly; for the other factors composing the constant C(T, h) we have not found any further simplification so far.

Discussion

For the interpretation of our result we would like to recall a Fredholm determinant representation of the transverse two-point function (2) that was obtained in [11], where it was used for the asymptotic analysis of the correlation function in the high-temperature limit. Referring to [11] we define the function ϕ(x, y) = e^{y−x} sh(y − x) together with a companion function, and using these we define two integral operators V and P acting on functions on the contour C_p. Then (cf. [11]) the transverse correlation functions of the XX chain admit the Fredholm determinant representation (51). Comparing with the asymptotic behaviour of the correlation function in the spacelike regime m > 4Jt, we see that the leading asymptotics appears as a prefactor, meaning that the Fredholm determinant collects the higher-order corrections to the main asymptotics. This is the analogy with the Borodin-Okounkov-Geronimo-Case formula [3,8] mentioned in the introduction.

On the level of the Fredholm determinant representation it is easiest to compare our result with that of Its, Izergin, Korepin and Slavnov [12]. For this purpose we rewrite their integral operators, acting on functions on the unit circle, as integral operators acting on functions in the rapidity plane. This is achieved by applying the map z → e^{ip(λ)} to the Fredholm determinant representation in [12]. One then arrives at the representation (53), in which W is an integrable operator and Q is a one-dimensional projector. Comparing (51) and (53) we see that in (53) the late-time, large-distance asymptotics is entirely inside the Fredholm determinants and is therefore harder to analyse.

The fact that the late-time, large-distance asymptotic behaviour of the transverse dynamical correlation functions of the XX chain, including the constant term, can be obtained directly from the series representation (15) raises a number of interesting questions. (i) Is a similar analysis possible for the XXZ quantum spin chain? Unlike for the XX chain treated in this work, no Fredholm determinant representation of its two-point functions is expected to exist, but a thermal form factor series similar to (15) is still available [9]. As the structure of the saddle-point equations is very similar, there seems to be a good chance that the answer will turn out to be positive. (ii) What can be done in the timelike regime? Here all terms in the series (15) contribute to the late-time, large-distance asymptotics, and a further resummation would be necessary. Can we devise a method to find such a resummation?

We would like to close with two remarks. First, in our recent work [10] we compared the asymptotic formula of our theorem with a numerical evaluation based on the Fredholm determinant representation (51). As should be clear from the fact that the corrections are exponentially small for large m and t, the asymptotic formula turns out to be very efficient. For an example see Fig. 2.
Second, the constant term C(T, h), equation (18), does not depend on α. For this reason it should agree with the constant obtained by Barouch and McCoy [1], in the form of infinite double products, in their analysis of the static correlation functions (see equations (3.17)-(3.19) of their paper). We have numerical evidence that this is indeed the case.
Millimeter-Wave Band Electro-Optical Imaging System Using Polarization CMOS Image Sensor and Amplified Optical Local Oscillator Source

In this study, we developed and demonstrated a millimeter-wave electric field imaging system using an electro-optic crystal and a highly sensitive polarization measurement technique based on a polarization image sensor fabricated in a 0.35-µm standard CMOS process. The polarization image sensor is equipped with differential amplifiers that amplify the difference between the 0° and 90° pixels; with these amplifiers, the signal-to-noise ratio at low incident light levels is improved. In addition, an optical modulator and a semiconductor optical amplifier were used to generate an optical local oscillator (LO) signal with high modulation accuracy and sufficient optical intensity. By combining the amplified LO signal with the highly sensitive polarization imaging system, we successfully performed millimeter-wave electric field imaging with a spatial resolution of 30 × 60 µm at a rate of 1 FPS, corresponding to 2400 pixels/s.

Introduction

The visualization of the electric fields of millimeter waves is of considerable importance. This band is used for communication in 5th-generation mobile communication systems. In the research and development of wireless communication devices, design, prototyping, performance testing, and assessment of compliance with radio-frequency exposure guidelines are repeated until the desired performance is achieved. Electric field imaging is expected to contribute significantly to improving the efficiency of operational diagnostics and of the electromagnetic field (EMF) compliance assessment of wireless communication devices.

In high-frequency electromagnetic field measurements, probes using antennas and coils have been widely employed, but their metal elements and wiring are highly invasive and interfere with the measurement target [1-4]. Sensors with an array of antennas have also been studied [5,6]; such a sensor can capture an electric field image in a single acquisition, but it is difficult to measure the near field for the same reason. Therefore, near fields cannot be imaged directly and must instead be inferred by simulation using electric field data measured from a distance [7]. In this approach, the simulation time depends on the resolution of the simulation model. Furthermore, measurement using an antenna probe requires 40 min to 2 h of scanning per measurement, making real-time evaluation impossible [8-10].
On the other hand, probes using electro-optic (EO) crystals based on high-frequency photonics enable minimally invasive and broadband measurements [11-15]. This approach provides a highly sensitive measurement of the birefringence change of EO crystals induced by electric fields. A method of obtaining electric field images by scanning a probe with an EO crystal attached to the end of an optical fiber has been proposed. It enables electric field imaging with a high SNR because the amount of light that can enter the EO crystal is very high [16,17]. However, it is very slow, owing to the mechanical scanning required, with an imaging speed of about 3 pixels/s [18]. As a method that does not require mechanical scanning, scanning the light with a galvano mirror to capture the distribution of electric fields in an EO crystal has been proposed; this method is capable of relatively high-speed imaging at 125 pixels/s [19]. However, faster imaging is required to observe signals that change in the time domain. The method combining EO crystals and image sensors requires neither mechanical nor optical scanning and enables the acquisition of the electric field distribution in a single shot [20,21]. In this method a single image is captured in a few seconds, which is very fast, but the optical limitations of the image sensor place a low upper limit on the amount of light that can illuminate the EO crystal, which leads to poor sensitivity.

To detect weak polarization changes, it is necessary to limit the light intensity incident on the polarization image sensor to avoid pixel saturation, whereas it is necessary to illuminate the observation target with a large light intensity. In our previous works, we proposed and demonstrated a method for measuring weak polarization changes using a polarization image sensor with a double-polarizer structure [20-22].

In this study, we set up a millimeter-wave-capable light source composed of an optical modulator and a semiconductor optical amplifier (SOA). Increasing the amount of light irradiating the EO crystal by optical amplification made it possible to improve the electric field measurement sensitivity, building on the polarization-change enhancement provided by the double-polarizer structure. In addition, a differential amplification circuit was designed and included in the polarization image sensor to improve the signal detection performance; the in-chip differential amplification widened the usable sensitivity range of the sensor. We also demonstrated imaging of the millimeter-wave near field used in the 5th-generation mobile communication system by combining these techniques. We successfully performed 30 GHz electric field imaging with a spatial resolution of 30 × 60 µm at a rate of 1 FPS, corresponding to 2400 pixels/s.

Measurement Principle

Figure 1 shows a conceptual diagram of the electric field imaging system. In the EO measurement, electric field information is obtained by detecting the polarization change of light passing through an EO crystal, in which the birefringence is changed by the applied electric field owing to the first-order electro-optic effect.
In our previous works, to detect polarization changes with a high sensitivity, we proposed a method in which a polarization image sensor with on-pixel polarizers is combined with a signal-selective polarizer, as shown in Figure 1 [20-22]. Generally, the extinction ratio of on-pixel polarizers is lower than that of uniform polarizers [23-25]. With the 0.35-µm CMOS process, on-pixel polarizers cannot be fabricated with a pitch finer than the wavelength. When the polarizer pitch is wider than the wavelength, polarization is converted to light intensity through light diffraction. Such a grating-based on-pixel polarizer therefore has polarization transmission characteristics different from those of a normal wire-grid polarizer: a wire-grid polarizer transmits the polarization component perpendicular to its grid, whereas an on-pixel polarizer acting as a diffraction grating shows the opposite behaviour, transmitting most strongly when the grid is parallel to the polarization. In general, the extinction ratio of such a grating-type polarizer is lower than that of a wire-grid polarizer. To address this issue, the role of the polarizer is divided in this setup between "modulation enhancement" and "conversion of polarization modulation into light intensity modulation". The former is achieved using a uniform polarizer, referred to in this paper as the signal-selective polarizer; the latter is achieved using the on-pixel polarizers. In this method, two types of on-pixel polarizers, 0° and 90°, are placed on adjacent pixels of the polarization image sensor, and the signal-selective polarizer is oriented at ±45° with respect to these on-pixel polarizers. The signal-selective polarizer strongly attenuates the incident linear polarization, whereas the orthogonal polarization component generated by the birefringence of the EO crystal passes with a high transmittance. As a result, the total light intensity is considerably reduced and the degree of polarization change is enhanced.

The frequencies of micro- and millimeter waves are much higher than the frame rate of the image sensor. Therefore, frequency conversion by the optical heterodyne method is applied to enable measurement with an image sensor. In this technique, the EO crystal works as an optical mixer: the frequency component of the electric field f_RF and that of the intensity-modulated light f_LO are mixed, and the intermediate frequency component f_IF = |f_RF − f_LO| is generated. When the modulation amplitude of the incident light is constant, the amplitude of the intermediate frequency (IF) signal is proportional to the amplitude of the electric field. By setting a sufficiently low f_IF, it is possible to measure the signal with an image sensor. The signal sources driving the MZM, the RF signal source, and the image sensor clock source are all synchronized by a 10 MHz reference signal; therefore, the intermediate frequency and the image sensor frames are synchronized. As the image sensor was operated at 360 FPS, the intermediate frequency was set at 90 Hz, i.e. 1/4 of the frame rate.
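The effect of the signal-selective polarizer can be illustrated with a small Jones-calculus model. The sketch below compares the transmission of the unmodulated carrier polarization with that of the orthogonal component generated by the EO crystal, for a polarizer set slightly off the fully crossed position; the retardation and offset angle are illustrative assumptions, not the parameters of the actual instrument.

```python
# Sketch: carrier suppression vs. signal transmission for a nearly crossed
# "signal-selective" polarizer, in Jones calculus. All values are assumed.
import numpy as np

def polarizer(theta):
    """Jones matrix of an ideal linear polarizer, transmission axis at theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

delta = 1e-3                          # assumed small EO phase retardation [rad]
E_carrier = np.array([1.0, 0.0])      # input polarization along x
# An EO retarder with axes at 45 deg turns (1, 0) into (cos d/2, i sin d/2),
# so the EO effect creates a small orthogonal (y) component:
E_signal = np.array([0.0, np.sin(delta / 2)])

P = polarizer(np.pi / 2 + 0.01)       # 0.01 rad away from fully crossed (assumed)

t_carrier = np.linalg.norm(P @ E_carrier) ** 2 / np.linalg.norm(E_carrier) ** 2
t_signal = np.linalg.norm(P @ E_signal) ** 2 / np.linalg.norm(E_signal) ** 2

print(f"carrier transmission : {t_carrier:.2e}")  # ~1e-4, carrier suppressed
print(f"signal  transmission : {t_signal:.2e}")   # ~1, EO component passes
print(f"relative modulation enhanced by ~{t_signal / t_carrier:.0f}x")
```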
Setup of Electric Field Imaging System

Figure 2 shows the setup of the electric field imaging system. For single-point EO measurements, the 1.55 µm band is often used. However, image sensors are usually made of Si and are sensitive to light from the visible to the near-infrared range up to about 1 µm. Therefore, a single-wavelength CW laser (DBR785P, Thorlabs, Newton, NJ, USA) with a wavelength of 785 nm was used as the light source in this system. In this wavelength band, waveguide-based EO modulators and SOAs are available. In this study, a LiNbO3-based Mach-Zehnder modulator (MZM) (NIR-MX800-LN-20, iXblue Photonics, Paris, France) is used for optical intensity modulation.

Another important device is the SOA. The intensity that can be launched into an MZM in this wavelength range is limited to several mW owing to the photorefractive effect. In this work, the light intensity was increased by introducing an SOA. As shown in Figure 2, after passing through an optical isolator, the modulated light is amplified by a high-gain SOA (SOA-780-20-YY-30dB, Innolume, Dortmund, Germany) and enters the optical imaging system. The combination of the MZM and the SOA enables the generation of high-frequency, high-intensity optical LO signals. The maximum output power from the SOA was 41.5 mW. The light was collimated to a beam diameter of 7.5 mm and launched into the optical system. When imaging was performed under these conditions, the output of the image sensor was about 15% of the pixel saturation level. As there is still some margin in the light-receiving level of the image sensor, the SNR could be further improved by using a higher-gain SOA. In this optical system, the polarization beam splitter works as the signal-selective polarizer. Quarter-wave plates and half-wave plates were used to adjust the polarization state. The EO crystal used as the electric field probe was (100)-ZnTe, which responds to the electric field perpendicular to its plane [26]. In this setup, the EO crystal was placed directly on top of, and in contact with, the device under test (DUT) for near-field imaging. To reduce the invasiveness to the near field or to avoid physical interference, a floating arrangement of the EO crystal is also possible; however, the spatial resolution and field signal strength are then reduced. The proposed method enables the visualization of millimeter-wave electric fields with a high sensitivity by combining a highly sensitive polarization imaging system with an electric field imaging method based on the EO effect.
Specifications

An image sensor equipped with on-chip polarizers was fabricated using a 0.35-µm 2-poly 4-metal standard CMOS process. Table 1 shows the sensor specifications. Figure 3 shows (a) a photograph of the fabricated polarization image sensor chip, (b) a block diagram of the image sensor, (c) the pixel layout, and (d) a cross-sectional view of the pixel structure. To achieve the configuration shown in Figure 1, adjacent pixels are equipped with two types of polarizers that are mutually orthogonal. The on-pixel polarizers are designed with the second (M2) and third (M3) metal wiring layers of the CMOS process to achieve a high extinction ratio; the polarizer patterns of these two layers are identical. This is different from our previous work [22]. For high-sensitivity electric field imaging, the image sensor with the best performance in this wavelength band should be used. As described in the previous section, we chose the 780 nm wavelength band for electric field imaging, which is detectable by Si-based image sensors and compatible with high-speed optical modulators and amplifiers. By taking the difference between 0° and 90° pixel pairs, differential detection can be achieved by the sensor itself, which reduces the common-mode noise and improves the signal-to-noise ratio (SNR). In our previous works, the data processing to obtain the differential signals was performed on a computer; in this chip, differential amplifiers were integrated instead.

Pixel Characteristics

Signal-to-Noise Ratio

Because the signal intensities obtained by EO measurement were very low, an image sensor with a high SNR was required to detect weak polarization changes. The SNR measurement results are shown in Figure 4a. We performed imaging using a 14-bit ADC in the 0 to 2 V range. The measured maximum SNR was approximately 61 dB. This result indicates that an optical signal change of less than 0.1% could be detected when the incident light level was sufficient.

Extinction Ratio of On-Pixel Polarizers

We evaluated the extinction ratio characteristics of the dual-layer structure of the on-pixel polarizers. We compared three types of metal-grid polarizers: a polarizer using only the first layer (M1), a polarizer using M1 & M2, and a polarizer using M2 & M3, which was not investigated in our previous work [22]. A polarizer (LPVIS100-MP2, Thorlabs) was mounted on the spectrometer and the extinction ratio at each wavelength was measured. The results are shown in Figure 4b. The extinction ratio of the structure using M2 & M3 exceeded 2.5 over a wide wavelength range from 690 to 810 nm, with a peak extinction ratio of 3.27 at 780 nm. The extinction ratio spectra depend on the metal layer combination, probably because of differences in the resonance condition between the metal layers. Compared with our previously fabricated image sensors, the extinction ratio was improved by increasing the number of polarizer layers from one to two, as well as by changing the arrangement of the two polarizer layers [20-22].
Differential Amplifier Performance

In addition to the normal pixel output, this chip integrates a differential amplifier that amplifies and outputs the difference between the 0°- and 90°-pixel outputs. The circuit diagram of the differential amplifier mounted on the sensor is shown in Figure 5. The bias component due to the difference in characteristics between the 0° and 90° pixels was corrected by adjusting the pixel reset voltages for the 0° and 90° pixels. One differential amplifier was placed for each pair of rows, and the entire chip contains an array of 40 differential amplifiers. The differential amplifier is a switched-capacitor circuit with capacitors chosen to provide a gain of ×5. In our previous system, the subtraction between columns was done in software after image acquisition; performing this process in-chip reduces noise.

To examine the performance of the differential amplifier, we carried out electric field imaging and compared the results with those obtained by the conventional software subtraction method. The system shown in Figure 1 was used, and the DUT was a microstrip line. The frequencies were set to f_RF = 100 MHz + 90 Hz and f_LO = 100 MHz; thus, the IF was 90 Hz. Measurements were taken at 360 frames/s, and 10,000 frames were acquired. The results were averaged over 2 × 2 polarizer-pixel pairs on the line with the highest electric field intensity. Here, the electric field intensity was obtained by fast Fourier transform (FFT). The modulated light intensity was set to 29.4 mW (high) or 1.2 mW (low). The results are shown in Figure 6a,b.

When the incident light level was 29.4 mW, the differential amplifier and the pixel-output-difference methods showed similar SNRs. Next, the incident light was set to 1.2 mW, corresponding to a 14 dB reduction. As shown in Figure 6b, the SNR of the differential amplifier setup was reduced by 14.1 dB, almost the same amount as the light intensity reduction, whereas the SNR calculated from the pixel output difference was reduced by 16.4 dB. Signal amplification by the differential amplifier thus reduced the effect of noise from the subsequent readout circuitry and relaxed the light-intensity requirement for detecting EO signals. The electric field imaging system assumes measurement in a region where the incident light intensity is sufficiently high, i.e. where the photon shot noise exceeds the readout noise. However, in actual optical systems, even if sufficient light intensity is obtained near the center, it may be insufficient at the periphery of the imaging range. Expanding the range of acceptable light intensity relaxes the optical design requirements for field imaging systems and allows the use of smaller optics. In addition, as differential detection is performed inside the image sensor, the required ADC speed is halved, relaxing the requirement on ADC performance. With a small EO crystal, as in this case, a non-uniform distribution of the irradiated light intensity is not a significant problem, but it is expected to be more pronounced when large EO crystals are used for large-area field imaging. At low light levels, the noise associated with signal readout became dominant, resulting in a low SNR. The differential amplifiers improved the SNR by amplifying the signal within the image sensor chip. Furthermore, by acquiring the difference between the 0° and 90° pixels, the common-mode noise component due to variations in
incident light intensity could be greatly reduced, enabling imaging with a high SNR [20].

Demonstration of 28 GHz Microstrip Line Electric Field Imaging

Optical LO Signal Source

Two methods of generating local-oscillator modulation [27] in the 28 GHz millimeter-wave band were investigated: one is to supply a 28 GHz signal to the optical modulator and perform double-sideband (DSB) modulation to produce 28 GHz intensity-modulated light; the other is to supply a 14 GHz signal to the optical modulator and perform double-sideband suppressed-carrier (DSB-SC) modulation by adjusting the bias point, producing 28 GHz modulation with the ±1st-order sidebands.

Figure 7a,b show the results of DSB modulation at 28 GHz and DSB-SC modulation at 14 GHz, respectively. The CW laser output was modulated by the MZM, passed through the optical isolator, and amplified by the SOA; the modulation condition was then measured with an optical spectrum analyzer. Both DSB and DSB-SC modulation were measured with the same SOA gain. The wavelength resolution was set to 0.01 nm, which corresponds to approximately 5 GHz. The 28 GHz DSB modulation results showed sidebands to the left and right of the carrier at positions corresponding to 28 GHz. For the 14 GHz DSB-SC modulation, the carrier component was well suppressed compared with the sideband components, and the spacing between the sidebands corresponded to 28 GHz. Theoretically, the two modulation schemes provide similar signal strengths under optimal modulation conditions. However, in this study, the 3 dB modulation bandwidth of the optical modulator was 20 GHz, making it difficult to achieve sufficient modulation at 28 GHz with DSB. In addition, the double-polarizer optical system used in this study is characterized by its ability to avoid pixel saturation of the image sensor and to achieve a high sensitivity using high-intensity optical LO signals. Different from the previous study [27], we introduced the SOA in this study to take advantage of this feature. In DSB-SC modulation, even if the modulation signal intensity is somewhat below the optimum point, the ±1st-order sidebands remain the largest wavelength components and a high modulation depth is obtained. Therefore, efficient optical amplification was possible, which is well suited to this system.
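The advantage of DSB-SC generation can be reproduced with a toy model of the MZM transfer function, E_out ∝ cos((m sin ω_m t + φ_bias)/2): at quadrature bias the intensity beats at the drive frequency, while at the transmission null the carrier is suppressed and the ±1st-order sidebands beat at twice the drive frequency, so a 14 GHz drive can produce a 28 GHz optical LO. The drive parameters below are illustrative assumptions.

```python
# Sketch: intensity beat frequency of an MZM at quadrature vs. null bias.
import numpy as np

fs = 512.0          # samples per drive period (time in units of 1/f_m)
n = 4096            # 8 full drive periods, so FFT bins land exactly on f_m
t = np.arange(n) / fs
m = 0.6             # assumed phase-modulation depth [rad]

def mzm_field(bias):
    """Optical field at the MZM output, E ~ cos((m*sin(2*pi*t) + bias)/2)."""
    return np.cos((m * np.sin(2 * np.pi * t) + bias) / 2.0)

for name, bias in [("DSB    (quadrature bias)", np.pi / 2),
                   ("DSB-SC (null bias)      ", np.pi)]:
    intensity = mzm_field(bias) ** 2
    spec = np.abs(np.fft.rfft(intensity - intensity.mean())) / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)   # in units of f_m
    k = np.argmax(spec)                    # strongest intensity component
    print(f"{name}: dominant intensity beat at {freqs[k]:.1f} x f_m")
# Expected output: quadrature bias -> 1.0 x f_m; null bias -> 2.0 x f_m.
```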
Imaging Target and Imaging Setup

The DUT was a microstrip line formed on a high-dielectric-constant substrate with a dielectric constant of approximately 10.2. The EO crystal in this experiment was 3 × 3 × 0.3 mm (100)-ZnTe. The RF signal input to the DUT was a 28 GHz + 1 kHz signal with AM modulation at 910 Hz, because the setting resolution of the signal generator was 1 kHz; with this setting, a frequency component at 28 GHz + 90 Hz was generated. The optical LO signal was set at 28 GHz. The IF signal at 90 Hz was observed using the image sensor at a frame rate of 360 frames/s. Data from the differential amplifier output for 3600 frames were processed to obtain the electric field image. It took 10 s to capture one electric field image, i.e. 240 pixels/s. The IF component at 90 Hz in each pixel was extracted by FFT, and by combining the FFT results, the intensity and phase distribution images were reconstructed.

Imaging Performance Comparison between DSB-SC Modulation and DSB Modulation

Electric field imaging was carried out with the setup shown in Figure 8a. An EO crystal with a high-reflection coating on the bottom and an anti-reflection coating on the top was placed on the DUT. Figure 8b shows a photograph of the microstrip line used as the DUT; the line width of the sample was approximately 0.56 mm. The intensity images obtained with 28 GHz DSB modulation and 14 GHz DSB-SC modulation are shown in Figure 8c,d, respectively. The imaging results of the two modulation methods are almost the same. The electric field imaging results were normalized to display the maximum value as 0 dB; comparison against a calibrated EO probe would make it possible to convert the results to electric field strength and power density values. The EO crystal used in this experiment ((100)-ZnTe) is sensitive to the electric field vertical to the DUT [26]. The electric field spreads upward from the microstrip line, swings around, and returns to the ground plane on the back of the PCB. Accordingly, we confirmed that the electric field intensity was highest directly above the line, lowest at the edge of the line, and low outside the line, and that the phase difference between the inside and outside of the line was π rad. This is in agreement with previous reports [28,29]. The DUT in this study caused signal reflections at the connector ends, and standing waves were observed on the line. Wavelength shortening occurred inside the substrate owing to its high dielectric constant of approximately 10.2. The wavelength-shortening ratio was about 0.36, so the signal wavelength at 28 GHz was estimated to be about 3.85 mm; half of this wavelength, about 1.93 mm, almost matches the wavelength of the standing wave in the image.

The intensity profiles along the black dotted lines in Figure 8c,d are shown in Figure 9. The results show that a higher SNR was achieved with DSB-SC modulation. This may be due to the lower modulation level of DSB, as seen from the optical spectrum in Figure 7. When observing higher frequencies, the optical modulator and amplifier need to be compatible with those frequencies; in this study, an optical intensity modulator designed for 20 GHz was used with DSB-SC modulation at 14 GHz to observe 28 GHz. The results suggest that optical LO signal generation using the harmonics of the optical modulator can be extended to even higher frequencies.

Next, electric field imaging of a 30 GHz patch antenna was demonstrated. The DUT was a patch antenna fabricated on a high-dielectric-constant substrate with a dielectric constant of approximately 10.2, as shown in Figure 10a. This patch antenna is designed for 30 GHz. The EO crystal in this experiment was 3 × 3 × 0.1 mm (100)-ZnTe. The use of a thin crystal reduces the optical path length, which decreases sensitivity but improves spatial resolution. The RF signal was set to 30 GHz + 90 Hz using a frequency doubler, amplified to approximately 28 dBm by an RF amplifier, and input to the DUT. Different from the previous experiment, the signal source used here has an upper frequency limit of 20 GHz but can be tuned in 0.01 Hz steps. By inputting a 15 GHz signal to the MZM, 30 GHz DSB-SC modulation was performed to generate the optical LO signal. The IF signal at 90 Hz was observed using the differential amplifier output of the image sensor at a frame rate of 360 frames/s. Figure 10b-f show the results of performing the FFT with different numbers of frames and reconstructing the electric field image. Figure 10b shows the intensity distribution and phase distribution of the electric field image reconstructed from 360 frames, while Figure 10c-f show the electric field intensity distributions calculated from 3600, 720, 180, and 80 frames. It can be seen that the noise level decreases as the number of frames increases, resulting in a clear
image of the electric field. Given the frame rate of the image sensor and the upper limit of the light output of the SOA, the reconstruction from 360 frames in this setup was considered a good balance between image quality and imaging speed. Each electric field image then takes 1 s, which enables electric field imaging at a rate of 2400 pixels/s. This imaging speed is much faster than that of electric field imaging systems that use mechanical or optical scanning; compared with the previously reported optical scanning system, the acquisition speed of our system is 19 times faster [19]. The spatial resolution of mechanical scanning is determined by the probe size; for optical scanning, it is determined by the optical scanning resolution, represented by the beam diameter and the angular resolution of the galvano mirror. In our imaging setup, the spatial resolution was approximately 30 × 60 µm.

In this setup, the irradiated light intensity was about 15% of the pixel saturation level, leaving room for improving the sensitivity by increasing the light intensity. Therefore, using a higher-gain optical amplifier, such as a tapered SOA, would improve the SNR and enable even faster imaging. In addition, an increase in the image sensor frame rate would increase the number of images accumulated per unit time, which would improve the SNR and raise the electric field imaging frame rate. Furthermore, a higher frame rate shortens the exposure time, allowing the image sensor to accept a larger amount of incident light; when adjusted to the same signal level, this significantly improves the SNR, enabling faster and more sensitive imaging. The intermediate frequency could also be raised as the imaging frame rate increases. Owing to the imaging system, the intermediate frequency was set at the very low value of 90 Hz, but this frequency band is highly susceptible to mechanical noise; more stable field imaging is possible by increasing the intermediate frequency to several kHz to several MHz [30]. Such high-speed imaging is difficult to achieve simply by improving the image sensor clock, but it can be achieved by using an image sensor that stores pixel outputs [31-34]. We have studied raising the intermediate frequency by controlling the exposure time while keeping the frame rate of the image sensor the same, but this technique took a long time to capture a single electric field image [35]. In addition, a resonant structure of the EO crystal can dramatically improve the sensitivity, although the measurable frequency bandwidth becomes narrower [29,36].
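The per-pixel reconstruction described above reduces to one FFT along the frame axis and a single-bin readout. The sketch below illustrates this for a one-second stack at 360 FPS with a 90 Hz IF, matching the numbers in the text; the array names and the synthetic input are our own assumptions, standing in for real differential-amplifier output.

```python
# Sketch of per-pixel IF extraction: frames at 360 FPS, IF at 90 Hz (= fs/4).
import numpy as np

fs, f_if = 360.0, 90.0            # frame rate [FPS] and intermediate frequency [Hz]
n_frames, h, w = 360, 40, 60      # one-second stack; 40 x 60 = 2400 pixels

# Synthetic stack: a 90 Hz beat with pixel-dependent amplitude/phase + noise.
t = np.arange(n_frames) / fs
amp = np.random.rand(h, w)
phase = np.random.uniform(-np.pi, np.pi, (h, w))
frames = (amp[None] * np.cos(2 * np.pi * f_if * t[:, None, None] + phase[None])
          + 0.05 * np.random.randn(n_frames, h, w))

# FFT along the time axis; read the 90 Hz bin for every pixel at once.
spec = np.fft.rfft(frames, axis=0)
k = int(round(f_if * n_frames / fs))          # bin index, = 90 here
bin90 = spec[k]                               # complex-valued H x W map

intensity_map = np.abs(bin90) * 2 / n_frames  # per-pixel field amplitude
phase_map = np.angle(bin90)                   # per-pixel field phase

print(np.allclose(intensity_map, amp, atol=0.02))  # recovers the input pattern
```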
Sequential Electric Field Imaging Results

In the 30 GHz patch antenna electric field imaging, sequential electric field images were acquired by performing long-duration imaging and computing each set of 360 frames sequentially. By setting the intermediate frequency slightly off the frequency used in the numerical signal processing that reconstructs the electric field image, the phase can be swept continuously [37]. The signal input to the patch antenna was set to 30 GHz + 90.04 Hz, which makes it possible to observe the electric field image through a phase rotation at 0.04 Hz when the electric field image is reconstructed at 90 Hz. To combine the phase and intensity information into a single image, the normalized EO signal was calculated from each field image as A cos θ, where A is the normalized field intensity and θ is the measured phase. Figure 11 shows the normalized EO signal distribution at 3 s intervals from 0 to 15 s. As time progressed, the phase of the input signal rotated, so the normalized EO signal decreased and reappeared after sign inversion.

Figure 12 shows a time plot of the normalized EO signal intensity for points 1-4 shown in Figure 11, from 0 s to 100 s. As the electric field image was generated at 1 FPS and the phase rotated at 0.04 Hz, a phase change of 8π over 100 s could be observed. The normalized EO signals follow sinusoidal curves because the input signal phase varies linearly with time. The (100)-ZnTe is sensitive to the electric field in the vertical direction from the DUT; accordingly, the electric field strength at the corners of the patch antenna and on the microstrip line was stronger, and the phase there was shifted by π. The EO signal strength at point 4 was very weak because the electric field at the center of the patch antenna is weak.

Conclusions

In this study, we developed a system for the near-field imaging of millimeter waves. Our proposed high-sensitivity polarization imaging technique was introduced into the EO imaging system, and imaging of the electric near field at 28 GHz and 30 GHz was performed.

By introducing a differential amplifier circuit into the polarization image sensor, we confirmed that analog subtraction and amplification could be performed inside the chip and that the performance degradation at low light intensities could be suppressed. In addition, to obtain a high performance with our recently proposed dual-polarization optical system, we verified the method of amplifying the modulated light generated by a CW laser and an optical modulator using an SOA, and confirmed that a higher performance could be obtained with DSB-SC modulation. In the electric field imaging of the 30 GHz patch antenna, we successfully imaged the electric field at 1 FPS and sequentially imaged the phase change at 0.04 Hz. Electric field imaging is possible at a rate of 2400 pixels/s, which is much faster than electric field imaging systems that use mechanical or optical scanning.

This electric field imaging system allows EO crystals to be placed near the antenna of a smartphone or other device to image the nearby electric field in a short period of time. Therefore, the system enables multifaceted evaluation by performing electric field imaging under conditions close to actual use, such as when the antenna is mounted in a housing. Although the sensitivity is slightly lower than that of single-point EO measurement, the spatial resolution and imaging speed are very high, enabling imaging that follows time-varying signals. In particular, by feeding back the electric field frequency of the device under test to the LO system, it is expected to be possible to image the electric field under various communication conditions, such as frequency modulation [38].

It is known that EO crystals respond up to the THz band [12]. By using broadband modulation techniques and synchronizing the IF with the frames of the image sensor, it will be possible to extend this approach to the observation of such very high-frequency electric fields [39-41].
Figure 1. Conceptual diagram of the electric field imaging system.

Figure 2. Polarization-imaging system based on the proposed method.

Figure 4. (a) Signal, noise, and SNR of the pixel. (b) Extinction ratio spectra of the dual-layer grating structure.

Figure 5. Circuit diagram of the differential amplifier.

Figure 6. EO signal spectra obtained from the differential amplifier and from the pixel output difference at (a) high incident light intensity (29.4 mW) and (b) low incident light intensity (1.2 mW).

Figure 11. Sequential electric field distribution images on the patch antenna from 0 to 15 s.

Figure 12. Time plot of the normalized EO signal intensity for points 1-4 shown in Figure 11, from 0 s to 100 s.
Identification of Anaplasma ovis appendage-associated protein (AAAP) for development of an indirect ELISA and its application

Background

Ovine anaplasmosis is a tick-borne disease caused by Anaplasma ovis in sheep and goats. The pathogen is widely distributed in tropical and subtropical regions of the world. At present, diagnosis of the disease depends mainly on microscopy or nucleic acid based molecular tests, although a few serological tests have been applied for the detection of A. ovis infection.

Results

Here we describe the identification of an A. ovis protein that is homologous to the A. marginale appendage-associated protein (AAAP). We expressed a recombinant fragment of this protein for the development of an indirect enzyme-linked immunosorbent assay (ELISA) for the detection of A. ovis. Anaplasma ovis-positive serum showed specific reactivity to the recombinantly expressed AAAP (rAAAP), which was further confirmed by the rAAAP ELISA; the assay also showed no cross-reactivity with sera from animals infected with A. bovis or other related pathogens of sheep and goats. Testing the antibody kinetics of five experimentally infected sheep over 1 year demonstrated that the rAAAP ELISA is suitable for the detection of both early and persistent A. ovis infections. Investigation of 3138 field-collected serum samples from 54 regions in 23 provinces of China showed that the seroprevalence varied from 9.4% to 65.3%, in agreement with previous reports of A. ovis infection.

Conclusions

An antigenic A. ovis protein, AAAP, was identified and the antigenicity of the recombinant AAAP was confirmed. Using rAAAP, an indirect ELISA was established, and the assay has proven to be an alternative serological diagnostic tool for investigating the prevalence of ovine anaplasmosis in sheep and goats.

Background

Ovine anaplasmosis is a tick-borne disease of sheep, goats and other small ruminants caused by Anaplasma ovis [1-3]. Anaplasma ovis is a non-motile, obligate intraerythrocytic Gram-negative bacterium that belongs to the order Rickettsiales [4]. Following the reorganisation of the order in 2001, this pathogen is classified along with A. marginale, A. centrale, A. bovis and A. caudatum, which infect ruminants, A. phagocytophilum, a zoonotic agent, and A. platys, which infects dogs [4]. The biological vectors of A. ovis are ticks of the genera Dermacentor and Rhipicephalus, and most likely other tick species as well [5-9]. The study of A. ovis has often been neglected, since the pathogen is considered to be only moderately pathogenic and to induce mild clinical signs [7,10]. However, A. ovis infections resulting in severe disease have been reported in bighorn sheep, goats and sheep [9,11,12]. Although the pathogen is known to be widespread in tropical and subtropical countries, the extent of infection and the associated loss of livestock productivity remain poorly understood [7,12]. The detection of A. ovis in livestock has traditionally been based on the identification of acute infections by microscopic examination of Giemsa-stained blood smears. Light microscopy is the most inexpensive and quickest laboratory test, but it is also the least sensitive and is highly dependent on examiner experience [12,13]. Moreover, it is crucial that the smears be prepared during the early acute phase of clinical signs and before the initiation of effective antimicrobial treatment.
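Region-level seroprevalence figures such as the 9.4% to 65.3% range reported above are usefully accompanied by a binomial confidence interval when per-region sample sizes are known. The sketch below shows one common choice, the Wilson score interval; the per-region counts are made-up examples, not the study's data.

```python
# Sketch: 95% Wilson score interval for a seroprevalence estimate.
import math

def wilson_ci(positives: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = positives / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

example_regions = {"region A": (9, 96), "region B": (47, 72)}  # hypothetical counts
for name, (pos, n) in example_regions.items():
    lo, hi = wilson_ci(pos, n)
    print(f"{name}: {pos}/{n} = {pos / n:.1%}  (95% CI {lo:.1%}-{hi:.1%})")
```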
Nucleic acid based tests, such as the polymerase chain reaction (PCR), quantitative real-time PCR (qPCR), and loop-mediated isothermal amplification (LAMP), have served as alternative tests for the direct detection of A. ovis infection in both experimental and field studies [14-16]. These methods are limited by their sensitivity in persistently infected carrier animals with very low-level bacteremia [13,17]. In contrast, serological tests have the advantage of detecting antibodies from infected animals during all stages of Anaplasma infection [18]. A competitive inhibition enzyme-linked immunosorbent assay (CI-ELISA) based on the recombinant A. marginale major surface protein 5 (Msp5) has been developed and shown to detect A. marginale-infected cattle, including persistently infected carriers [19]. This assay was later confirmed to be suitable for the detection of antibodies in A. ovis-infected goats, owing to the conservation of Msp5 epitopes among Anaplasma strains [12,20], but it was also found to detect antibodies against A. phagocytophilum and Ehrlichia species [21,22]. Because of this potential for cross-reaction, CI-ELISA results need to be interpreted cautiously. In this paper, we describe the identification of an A. ovis antigenic protein, AAAP, and the development of an indirect ELISA for the specific detection of A. ovis infection in sheep and goats.

Bacteria and experimental animals

The A. ovis isolate used in this study was obtained from an infected sheep in Haibei County, Qinghai Province; blood containing live pathogens and 8% dimethyl sulfoxide (DMSO) has been cryopreserved in liquid nitrogen since 2008 at the Lanzhou Veterinary Research Institute, Chinese Academy of Agricultural Sciences. Three-month-old sheep were purchased from a commercial farm in Jingtai County, Gansu Province. For a month before the animal experiments, the sheep were screened for the absence of A. ovis, Babesia and Theileria by weekly examination of blood smears under light microscopy and by previously described PCR protocols specific for each pathogen [3,23,24].

Preparation of serum samples

Sheep (Nos. 103, 106, 134, 174, 183) were each infected by inoculation with 5 ml of bacteremic blood collected from sheep No. 101 when its bacteremia was approximately 10%. Serum samples were collected every 2 days for the first 15 days, twice a week until day 43, once a week until day 85, once every 2 weeks until day 181, and then once a month for the remainder of a one-year period. Sheep (Nos. 420, 470, 489) were infected by inoculation of infected blood from sheep No. 101 twice, at a two-week interval, for hyperimmune serum preparation; their serum samples were prepared as soon as A. ovis was observed in thin blood smears. Positive sheep sera against A. bovis, Mycoplasma ovipneumoniae, Mycoplasma capricolum capricolum, Babesia motasi, Babesia sp. Xinjiang, Theileria uilenbergi and Theileria luwenshuni, and positive yak sera against A. marginale, were obtained from previous collections and stored at −20 °C in our laboratory. Serum samples collected from sheep (Nos. 103, 106, 134, 174, 183, 420, 470, 489) before infection were used as negative controls. An additional 434 negative samples were obtained from experimental animals purchased from 2009 to 2016; these animals were determined to be free of A. ovis, Babesia and Theileria spp. as described above.
Field samples (n = 3138) were randomly collected from domestic sheep and goats at 54 locations in 23 provinces between 2010 and 2016 (Fig. 1). All samples were collected in non-anticoagulant tubes, and the serum was separated and stored at −20 °C in our laboratory.

DNA specimens

Whole blood was taken from the jugular vein of each experimentally infected animal and collected in a sterile tube containing an anticoagulant (ethylenediaminetetraacetic acid, EDTA). DNA was extracted from the blood using a genomic DNA extraction kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions.

Bacterial purification

Venous blood from sheep No. 101 (10% bacteremia) was harvested into a sterile flask containing anticoagulant (EDTA). The red blood cells were separated by centrifugation at 1000× g for 10 min, and the upper layer containing the white blood cells was discarded. The packed red blood cells were suspended in phosphate-buffered saline (PBS, pH 7.2), and the remaining white blood cells were removed using a commercial leucocyte filter (Nanjing Shuangwei Biotechnology, Nanjing, China). The flow-through was centrifuged as above, and the supernatant was discarded. The harvested red blood cells were suspended in four volumes of PBS containing 7% glycerin, kept at room temperature for 30 min, and then centrifuged again to harvest the red blood cells. The cells were then added to a flask containing four volumes of physiological saline to lyse them completely. The lysate was centrifuged at 1000× g for 10 min to remove cell debris, and the supernatant was then centrifuged at 10,000× g for 30 min to collect the bacterial pellet. The pellet was washed three times with physiological saline by centrifugation at 10,000× g for 10 min. The white pellet at the bottom of the tube was the purified bacteria, which were stored at −70 °C until use.

Immunoprecipitation and mass spectrometric analysis

Fifteen μg of Sepharose beads (CNBr-activated Sepharose™ 4B, GE Healthcare Life Sciences, Beijing, China) were added to 500 μl of 0.1 mM HCl and gently mixed for 15 min. The beads were pelleted by centrifugation at 12,000× g at room temperature for 10 s, resuspended in 600 μl of washing buffer (0.1 mM HCl), and divided into six aliquots of 100 μl in 1.5 ml tubes. Equal amounts of sera from sheep (Nos. 420, 470, 489), collected before and after infection, were added to the respective tubes and incubated at room temperature for 30 min with gentle shaking. The bead-antibody conjugates were pelleted by centrifugation at 3000× g at 4 °C for 2 min and washed three times with washing buffer. The purified bacteria were lysed using RIPA lysis buffer (Beyotime, Beijing, China), and 200 μl of bacterial lysate containing approximately 500 μg of antigen was added to the conjugates and incubated at 4 °C overnight with gentle shaking. The samples were centrifuged at 3000× g at 4 °C for 2 min to collect the immunoprecipitation complexes, which were washed three times with washing buffer. The antibody-antigen conjugates were eluted with 20 μl of elution buffer (100 mM glycine, pH 2.5). The resulting samples were used for sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis. The separated bands were digested with trypsin at 37 °C overnight. Peptides were extracted with 50% acetonitrile (ACN, Fisher Chemical, Shanghai, China) containing 5% formic acid (FA, Fluka, Shanghai, China), followed by 100% ACN.
The peptides were dried, resuspended in 2% ACN containing 0.1% FA, and then identified using liquid chromatography-electrospray ionisation tandem mass spectrometry (LC-ESI-MS/MS) (Triple TOF 5600, AB SCIEX, Concord). The resulting monoisotopic peak values were analysed using the Mascot program [25]. The sequences obtained from the mass spectrometry were used to identify the full-length open reading frame by search and alignment against an ongoing genome sequencing project for A. ovis strain Haibei (GenBank accession no. CP015994).

Cloning of the truncated aaap gene

PCR primers were designed based on the aaap gene sequence from the A. ovis strain Haibei genome. Restriction sites for EcoR I and Hind III were introduced into the 5′ and 3′ primers, respectively. The primers were aaap-F: 5′-CCG GAA TTC AGG GTA CTG GTA ATG GGC-3′ and aaap-R: 5′-CCC AAG CTT CTA AAT AGC AAG ACT TTG CGT ATT AG-3′. Genomic DNA from an infected blood sample from sheep No. 101 served as the template for the PCR. The PCR had a total volume of 25 μl, containing 12.5 μl Premix Taq™ (TaKaRa Taq™ Version 2.0 plus dye), 0.5 μl of each primer (20 μM), 2.0 μl of template DNA and 9.5 μl of distilled water. The cycling conditions were as follows: 4 min of denaturation at 94°C; 35 cycles of 94°C for 1 min, annealing at 55°C for 30 s and extension at 72°C for 1 min; and a final extension step at 72°C for 10 min. The PCR products were cloned into the pGEM-T Easy vector (Promega, Beijing, China) according to the manufacturer's instructions and then digested using EcoR I and Hind III restriction enzymes (New England Biolabs, Hitchin, UK). The resulting fragment was subsequently cloned into the pET-30a expression vector (Novagen, Shanghai, China) using the same restriction sites. Correct insertion of the aaap gene fragment was confirmed by sequencing (Sangon Biotech Company, Shanghai, China).

Expression and purification of the recombinant AAAP protein

The recombinant plasmid pET-30a-P35 was transformed into E. coli BL21 (DE3). The cells were cultured in LB medium at 37°C for 6 h, and expression was induced by the addition of 1 mM isopropyl-β-d-thiogalactoside (IPTG) when the optical density (OD) reached 0.6. The bacterial cultures were harvested and lysed by ultrasonication in binding buffer (20 mM imidazole, 20 mM sodium phosphate, 0.5 M NaCl, 8 M urea, pH 7.6), and the recombinant protein was recovered as inclusion bodies from the E. coli cells. The target protein was purified with the AKTA design system (Amersham Bioscience, Uppsala, Sweden) using a 5-ml HiTrap FF crude column. The column was washed with 3-5 column volumes of distilled water and then equilibrated with at least 5 column volumes of binding buffer; the flow rate was 2 ml/min for the 5-ml column. The pretreated sample was applied using a syringe pump, and the column was then washed with wash buffer (80 mM imidazole, 20 mM sodium phosphate, 0.5 M NaCl, 8 M urea, pH 7.6) until the absorbance reached a steady baseline. The bound protein was eluted with elution buffer (250 mM imidazole, 20 mM sodium phosphate, 0.5 M NaCl, 8 M urea, pH 7.6) until the absorbance reached a steady baseline.
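As a quick sanity check on the cloning design described above, the minimal Python sketch below verifies that the stated EcoR I (GAATTC) and Hind III (AAGCTT) recognition sequences are present in the aaap-F and aaap-R primers quoted in the cloning section; the primer strings are copied from the text with spaces removed.

```python
# Verify the restriction sites stated for the cloning primers given above.
aaap_f = "CCGGAATTCAGGGTACTGGTAATGGGC"            # aaap-F, spaces removed
aaap_r = "CCCAAGCTTCTAAATAGCAAGACTTTGCGTATTAG"    # aaap-R, spaces removed

sites = {"EcoR I": "GAATTC", "Hind III": "AAGCTT"}
for enzyme, site in sites.items():
    for name, seq in [("aaap-F", aaap_f), ("aaap-R", aaap_r)]:
        if site in seq:
            print(f"{enzyme} site ({site}) found in {name} at position {seq.index(site)}")
# -> EcoR I in aaap-F and Hind III in aaap-R, as stated in the Methods
```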
Preparation of AAAP-specific rabbit immune serum

Two New Zealand white rabbits were immunised three times, at 2-week intervals, with 200 μg of recombinant AAAP protein per injection. For the first immunisation, the recombinant AAAP protein was emulsified with Freund's complete adjuvant (FCA) (Sigma-Aldrich, Shanghai, China) at a ratio of 1:1; for the remaining immunisations, AAAP was emulsified with incomplete Freund's adjuvant at a ratio of 1:1. Serum samples were collected 2 weeks after the last immunisation and stored at -20°C until use.

Western blotting analysis

Optimal amounts of the recombinant AAAP protein and the crude antigen (bacterial lysate) were separated by SDS-PAGE on 12% polyacrylamide gels under reducing conditions and transferred to nitrocellulose (NC) membranes. The NC membranes were blocked with 5% skimmed milk powder in 0.1 M Tris-buffered saline (pH 7.6) containing 0.1% Tween-20 (TBST) at 4°C overnight. To verify the expression and purification of the recombinant AAAP protein, the RGS-His™ mouse anti-histidine antibody (1:4000, Qiagen, Hilden, Germany) and a secondary alkaline phosphatase (AP)-conjugated goat anti-mouse IgG + IgM (H + L) antibody (1:10,000, Dianova, Hamburg, Germany) were used to detect the His-tag on the recombinant protein. To test the antigenicity and specificity of the recombinant protein, sheep serum samples positive for A. ovis, A. bovis, M. ovipneumoniae, M. capricolum capricolum, B. motasi, Babesia sp. Xinjiang, T. uilenbergi and T. luwenshuni, as well as negative control serum from uninfected sheep, were diluted 1:100 and used as primary antibodies, with a 1:5000 dilution of AP-conjugated monoclonal anti-goat/sheep antibody (Sigma-Aldrich) as the secondary antibody. To detect native AAAP, pre-immunisation rabbit serum and rabbit AAAP antiserum were tested against the crude antigen on western blots; the secondary antibody was AP-conjugated goat anti-rabbit immunoglobulin antibody (1:5000, Sigma-Aldrich). All serum samples and secondary antibodies were diluted in dilution buffer (TBST containing 1% bovine serum albumin, pH 7.2). Binding of the secondary antibody was detected with 5-bromo-4-chloro-3-indolyl phosphate (BCIP)/nitroblue tetrazolium (NBT) substrate (Sigma-Aldrich). The approximate molecular weights of the protein bands were estimated by comparing their migration with a standard protein ladder (Thermo Scientific, Beijing, China).

Results

Identification of A. ovis aaap

We performed immunoprecipitation assays using A. ovis bacterial protein extracts and serum samples collected from A. ovis-infected animals; as a control, serum collected from the animals before infection was used with the same bacterial extracts. Four bands were detected either as novel bands or at higher densities in the group immunoprecipitated with the positive sera compared with the control group immunoprecipitated with the negative sera (Fig. 2). These bands were further analysed by mass spectrometry, and the resulting peptide sequences were compared by BLAST against the A. ovis genome. Band 4 was identified as aaap, corresponding to AOV_03180, with an open reading frame of 972 bp. The translated protein contains 323 amino acids with a predicted molecular weight of 35.5 kDa. The aaap sequence has been deposited in GenBank under accession number KY670611. A second gene with similar features, designated AOV_03175, lies in tandem with AOV_03180 in the genome and appears to be a truncated version of aaap (Fig. 3). The deduced amino acid sequence of aaap showed 31% identity to the appendage-associated protein of A. marginale (AAAP; AM878; GenBank accession no. AAV86790) (Fig. 3). A truncated aaap fragment encoding 299 amino acids (aa 25-323) was cloned into the pET-30a expression vector for recombinant protein expression.
The pET-30a-P35 plasmid was expected to express an rAAAP protein with a molecular weight of 40.0 kDa. When rAAAP was tested for reactivity with A. ovis-positive serum samples, a clear band of the appropriate size was observed, while no cross-reactivity was seen with serum samples containing antibodies to M. ovipneumoniae, M. capricolum capricolum, B. motasi, Babesia sp. Xinjiang, T. uilenbergi, T. luwenshuni or A. bovis, or with negative serum samples from healthy sheep (Fig. 4). This result indicated that the aaap gene encodes a potential antigenic protein of A. ovis. To identify the native AAAP protein in A. ovis, rabbit anti-rAAAP serum was prepared and used in Western blot analysis with purified A. ovis lysates. Both the native AAAP protein in the lysates and the rAAAP protein were recognised by the rabbit anti-rAAAP sera, while no reaction was observed with pre-immune rabbit sera (Fig. 5). The apparent molecular weight of native AAAP was lower than that of rAAAP in the Western blot, most likely because an extended protein structure of rAAAP retarded its migration during electrophoresis [34, 35]. These data confirmed the A. ovis origin and antigenicity of the AAAP protein.

Establishment of the rAAAP indirect ELISA

The rAAAP-based indirect ELISA was eventually established with 100 μl of 2.5 μg/ml rAAAP protein, 100 μl of a 1:100 dilution of each test serum, and 100 μl of a 1:20,000 dilution of secondary antibody per well; these conditions were used in all subsequent experiments. The specificity of the rAAAP indirect ELISA was evaluated using the control samples used previously in the Western blot analysis, together with yak serum samples positive for A. marginale. Positive reactions were detected with the A. ovis-positive sera and also with the A. marginale-positive yak sera (Fig. 7), whereas no cross-reactivity was seen with the serum samples positive for A. bovis, M. ovipneumoniae, M. capricolum capricolum, Babesia sp. Xinjiang, B. motasi, T. uilenbergi or T. luwenshuni (Fig. 8).

The kinetics of the antibody response in experimentally infected sheep

Serum samples from five experimentally infected sheep (Nos. 103, 106, 134, 174 and 183) were collected at different time points during infection and used to follow the kinetics of the antibody response against rAAAP with the established ELISA (Fig. 9). A significant increase in antibodies against rAAAP was observed after the sheep were infected, although the time of the earliest antibody response varied between individual animals, from 5 to 13 days post-infection. From then on, a sharp increase in the antibody response was observed, and the infected animals typically retained high antibody titers for approximately 100 days, after which a cycling pattern of decreasing and increasing antibody titers appeared in some animals (especially Nos. 103 and 174). Moreover, the antibody response could still be detected a year after infection, indicating that the test is a suitable tool for monitoring persistent A. ovis infections.

Detection of the field samples with the rAAAP indirect ELISA

The field samples were tested with the rAAAP indirect ELISA. The mean positive rate was 35.3% (1106/3138), with the highest positive rate, 66.7% (66/99), in Yunnan Province and the lowest, 9.4% (8/85), in Henan Province (Table 1).
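The positive rates just quoted are simple proportions. The sketch below recomputes them from the reported counts and attaches 95% Wilson score confidence intervals, a standard uncertainty measure that the paper itself does not report, so the intervals are an illustrative addition rather than the authors' results.

```python
import math

def wilson_ci(pos, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = pos / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Counts taken directly from the results above.
for label, pos, n in [("overall", 1106, 3138),
                      ("Yunnan", 66, 99),
                      ("Henan", 8, 85)]:
    lo, hi = wilson_ci(pos, n)
    print(f"{label}: {pos}/{n} = {pos/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```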
Discussion

Ovine anaplasmosis has been neglected, perhaps because of knowledge gaps concerning its pathogenicity, morbidity, mortality, clinical signs and economic losses. The causative agent, A. ovis, is considered a moderate pathogen, typically inducing only subclinical signs [4, 7]. However, an exception was found in sheep and goats in Ejinaqi, western Inner Mongolia, China, where morbidity was as high as 40-50%, mortality reached 17%, and clinical signs such as anemia, jaundice and emaciation were observed [9]. Anaplasma ovis infection causing severe disease has also been reported in bighorn sheep, domestic sheep and goats in North America and Africa [11, 12]. The A. ovis Haibei strain used in this study caused several deaths in sheep herds in Haibei County, Qinghai Province, in 2008; in addition, when healthy and splenectomized sheep were inoculated with infected blood of the Haibei strain, the animals died. With the development and application of DNA-based tests such as conventional PCR and specific qPCR assays [16, 27], a growing number of studies have demonstrated high infection rates of A. ovis in North America, Europe, Africa, the Middle East and Asia, as summarized by Renneker et al. [7]. These data indicate that more attention should be paid to the economic impact and health implications of A. ovis infection in small ruminants.

Two types of serological tests have been applied for the analysis of A. ovis infection: complement fixation (CF) and a recombinant A. marginale Msp5-based competitive inhibition ELISA (CI-ELISA) [9, 10, 12, 28]. Although the A. marginale-derived CI-ELISA successfully detects antibodies arising from A. ovis infection in sheep and goats, the fact that the Msp5 B-cell epitope is conserved among Anaplasma species [21, 29] means that the results need to be interpreted cautiously, given the potential for coinfection with other Anaplasma species in sheep and goats [30-33]. Further, the CI-ELISA cannot be used to quantitatively evaluate antibody titers in the manner of a direct ELISA. A lack of knowledge of A. ovis-specific antigens has restricted the development of a species-specific test: although a few A. ovis antigens, including Msp2, Msp3 and Msp4, have been reported [3, 34, 35], none of them has been developed into an A. ovis-specific serological test. In the present study, we identified the aaap gene from the A. ovis genome using mass spectrometry data. Recombinant AAAP reacted specifically with A. ovis-positive sera in both Western blot and ELISA analyses, while no cross-reactivity was observed with positive sera against A. bovis and other related agents. However, cross-reactivity with A. marginale-positive sera occurred in the rAAAP ELISA, most likely because of similar AAAP amino acid sequences in A. ovis and A. marginale, such as multiple imperfect peptide repeats centred around the sequence ELKAIDA [36]. Rabbit anti-rAAAP serum was able to detect native AAAP on Western blots of purified A. ovis lysates from infected blood, which revealed multiple protein bands with molecular sizes around 35 kDa; this is consistent with the fact that A. ovis contains tandemly duplicated copies of aaap in its genome [37]. Although a few studies have reported infection of wild ruminant species, such as bighorn sheep and white-tailed deer, with A. marginale [3, 38], A. marginale infection of sheep and goats has not been found in China or in most other parts of the world, which indicates that the AAAP indirect ELISA has the potential to serve as the basis of species-specific diagnostic assays for sheep and goats. However, only a limited number of A. bovis serum samples, and no serum samples against A. phagocytophilum, A. capra or Ehrlichia spp.,
were included in the present study; therefore, further evaluation of the specificity of the rAAAP ELISA is needed.

In this study, an indirect ELISA was established using rAAAP for the detection of antibodies to A. ovis infection. The accuracy of an ELISA depends on the cut-off value used to classify samples as seropositive or not, and changing the cut-off value changes the results of the test [38, 39]. The cut-off for the rAAAP ELISA was determined to be 6.0% (AbR%), the value at which the minimal total number of diagnostic errors (false positives plus false negatives) was obtained after testing 434 negative and 163 positive reference sera. This is the most direct approach to defining the optimal cut-off for a serological test [38, 39]. With this threshold, the sensitivity and specificity of the rAAAP ELISA were calculated to be 91.4 and 94.5%, respectively.

The prospective use of the rAAAP ELISA for detecting A. ovis infection was verified by following the antibody kinetics for one year in five experimentally infected sheep. The test detected an antibody response 5 to 15 days post-infection, a time frame in agreement with the biological features of Anaplasma infection, which usually takes one to several weeks to become established [9, 40]. After the infection was established, a sharply increasing antibody response appeared, and the high antibody titer lasted for around 3 months. A persistent antibody titer was detectable with the rAAAP ELISA until the end of the experiment. These results demonstrate the potential usefulness and applicability of this ELISA for detecting early infection and monitoring persistent A. ovis infection. In addition, a fluctuating antibody titer was seen during persistence in sheep Nos. 103, 106, 134 and 174, but was less apparent in sheep No. 183. This is in line with the two patterns of persistent bacteremia described in A. ovis-infected goats: the first pattern is characterised by cyclic fluctuation, similar to the pattern described for A. marginale-infected cattle, while in the second pattern the bacteremia levels remain relatively constant [34, 41, 42]. Whether these phenomena are related to antigenic variation of the msp2 and msp3 multigene families, resulting in cyclic rickettsemia during persistent A. ovis infection, remains unknown [34], although this mechanism has been demonstrated in A. marginale [41, 42].

With the established rAAAP ELISA, a large-scale study of 3138 sera from sheep and goats collected at 54 locations in 23 provinces was undertaken. Seropositivity for A. ovis infection was detected in almost all of the sampled regions, except Baotou in Inner Mongolia and Jinghua and Lishi in Zhejiang Province. The negative results for these three regions are most likely due to the limited sample sizes, because the presence of A. ovis infection has been demonstrated in Inner Mongolia and Zhejiang in our previous studies [9, 31]. A wide distribution of A. ovis infection in the investigated regions likely reflects the true situation in China, since the existence of A. ovis infection in most of these provinces has been reported in recent years [30, 31, 43, 44].
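The cut-off selection procedure discussed above, minimizing false positives plus false negatives over the reference sera, can be sketched as follows. The AbR% distributions of the 434 negative and 163 positive reference sera are not published, so the synthetic values below are placeholders chosen only to make the procedure runnable; the printed cut-off and performance figures will not match the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic AbR% values standing in for the 434 negative and 163 positive
# reference sera (the real distributions are not given in the paper).
neg = rng.normal(3.0, 1.5, 434).clip(min=0.0)
pos = rng.normal(15.0, 6.0, 163).clip(min=0.0)

def total_errors(cutoff):
    fp = (neg >= cutoff).sum()   # negatives misclassified as positive
    fn = (pos < cutoff).sum()    # positives misclassified as negative
    return fp + fn

# Sweep candidate cut-offs and keep the one minimizing total errors.
cutoffs = np.arange(0.0, 30.0, 0.1)
best = min(cutoffs, key=total_errors)
sensitivity = (pos >= best).mean()
specificity = (neg < best).mean()
print(f"optimal cut-off = {best:.1f}% AbR, "
      f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```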
Conclusion

An A. ovis-derived antigenic protein, AAAP, was identified in the present study. The antigenicity of the recombinant AAAP was confirmed by testing with A. ovis-positive sera and rabbit anti-rAAAP serum. Using rAAAP, an indirect ELISA was established and shown to be an alternative serological diagnostic tool for investigating the prevalence of ovine anaplasmosis in sheep and goats. However, this method may not be fully specific for indicating exposure to A. ovis; further studies are therefore needed to characterise the aaap gene in other related pathogens and to systematically evaluate the detection performance of the rAAAP ELISA for future application in the field.
Coupled reversion and stream-hyporheic exchange processes increase environmental persistence of trenbolone metabolites

Existing regulatory frameworks for aquatic pollutants in the United States are idealized, often lacking mechanisms to account for contaminants characterized by (1) bioactivity of both the parent and transformation products and (2) reversible transformations (that is, metastable products) driven by chemical or physical heterogeneities. Here, we modelled a newly discovered product-to-parent reversion pathway for trenbolone acetate (TBA) metabolites. We show increased exposure to the primary metabolite, 17α-trenbolone (17α-TBOH), and elevated concentrations of the still-bioactive primary photoproduct, hydroxylated 17α-TBOH, which is produced via phototransformation and then converted back to 17α-trenbolone in perpetually dark hyporheic zones that exchange continuously with surface-water photic zones. The increased persistence equates to a greater potential hazard from parent-product joint bioactivity at locations and times when reversion is a dominant trenbolone fate pathway. Our study highlights uncertainties and vulnerabilities in current paradigms of risk characterization.

Cattle growth hormone metabolites found in agricultural runoff are primarily removed from surface waters by photodegradation. Here, Ward et al. develop a model of stream transport, finding that reversion in perpetually dark hyporheic zones increases the environmental persistence of these endocrine disruptors.

With tens of thousands of synthetic chemicals discharged to the environment, it is unrealistic to individually collect comprehensive environmental fate data to accurately define human and ecological health risks. Accordingly, human and ecological risk assessments currently use approaches built upon expectations from the relatively small subset of contaminants whose environmental fates and associated biological risks are well understood. However, the danger implicit in this strategy is that prediction of presumptive biological risks can fail catastrophically for chemical contaminants with reactivities that deviate from expectations. To address this shortfall, computational and numerical modelling efforts can project the environmental fate of chemicals based on a combination of chemical properties and environmental system conditions. Such modelling can also be used to initially screen ecological risks and to guide future research efforts such as field sampling. Especially for chemicals of high production volume, widespread environmental occurrence, or unusually high potency and potential toxicity, such efforts are critical to improving the assessment of biological risk. As an example, we point to our recent research on the product-to-parent reversion of trenbolone acetate (TBA) metabolites, a previously unidentified chemical process with a poorly understood influence on the environmental fate of these widely used and potent growth promoters [1]. The use of growth-promoting steroid hormones is ubiquitous in the US beef cattle industry; these pharmaceuticals reduce production costs by up to 7% and represent roughly $1 billion annually in incremental economic value [2]. Administered by ear implantation, TBA is converted to the androgen 17β-trenbolone (17β-TBOH), which is far more potent than testosterone and primarily responsible for anabolic effects such as increased weight gain. 17β-trenbolone is excreted along with other TBA metabolites, including 17α-TBOH and trendione, that are subsequently mobilized in runoff (Fig. 1) [3].
17α-TBOH represents about 95% of identifiable excreted metabolites by mass [4], although a number of unidentified TBA metabolites likely exist. Given the typical application rates of TBA to cattle, mass balance calculations predict that thousands of ng l⁻¹ are possible in agricultural runoff [5, 6], although concentrations observed in receiving waters are typically far lower when detected [7, 8]. Field observations have documented 1,700 ng l⁻¹ of 17α-TBOH in manure lagoons [9], and 55 ± 22 ng g⁻¹ in solid manure and surface soils [10]. TBA metabolite exposure is problematic for ecosystem health because 17α-TBOH and 17β-TBOH are potent endocrine disruptors, with exposure to as little as 10-30 ng l⁻¹ significantly reducing fecundity, resulting in phenotypic sex reversal in fish, or altering endocrine function [11-13].

Until recently, rapid photodegradation of TBA metabolites in sunlit surface waters was believed to effectively mitigate their environmental risk [14]. However, we recently reported that the major photoproducts of recognized TBA metabolites, hydroxylated trenbolone species (hereafter hydroxy-17α-TBOH), are metastable and able to regenerate their respective parent compounds [1]. These reversible transformations (that is, 'product-to-parent reversion') occur when the rapid forward reaction by direct photolysis (that is, photohydration) in sunlight is countered by slower dehydration reactions in the dark (see the overview of the physical-chemical system in Fig. 1). The relative rates of the forward and reverse reactions ultimately control net photolysis or reversion as a function of both physical characteristics (for example, light exposure) and biogeochemical conditions (for example, pH, temperature).

The first objective of this study is to quantify the impacts of these newly reported reversible reactions on TBA metabolite concentrations in fluvial systems. We designed a series of numerical experiments to highlight key differences between our former (without-reversion) and current (with-reversion) understanding of the environmental fate of TBA metabolites. We hypothesize that the net effect of these processes operating in stream-hyporheic systems is increased persistence of TBA metabolites in the stream system, resulting in enhanced hydraulic transport and higher concentrations. This increased persistence is particularly troublesome for compounds like TBA metabolites, where both the parent species and the transformation products retain bioactivity [15]. Our second objective is to characterize the expected joint bioactivity (JBA) arising from the suite of TBA metabolites and their photoproducts, including their spatial and temporal occurrence, partitioning and interactions with hydrological dynamics in stream systems. Accordingly, we calculate JBA with- and without-reversion to estimate the risk represented by atypical characteristics like reversible transformation processes, and we extend our findings to consider a broader range of risk, wherein JBA is a blend of product and parent concentrations with different relative potencies. This work extends the concept of 'joint persistence' originated by Fenner et al. [16-18] to account for the range of potencies (that is, bioactivity metrics) expected for parent compounds and related transformation products.
We note that, although our simulations are based on TBA metabolites as the pollutant of interest, they are applicable to a range of compounds for which reversible transformations driven by interacting, heterogeneous physical and/or chemical processes occur in the environment. Our results demonstrate that interactions of primarily physical and chemical processes can yield unexpected transport and fate behaviours, ultimately retaining a substantial fraction of TBA-derived potency in the system over many days. Finally, we generalize the model results to cases where differential reactivity controls parent-product partitioning and JBA in the system. Although this modelling effort focuses on an idealized case for the transport and fate of TBA metabolites, to better understand the environmental implications of their reversion pathway, the results apply more generally to any system where heterogeneity in the physical or chemical environment affects parent-product fate dynamics. Ultimately, we demonstrate that current assessment approaches remain prone to unintended consequences because they are ill-equipped for pollutants with atypical transformation pathways and cannot accurately predict how heterogeneous physical and chemical environments control JBA.

Results

Reversion increases 17α-TBOH persistence in stream networks.

The numerical simulations demonstrate that reversion cycling increases in-stream and hyporheic 17α-TBOH concentrations, compared with the without-reversion case, throughout the stream network (Fig. 2). Spatial profiles for the cases with and without reversion are shown for both 17α-TBOH and hydroxy-17α-TBOH throughout a 24-h cycle in Fig. 2 (see also Supplementary Movie 1 and Supplementary Fig. 1). The spatial and temporal scales required to remove peak concentrations during a 24-hour cycle, taken here as the management target of interest, vary between cases (maximum of the shaded 24-hour range for 17α-TBOH; Fig. 3). Advective travel times to achieve 50% removal of the input concentration are 14.4 and 13.8 h (38.0 and 36.2 km) for the cases with and without reversion, respectively. Times to 90% removal are 287.4 and 16.2 h (757.4 and 42.7 km) for the with- and without-reversion cases, respectively, and the disparity grows to 406.9 and 25.9 h (1,072.4 and 68.2 km) to achieve 99% removal (linear extrapolation beyond the model domain for the with-reversion case). In contrast, timescales for the forward reaction in idealized laboratory studies were reported as 1.1 to 2.5 h. In our simulations, the complexity of the fluvial environment, particularly the perpetually dark hyporheic zone where reversion processes dominate even during periods of light, results in the increased persistence of 17α-TBOH in the stream.

Patterns of 17α-TBOH are spatially and temporally variable in the stream network. At night, the inflowing 17α-TBOH is primarily advected in the stream channel. Hyporheic exchange decreases maximum in-stream concentrations of 17α-TBOH through exchange between the relatively high-concentration stream water and the low-concentration hyporheic water in both the with- and without-reversion cases (Fig. 3(1)). The decrease is more pronounced in the without-reversion case, where concentration gradients between the stream and hyporheic zone are steeper (hyporheic reversion to 17α-TBOH decreases this gradient in the with-reversion case). The results in Fig. 3 provide data that could be used to estimate both acute and chronic exposure, best characterized by the maximum and mean concentrations, respectively.
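As a consistency check on the removal metrics above: with plug-flow advection, each quoted distance should equal the advective velocity times the travel time. The short sketch below reproduces the quoted distances to within about 0.5% using the velocity implied by the Methods (Q = 0.5 m³/s over a 0.68 m² cross-section).

```python
# distance = velocity x travel time, using the geometry from the Methods
v = 0.5 / 0.68   # advective velocity (m/s)
for target, hours in [("50% removal, with-reversion", 14.4),
                      ("50% removal, without", 13.8),
                      ("90% removal, with-reversion", 287.4),
                      ("90% removal, without", 16.2),
                      ("99% removal, with-reversion", 406.9),
                      ("99% removal, without", 25.9)]:
    print(f"{target}: {v * hours * 3600 / 1000:.1f} km")
# -> ~38.1, 36.5, 760.8, 42.9, 1077.1 and 68.6 km, closely matching the
#    38.0, 36.2, 757.4, 42.7, 1,072.4 and 68.2 km quoted in the text
```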
After 12 h of advection, maximum in-stream concentrations are 77.1 and 86.3% of the input concentration for the without-reversion and with-reversion cases, respectively. When forward photolysis rates exceed reversion rates (during periods of light), rapid decreases in in-stream 17α-TBOH concentrations occur in both cases (Fig. 3(2)). In-stream 24-h minimum values are dominated by the forward photolysis reaction (Fig. 3(3)), achieving initial removals of 50 and 90% at 0.34 and 1.20 h for the without-reversion case, and at 0.35 and 1.24 h for the with-reversion case. Removal of 99% of the input 17α-TBOH is achieved in the without-reversion case after 25.9 h (Fig. 3(4)), while in the with-reversion case daily peak 17α-TBOH concentrations of 30.4 and 40.9% of the input persist in the stream and hyporheic zone, respectively (Fig. 3(5)). Finally, beyond 38 km in the network, hyporheic-zone concentrations of 17α-TBOH are higher than in-stream concentrations throughout a complete 24-h cycle (Fig. 3(6)); upstream of this location, hyporheic concentrations oscillate between being higher and lower than in-stream concentrations. This location defines a transition point from an upstream source-dominated reach to a downstream reversion-dominated reach. The transition location is primarily set by in-stream advection and the photoperiod, specifically the distance 17α-TBOH is transported during periods when the reversion rate exceeds the photolysis rate. Longer photoperiods or slower advective velocities move this location closer to the source. In these cases, the partitioning between the stream and hyporheic zone, responsible for the declining maximum concentrations in-stream (Figs 2-4), decreases in importance.
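A rough estimate of that transition distance follows directly from the rates given in the Methods: find the fraction of the day during which the half-sine photolysis rate falls below the reversion rate, then multiply the corresponding duration by the advective velocity. The sketch below yields roughly 32 km, the same order as the ~38 km transition reported above (the simple product ignores hyporheic storage, which presumably accounts for the difference).

```python
import numpy as np

v = 0.5 / 0.68        # advective velocity (m/s): Q / A from the Methods
k_rev = 0.02          # reversion rate constant (1/h)
k_peak = 2.0          # peak photolysis rate constant (1/h)

# Photolysis follows a half sine wave over the 06:00-18:00 photoperiod;
# find the fraction of the day when k_photo(t) < k_rev.
hours = np.linspace(0, 24, 86400, endpoint=False)
k_photo = np.where((hours >= 6) & (hours <= 18),
                   k_peak * np.sin(np.pi * (hours - 6) / 12), 0.0)
reversion_hours = (k_photo < k_rev).mean() * 24.0

distance_km = v * reversion_hours * 3600 / 1000
print(f"{reversion_hours:.2f} h per day reversion-dominated, ~{distance_km:.0f} km")
# -> roughly 12 h and ~32 km per diel cycle
```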
Hyporheic zones exhibit time-variable source-sink behaviour.

Transport and transformation in the stream interact with storage and reversion in the hyporheic zone along the stream, leading to extended resupply of 17α-TBOH to the stream and elevated in-stream concentrations (Figs 2 and 3; Supplementary Movie 1). The hyporheic zone acts as a net source of 17α-TBOH to the stream when hyporheic concentrations exceed those in-stream (that is, the hyporheic time series plots above the stream time series), and a net sink at times when 17α-TBOH concentrations are lower in the hyporheic zone than in the stream (that is, the hyporheic time series plots below the stream time series; Fig. 4 and Supplementary Fig. 2). Net source or sink behaviour is defined by the dynamic development of concentration gradients between the stream and hyporheic zone, and its magnitude by the exchange rate (α) and the relative sizes of the stream and hyporheic zone (commonly As/A). This dynamic source-sink characteristic contrasts with the definitions of source and sink used for chemical systems that do not consider product-to-parent reversion, where sinks often indicate irreversible loss and sources indicate new mass (as opposed to reverted mass) entering the system. During periods of darkness, the input 17α-TBOH is primarily advected downstream at the plateau input concentration (Fig. 4). The exchange of water between the stream and hyporheic zones results in hyporheic concentrations that always trend towards the in-stream concentrations but never reach equilibrium (Fig. 4), because the characteristic timescale of exchange is longer than the timescale driving in-channel dynamics (that is, the photoperiod). Exchange of water between the stream and hyporheic zone reduces the amplitude of the diel in-stream concentration time series. Similar hyporheic buffering of stream temperatures varying on a diel basis has been broadly reported. 17α-TBOH advected downstream during periods of darkness is exchanged into the hyporheic zone, stored temporarily in this domain with no photolysis, and then slowly released back to the stream, which increases in-stream minimum concentrations of 17α-TBOH. Similarly, the inverse processes occur in daylight and decrease in-stream maximum concentrations. Hyporheic buffering of in-stream concentrations occurs throughout the model domain for both the with- and without-reversion cases.

For the case without reversion, the timing and magnitude of the net source and sink behaviour are solely a reflection of the characteristic timescale of hyporheic exchange relative to the timescale of diel in-stream fluctuations. Large exchange rates and small hyporheic zones cause hyporheic 17α-TBOH concentrations to converge on dynamics identical to those in the stream alone, reducing the buffering effect, while the opposite occurs for the inverse case. For a smaller hyporheic zone and/or a faster exchange rate, the stream and hyporheic zone would track each other more closely in their time series. At one extreme, the hyporheic zone would be instantaneously well-mixed with the stream and have minimal impact on the 17α-TBOH time series at a location, due to either a small hyporheic zone or extremely rapid exchange with the stream. Conversely, a low exchange rate would functionally decouple the stream and hyporheic zone.

For the case with reversion, the exchange processes described above interact with slowly increasing hyporheic 17α-TBOH concentrations as product-to-parent reversion occurs. After 24 h of transport, daily average in-stream 17α-TBOH concentrations are 12.8 and 0.4% of the input concentration for the with- and without-reversion cases, respectively. Throughout the model domain, and particularly in the reversion-dominated reach, reversion maintains higher 17α-TBOH concentrations in the stream and hyporheic zone. Increased hyporheic concentrations reduce the net sink function of the hyporheic zone, proportionally increasing its source function (Fig. 3). The magnitude of the net source function is controlled by the rate of 17α-TBOH regeneration relative to the stream-hyporheic exchange rate; the source function is minimized wherever the exchange rate removes 17α-TBOH faster than it can be regenerated. In contrast, the without-reversion case loses 10% of the total simulated mass (during the forward photolysis reaction) and then conservatively transports the remaining 90% as hydroxy-17α-TBOH.

Although we expect that other attenuation processes affect the transport and fate of 17α-TBOH and hydroxy-17α-TBOH, sufficient data characterizing such processes are not available for both species, and thus we omitted them from our simulations. Nevertheless, expected behaviour can be described. Reversible sorption-desorption processes would create additional time lags in the temporary storage of 17α-TBOH, increasing the buffering of in-stream concentrations (assuming minimal photolysis in the sorbed phase, primarily within the hyporheic zone). Sorption would also represent an irreversible removal of dissolved mass for the permanently bound fraction of the sorbed species. Biotransformation of 17α-TBOH is expected to compete with reversion and reduce concentrations in the stream and/or hyporheic zone. We note, however, that such competing loss processes are already represented in part in our simulations: each photolysis cycle converts 10% of the reacted mass to irrevertible, non-bioactive products (for example, the 11,12-dialdehyde product [15]). This represents a continuous loss of mass over time as product and parent continue to cycle. Additional competing reactions would be expected to simply increase the magnitude of this loss term. If all loss processes are considered first order, the net sum of their reaction rate coefficients would dictate the behaviour of the system. For example, a hyporheic zone with a biotransformation rate for 17α-TBOH larger than its reversion rate would exhibit increased sink behaviour, although periods of net source behaviour are expected to persist even in this case. Finally, characterization of non-linear interactions (for example, those associated with differing behaviour of sorbed and dissolved compounds in the hyporheic zone) is generally lacking and would require increased model complexity.

[Fig. 4 caption, displaced into the body text in the source: The hyporheic zone is a net source of 17α-TBOH when the stream concentration is less than the hyporheic concentration (that is, during daylight periods when photolysis dominates in the stream), and a net sink when the stream concentration is greater than the hyporheic concentration (typically during periods of darkness). Cases with and without reversion have similar source-sink behaviour for the first 38 km of transport, but hyporheic zones become persistent sources of 17α-TBOH to the stream beyond this location, where hyporheic reversion can increase in-stream concentrations to nearly 40% of the input concentration. The diminishing night-time plateau in the downstream direction represents the decreased influence of the night-time input signal on the diel pattern at a spatial location.]

Yield and bioactivity partitioning control ecological risk.

One benefit of our modelling approach is that we can explore fate and risk scenarios that might otherwise be too difficult to assess via traditional approaches wholly reliant on demanding and expensive analytical measurements. For example, a scenario that complicates risk management for TBA metabolites is that their photoproducts seem to retain some aspects of bioactivity distinct from their parent metabolites [15]. Thus, confining ecological risk considerations only to the known bioactivity of 17α-TBOH would underestimate the ecological hazard posed by the mixture of parent and product bioactivity. This case is not limited to TBA metabolites, as a growing number of studies demonstrate increased JBA, or total combined bioactivity, attributed to mixtures of parent compounds, metabolites and transformation products. In fact, summed mixture persistence in a network has been termed 'joint persistence', where several reactions occur in series, to account for the summed persistence of all related compounds [16-18]. Inclusion of bioactivity metrics in a similar framework is a logical extension of this concept. We note that the concept of JBA is beginning to be addressed using regulatory concepts such as 'predicted no-effect concentrations' (PNECs), for which mixture bioactivity is the key metric of interest rather than the concentration of any single constituent [19, 20]. Indeed, it is probably most logical to move away from regulatory approaches structured around individual constituents in general, and towards biologically driven end points such as those structured within adverse outcome pathways [21].
In some cases, accounting for JBA causes mixtures to exceed PNECs that would not otherwise be exceeded if only the parent compound were considered [22]. Trenbolone metabolites are ideal examples with which to illustrate mixture bioactivity effects because their product-to-parent reversion dynamics will be key contributors to JBA in affected watersheds. Further, our modelling approach enables us to quantify JBA for 17α-TBOH and its primary photoproduct, hydroxy-17α-TBOH, in a stream-hyporheic system at a resolution that is not possible from laboratory experiments or field data alone.

We used our model to consider five representative scenarios in which bioactivity was apportioned to 17α-TBOH and hydroxy-17α-TBOH through a range of relative potencies (see Methods section). Bioactivity coefficients carry the subscript '17α' for 17α-TBOH and 'OH' for hydroxy-17α-TBOH, and the subscripts 'w' and 'w/o' denote cases with or without product-to-parent reversion, respectively. As a baseline for comparison (Case 1 w/o), we set a JBA value of 1.0 to represent the bioactivity expected solely from the input of 17α-TBOH at the upstream end of the model domain, without considering reversion. This case is considered the baseline for risk assessment because it represents the current regulatory approach to 17α-TBOH fate in the United States (that is, no reversion and no bioactivity attributed to transformation products). For all subsequent cases, we present both the absolute JBA and the change in JBA relative to the baseline case (Fig. 5).

For the baseline scenario (Case 1 w/o; B 17α = 1; B OH = 0), bioactivity is equal to the concentration of 17α-TBOH, rapidly decreasing in the stream and hyporheic zone upon 17α-TBOH transformation. Daily average JBA in both the stream and hyporheic zone is reduced by 99% at a distance of about 53 km in the network; we take this network location as a key point for comparison between scenarios relative to the baseline. Without-reversion cases approach a plateau beyond this distance, where only dilution by inflows reduces in-stream JBA. For Case 1 w, we found increases in in-stream and hyporheic JBA of 12 and 39% above the baseline at 53 km, respectively. Beyond this network location, all cases with reversion exhibited a persistent JBA fraction in the network, decreasing by only about 1.3% of the input JBA per 100 km owing to the 10% partitioning to irrevertible, non-bioactive products during each photolysis-reversion cycle; this rate applies to all cases that include reversion. For Case 2 w/o (B 17α = 1; B OH = 0.3), where the primary transformation product hydroxy-17α-TBOH retains partial bioactivity, JBA in the stream and hyporheic zone is increased over the baseline scenario by 27% at 53 km. For Case 2 w, the increases in JBA are 34 and 53% for the stream and hyporheic zone, respectively. For any case where B 17α > B OH, concave-up patterns of JBA are produced in the network, indicating decreasing exposure in the downstream direction as equilibrium favours the less-active form, hydroxy-17α-TBOH. For Case 3 w/o (B 17α = 1; B OH = 1), in-stream and hyporheic JBA are about 89% larger than the baseline value at 53 km because the product is substantially more potent than in Case 2. Despite the fact that parent and product are equipotent, the input JBA is not perfectly conserved because the stream and hyporheic zone must come into equilibrium; declines in JBA along the stream reflect the unequal distribution of steroid mass between the stream and hyporheic zone.
Elevated JBA levels persist indefinitely in these simulations and are only reduced by competing physical or chemical attenuation processes (for example, dilution by inflow, sorption, biotransformation) in the network. For Case 3 w/o, the 10% partitioning to non-bioactive products occurs only once (that is, no mass undergoes photolysis more than one time), resulting in a constant JBA exposure through the network after this stream-hyporheic partitioning (horizontal lines in Fig. 5). For Case 3 w, in-stream and hyporheic JBA are 86% larger than baseline values at 53 km; the slight reduction in JBA compared with Case 3 w/o reflects reverted 17α-TBOH being photolysed, with 10% conversion to stable products, over multiple cycles.

Cases 4 and 5 reflect systems where product potency exceeds parent potency (B 17α < B OH). In each case the input JBA is less than the baseline in relative terms because of the lower-potency parent, but JBA rapidly increases in the network as the forward reaction proceeds, exhibiting a characteristic concave-down profile. For Case 4 w/o (B 17α = 0.3; B OH = 1), in-stream JBA at 53 km is increased by 88% compared with baseline conditions in both the stream and hyporheic zone. For Case 4 w, the increases in JBA over the baseline case in the stream and hyporheic zone are 77 and 58% at 53 km, respectively. For Case 5 w/o (B 17α = 0; B OH = 1), JBA in the stream and hyporheic zone is 88% higher than baseline values at 53.3 km. For Case 5 w, the increases over baseline are 73 and 47% at 53 km. For cases with B 17α < B OH, representing the formation of relatively more potent products, it is not surprising that environmental risk assessment is especially challenging, because the location of maximum JBA exposure is not proximal to the release, which necessitates analysis of hydraulic retention times and reaction times. Thus, for cases where products retain some bioactivity, especially where product bioactivity arises from distinct pathways (for example, interaction with a different receptor or a different biochemical pathway), it is reasonable to expect maximum impacts at locations distant from contaminant sources.

Discussion

We found that hyporheic exchange and reversion processes increase downstream JBA by 34% in the stream and 53% in the hyporheic zone (Case 2 w) compared with estimates that consider only the irreversible forward reaction (Case 1 w/o). At the time of publication, reliable field sampling and measurement protocols for TBA metabolites and photoproducts have not been established; thus, the outcomes detailed here are projected based on state-of-the-science laboratory results and numerical models. Compared with a baseline case in which mass is assumed to be fully removed by photolysis occurring rapidly in the stream, these sustained concentrations pose a significant additional hazard. At a minimum, we believe these results justify revisiting regulatory strategies and considering the use of mixture-based PNECs. For endocrine-disrupting compounds in general, where source concentrations have been documented at thousands of ng l⁻¹ and impacts demonstrated at tens of ng l⁻¹ or less, these increased concentrations may be ecologically relevant. This modelling also provides a clear demonstration that simplified fate models, such as half-life, are most applicable only proximal to the source. To estimate downstream concentrations or JBA accurately, a new, more complete paradigm is needed.
This approach may need to be compound specific, and it must explicitly consider the dominant interacting physical and chemical processes, including the formation of transformation products, that govern transport and fate outcomes, to avoid underestimation of environmental risk. This issue would be particularly acute for bioactive species with moderate-to-high persistence, or for potent species known to yield bioactive products, as these compounds may lack sufficient attenuation mechanisms to reduce JBA and the associated risk in environmental systems. For compounds where even trace concentrations are ecologically relevant, current risk assessment paradigms are particularly vulnerable to atypical processes such as reversible transformation driven by physical and/or biogeochemical heterogeneities. For the TBA metabolite simulations presented here, if a PNEC limit is not achieved within the source-dominated reach, where rapid decay of JBA dominates, subsequent transport in the reversion-dominated reach will maintain concentrations above the PNEC threshold and JBA will persist at unsafe levels. In systems characterized by product-to-parent reversion, effective removal in the source-dominated reach is critical if a reaction process, such as rapid photo- or biotransformation, dominates expected reductions in environmental risk. Management targeting the source of TBA metabolites, or their removal within the source-dominated reach, is therefore important to reduce any impact on fluvial ecosystems, rather than relying on naturally occurring attenuation processes farther downstream in the network.

Model simulations demonstrate increased persistence of 17α-TBOH and the associated JBA in stream systems, where the hyporheic zone becomes a hot-spot for product-to-parent reversion and unexpectedly acts as an important diffuse source of bioactive parent compound to surface waters. Although 17α-TBOH dynamics are similar in the source-dominated reach for the with- and without-reversion cases, where approximately 60% of removal occurs, reversion subsequently increases removal timescales by a full order of magnitude in the reversion-dominated reach. Finally, depending on the relative parent-product potency, which admittedly remains uncertain for most classes of environmental pollutants, we demonstrate the substantial divergence of bioactivity predictions with and without reversion, particularly with respect to defining long-term contaminant persistence in the stream.

Current regulatory paradigms that focus on individual species fail to address joint persistence and JBA in the environment. Furthermore, paradigms based on representative timescales of an individual reaction are insufficient to characterize ecosystem exposure for chemical systems characterized by product-to-parent reversion and by reaction rates driven by heterogeneity in environmental systems (for example, differential reactivity in perpetually dark hyporheic zones versus dynamic forward reactions in streams). The key issue highlighted by this study is the potential implication that an atypical or unexpected contaminant characteristic (for example, arising from environmental reactivity, fate or bioactivity) has for the accuracy of environmental risk assessment. Our simulations demonstrate that conservation of bioactivity between species, especially in systems where physical and chemical heterogeneity interact to control the partitioning of parent and transformation products, can represent a critical yet underappreciated component of JBA and subsequent environmental risk.
Relative to parent compounds, bioactivity in transformation products and mixtures remains an emerging area of research and is an integral aspect of any transition to systems analysis driven by biological endpoints. Relative environmental risk is a function of yield, potency and persistence; during assessment, unexpectedly high values for any of these parameters represent opportunities for significant underestimation of environmental risk [23]. Accurate understanding of the relationships between these key controls, particularly through efforts that develop insight into potency across biological endpoints, would build more confidence in the effectiveness of any particular environmental risk assessment. Similarly, we demonstrate that, for instances of atypical reaction characteristics and pathways, computational models can play an important role in directing research effort (for example, planning a field campaign) and improving our characterization of the environmental implications of contaminant discharges at the systems scale.

Methods

Numerical model formulation and simulation.

To quantify the potential impact of product-to-parent reversion on TBA metabolite persistence, we conducted numerical simulations of 17α-TBOH transport and fate in a stream-hyporheic system and compared cases with and without reversion. The dominant transport processes simulated are advection, dispersion and transient storage in the hyporheic zone [24]. Key assumptions of this model include a well-mixed stream and hyporheic zone at each spatial step, an exponential residence time distribution in the hyporheic zone, and no downstream transport in the hyporheic zone. We constructed a modified version of this model to represent two interacting species, similar to recent work on the resazurin-resorufin tracer [25]. Our 1-D transient storage model simulates the transport and fate of 17α-TBOH and its primary photoproduct, hydroxy-17α-TBOH. A version of the computer code used for the simulations may be obtained by contacting the corresponding author.

For our simulations, we selected physical parameters to represent Fourmile Creek near Ankeny, Iowa, USA, a USGS Field Laboratory for the study of contaminants of emerging concern [26]. The physical system is representative of a third-order stream in the Midwestern United States. We assumed a wide, rectangular, plane-bed stream with a constant discharge of 500 l s⁻¹ (annual average discharges of 535 to 4,989 l s⁻¹ for water years 2007-2013 at USGS gauge 05485605), a longitudinal slope of 0.1% (the average gradient for the first 3.1 km downstream of the USGS gauge site on Fourmile Creek) and a Darcy-Weisbach friction factor of 0.05. We assigned a longitudinal dispersion of 10⁻⁸ m² s⁻¹. We used a 4-point implicit numerical solution scheme to iterate to a steady-state normal depth and velocity for the network [27]. We specified a hyporheic cross-sectional area of 1 m², compared with a stream cross-sectional area of 0.68 m² (2 m width, 0.34 m depth); this ratio of hyporheic to stream area (commonly As/A) is 1.5. We applied a hyporheic exchange rate of 0.036 h⁻¹ (10⁻⁵ s⁻¹). Both the hyporheic area and the exchange rate are within the ranges typically observed in studies using the transient storage model [28].
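The quoted geometry (0.34 m depth and 0.68 m² area at 2 m width) can be recovered from the stated discharge, slope and friction factor with a wide-channel Darcy-Weisbach normal-depth calculation. The sketch below uses a simple fixed-point iteration as a stand-in for the 4-point implicit scheme the authors used; the wide-channel approximation (hydraulic radius ≈ depth) is an assumption.

```python
import math

Q, w = 0.5, 2.0               # discharge (m^3/s) and channel width (m)
S, f, g = 0.001, 0.05, 9.81   # slope, Darcy-Weisbach friction factor, gravity

h = 0.5                       # initial guess for normal depth (m)
for _ in range(50):           # fixed-point iteration, wide channel (R ~ h)
    v = math.sqrt(8 * g * h * S / f)   # Darcy-Weisbach: v = sqrt(8 g R S / f)
    h = Q / (w * v)                    # continuity: Q = v * w * h

print(f"depth ~ {h:.2f} m, velocity ~ {v:.2f} m/s, area ~ {w * h:.2f} m^2")
# -> ~0.34 m, ~0.73 m/s and ~0.68 m^2, matching the geometry quoted above
```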
We simulated a constant-rate input of 5 ng l⁻¹ of 17α-TBOH to the stream at the upstream end of the model domain. This loading is representative of a low-concentration, sustained input due to, for example, leaching from field-applied manure on an agricultural landscape or a leaky manure storage pond. Forward and reversion rate constants were estimated using the photolysis and reversion data presented in Qu et al. [1]. The reversion rate from hydroxy-17α-TBOH to 17α-TBOH was assumed constant in both the stream and the hyporheic zone, with a rate constant of 0.02 h⁻¹. The peak photodegradation rate constant for 17α-TBOH was simulated as 2 h⁻¹; no photodegradation was assigned to hydroxy-17α-TBOH. The photodegradation rate was simulated as time-dependent, using a half sine wave to assign a value during a 12-h photoperiod (06:00 to 18:00), with the rate set to zero outside the photoperiod [29]. We assumed a constant photodegradation rate through the well-mixed water column, given the shallow stream depths simulated; in other systems, light penetration through the water column may be controlled by suspended solids concentrations or by changes in solar loading in space (for example, shading) and through time (for example, cloud cover). On the basis of the observations in Qu et al. [1], photolysis of 17α-TBOH was assumed to yield 90% revertible products (that is, hydroxy-17α-TBOH) and 10% other products not subject to reversion and assumed non-bioactive (for example, species with a substantially modified structure, such as the 11,12-dialdehyde product we previously reported [15]). All other parameters were assumed constant in space and time to isolate the effect of the reversion process on the results.

The model was run continuously until a 24-hour steady-state oscillation was achieved, defined by a change in 24-h peak values of less than 1% between subsequent 24-h cycles at all spatial locations; this spin-up period was at least 480 h for all simulations. Daily maximum and minimum values were extracted from a 24-hour period at this dynamic steady state, and daily average values were calculated as the arithmetic average over a 24-hour period at this dynamic steady state. Model variants were completed with and without reversion. The model was solved using a Crank-Nicolson solution scheme with spatial steps of 100 m over a 200-km domain and temporal steps of 60 s, after Runkel [30]. Our model improves the representation of physical and chemical processes compared with the de facto baseline model used for the regulation of 17α-TBOH (photolysis in the stream water only; no hyporheic exchange; no product-to-parent reversion): it includes product-to-parent reversion in both the stream and the hyporheic zone, and time-variable photolysis in the stream. Other attenuation processes (for example, sorption, biodegradation) were not simulated because sufficient data are not available to characterize these processes for 17α-TBOH and its major photoproduct, hydroxy-17α-TBOH. As such, the model represents a system where these attenuation processes are minimized, a worst-case scenario for risk from TBA metabolites. While this model increases the realism of the suite of interacting processes that control the transport and fate of TBA metabolites in the environment, we acknowledge that the assumptions of spatially homogeneous reaction rates and the omission of competing attenuation pathways (for example, sorption-desorption, biodegradation, dilution) are limitations of this framework. At this time, there are no published studies characterizing pathways for hydroxy-17α-TBOH other than photolysis.
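A minimal sketch of the two-species transient storage model described above is given below, assuming an explicit upwind scheme rather than the Crank-Nicolson solver the authors used. Parameter values are taken from the Methods; the tiny longitudinal dispersion (10⁻⁸ m² s⁻¹) is neglected, and the sketch is meant only to illustrate the coupled advection, exchange, photolysis and reversion bookkeeping, not to reproduce the published results exactly.

```python
import numpy as np

# Domain and numerics (values from the Methods)
L, dx, dt = 200_000.0, 100.0, 60.0          # m, m, s
x = np.arange(0.0, L + dx, dx)

# Physical parameters
Q, A, As = 0.5, 0.68, 1.0                   # discharge (m^3/s), areas (m^2)
v = Q / A                                   # advective velocity (m/s)
alpha = 1e-5                                # stream-hyporheic exchange (1/s)

# Reaction parameters
k_rev = 0.02 / 3600.0                       # reversion rate (1/s), everywhere
k_peak = 2.0 / 3600.0                       # peak photolysis rate (1/s)
yield_rev = 0.9                             # revertible fraction per photolysis

def k_photo(t):
    """Half-sine photolysis rate over the 06:00-18:00 photoperiod."""
    hour = (t / 3600.0) % 24.0
    return k_peak * np.sin(np.pi * (hour - 6.0) / 12.0) if 6.0 <= hour <= 18.0 else 0.0

# State: parent (P) and photoproduct (H) in stream; Ps, Hs in hyporheic zone
P, H = np.zeros_like(x), np.zeros_like(x)
Ps, Hs = np.zeros_like(x), np.zeros_like(x)
C_in = 5.0                                  # constant parent input (ng/L)

t, t_end = 0.0, 480 * 3600.0                # spin-up period from the text
while t < t_end:
    kp = k_photo(t)
    # explicit upwind advection in the stream (Courant number ~0.44)
    P[1:] -= v * dt / dx * (P[1:] - P[:-1]); P[0] = C_in
    H[1:] -= v * dt / dx * (H[1:] - H[:-1]); H[0] = 0.0
    # first-order exchange; hyporheic change is scaled by A/As
    dP = alpha * dt * (P - Ps); dH = alpha * dt * (H - Hs)
    P -= dP; Ps += dP * A / As
    H -= dH; Hs += dH * A / As
    # reactions: photolysis in the sunlit stream only; reversion everywhere
    photo = kp * dt * P
    P += k_rev * dt * H - photo
    H += yield_rev * photo - k_rev * dt * H
    Ps += k_rev * dt * Hs; Hs -= k_rev * dt * Hs
    t += dt

i50 = int(50_000 / dx)
print(f"instantaneous in-stream 17a-TBOH at 50 km: {P[i50]:.3f} ng/L")
```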
We included only processes for which rate constants and controls are characterized in the literature for both 17α-TBOH and hydroxy-17α-TBOH. Published partitioning coefficients for 17α-TBOH include the linear distribution coefficient (Kd; 2.2-41.1 l kg⁻¹), which shows positive relationships with the organic content, pH, clay fraction and cation exchange capacity of the soil [31-33]. Additional available partitioning coefficients include the organic carbon-normalized (log Koc; 2.77 ± 0.12), octanol-water (log Kow; 2.72 ± 0.02), hexane-water (log Khw; -0.114 ± 0.006), chlorophyll-water (log Kchw; 3.36 ± 0.01), cyclohexane-water (log Kcyw; 0.39 ± 0.07) and toluene-water (log Ktw; 1.987 ± 0.01) partitioning coefficients [31,33]. First-order biodegradation rates for 17α-TBOH were recently reported as 0.0034 h⁻¹ at 5°C and 0.0071 to 0.013 h⁻¹ at 20°C in aerobic batch microcosms [31]. Using these values, we calculate a θ for the Arrhenius equation ranging from 1.038 to 1.072. For typical stream temperatures of 18°C, the biodegradation rate constants are 0.0066-0.011 h⁻¹ (0.33-0.58% of the peak forward reaction rate; 33-58% of the reversion rate). Assuming hyporheic temperatures of 12°C, the biodegradation rate constants are 0.0053-0.0077 h⁻¹ (0.26-0.38% of the peak forward reaction rate; 26-38% of the reversion rate). These values indicate the potential rates of other processes controlling 17α-TBOH transport and fate, but their applicability to our system is limited: they were derived from aerobic mesocosms, and we do not expect them to apply in typically anaerobic hyporheic zones. To our knowledge, no further data are available on the dependence of biotransformation rates on pH, redox conditions or other controls for 17α-TBOH. Our model omits inflows and outflows of water along the domain. Evapotranspiration from the stream could enrich concentrations, though we expect this effect to be minimal compared with the downstream flux of water in the stream. Finally, we note that gross losses of water from the system to regional groundwater would not affect in-stream concentrations, but would reduce the mass flux of 17α-TBOH and hydroxy-17α-TBOH through the stream and hyporheic zone.

Environmental samples of 17α-TBOH and hydroxy-17α-TBOH for model validation are not currently available. Samples collected before the discovery of the product-to-parent reversion pathway are extremely limited in their usefulness because interconversion occurs constantly during sample collection and processing, limiting the ability to draw meaningful conclusions about field concentrations of 17α-TBOH and hydroxy-17α-TBOH. Indeed, standardized methods to analyse trace concentrations in complex matrices, while looking for hydroxylated products and accounting for or eliminating artifacts from reversion, do not currently exist. Furthermore, sampling is complicated by the continuous interaction of the multiple species between sample collection and processing: transport and storage in the dark would be subject to reversion, while sample preparation in the light (as on a laboratory bench) would be subject to photolysis.
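The temperature scaling used for the biodegradation rates above follows the Arrhenius-type relation k(T2) = k(T1) × θ^(T2 − T1). The sketch below applies the quoted θ values to the 20°C rates and approximately recovers the 18°C and 12°C ranges given in the text; pairing the low and high rates with the low and high θ values, respectively, is an assumption on our part.

```python
# k(T2) = k(T1) * theta**(T2 - T1), with theta values quoted in the text
for k20, theta in [(0.0071, 1.038), (0.013, 1.072)]:
    k18 = k20 * theta ** (18 - 20)   # typical stream temperature
    k12 = k20 * theta ** (12 - 20)   # assumed hyporheic temperature
    print(f"k(20C)={k20:.4f}/h -> k(18C)={k18:.4f}/h, k(12C)={k12:.4f}/h")
# -> roughly 0.0066-0.0113 /h at 18C and 0.0053-0.0075 /h at 12C,
#    close to the 0.0066-0.011 and 0.0053-0.0077 /h ranges quoted above
```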
Calculation of JBA for representative cases. To assess the influence of reversion on total bioactivity in the network, we apply a conceptual framework where JBA equals the sum of relative potency multiplied by parent or product concentrations, calculated as

JBA = B17a × C17a + BOH × COH,

where B is a bioactivity coefficient describing the relative bioactivity of each compound compared with a baseline (bioactivity per unit concentration), assumed here to be 17a-TBOH, and C is the simulated 24-h mean concentration of each compound normalized to the input concentration of 17a-TBOH. In this study, we use the subscript 17a for the parent (17a-TBOH) and OH for the product (hydroxy-17a-TBOH). This framework is similar to that used for oestrogenic compounds, where oestrogenicity is commonly reported as additive 17b-estradiol equivalents34. At this time, no standard scale akin to oestrogenicity has been established for androgens, nor are compound-specific bioactivities widely calculated for such compounds. In our study, we consider the de facto regulatory assumption as a baseline case (Case 1: B17a = 1; BOH = 0). As relative bioactivities are as yet unquantified for TBA metabolites, and to extend our results to other compounds with reversible transformation, we consider four additional cases: Case 2, parent more bioactive than product (for example, 17a-TBOH and hydroxy-17a-TBOH; B17a = 1; BOH = 0.3); Case 3, equal bioactivity of parent and product (B17a = 1; BOH = 1); Case 4, product more bioactive than parent (for example, reduction of estrone to 17b-estradiol or deconjugation of glucuronides to free steroids; B17a = 0.3; BOH = 1)35; and Case 5, only the product bioactive (B17a = 0; BOH = 1). These calculations are designed to explore downstream bioactivity exposure as a function of the relative bioactivity of identifiable compounds, one of the emerging cases that must be considered in risk management for compounds' bioactive products.
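A minimal sketch of the JBA calculation across the five bioactivity cases; the input concentrations here are arbitrary placeholders, not model output.

```python
# (B_17a, B_OH) bioactivity coefficients for Cases 1-5:
CASES = {1: (1.0, 0.0), 2: (1.0, 0.3), 3: (1.0, 1.0),
         4: (0.3, 1.0), 5: (0.0, 1.0)}

def jba(c_17a, c_oh, case):
    """Total bioactivity JBA = B_17a*C_17a + B_OH*C_OH for one case."""
    b_17a, b_oh = CASES[case]
    return b_17a * c_17a + b_oh * c_oh

# Hypothetical normalized 24-h mean concentrations:
c_17a, c_oh = 0.6, 0.3
for case in sorted(CASES):
    print(f"Case {case}: JBA = {jba(c_17a, c_oh, case):.2f}")
```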
8,919.4
2015-05-08T00:00:00.000
[ "Biology" ]
Determination of diclofenac concentrations in human plasma using a sensitive gas chromatography mass spectrometry method Background A gas chromatography mass spectrometry (GCMS) method for the determination of diclofenac in human plasma has been developed and validated. Results This method utilizes hexane, which is a relatively less toxic extraction solvent compared to heptane and benzene. In addition, phosphoric acid and acetone were added to the samples as deproteination agents, which increased the recovery of diclofenac. These revised processes allow clean extraction and near-quantitative recovery of analyte (approx. 89–95 %). Separation was achieved on a BP-1 column with helium as carrier gas. The molecular ion peaks of the indolinone derivatives of the diclofenac ion (m/z 277) and the internal standard, the 4-hydroxydiclofenac ion (m/z 439), were monitored by a mass-selective detector using selected ion monitoring (SIM) mode. The linear range for the newly developed and highly sensitive assay was 0.25–50 ng/mL. The detection and lower quantifiable limits were 0.125 and 0.25 ng/mL, respectively. The inter-day and intra-day coefficients of variation for high, medium and low quality control concentrations were less than 9 %. The robustness and efficacy of this sensitive GCMS method were further demonstrated by using it for a pharmacokinetic study of an oral dosage form of diclofenac, 100 mg of modified-release capsules (Rhumalgan XL), in human plasma. Conclusions This method is rapid, sensitive, specific, reproducible and robust, and offers improved sensitivity over previous methods. This method has considerable potential to be used for detailed pharmacokinetic, pharmacodynamic and bioequivalence studies of diclofenac in humans. GCMS methods have been a favored choice in the past, and many derivatisation reagents have been tried and tested. Borenstein et al. used pentafluoropropionic anhydride (PFPA) as a derivatising agent, with a lower limit of quantification (LOQ) of 1 ng/mL and a 95 % recovery [30]. Choi et al. used PFPA together with a 1000:2:3 (v/w/w) mixture of N-methyl-N-trimethylsilyltrifluoroacetamide (MSTFA), ammonium iodide (NH4I) and dithioerythritol (DTE) as the derivatisation reagent. With this method the LOQ was 0.5 ng/mL and the recovery approximately 97 % [32]. Yilmaz et al. described a method where MSTFA was used as the derivatising agent (silylating reagent), and the hydroxyl group of diclofenac was O-silylated. Here the LOQ was 5 ng/mL with a recovery of about 96 % [31]. In our work, PFPA was chosen as the derivatising agent because it gave the best sensitivity and maximum recovery. HPLC-UV methods have been reported to measure plasma diclofenac in the range ca. 10–100 ng/mL [18][19][20]. Plasma matrix and other diclofenac metabolites are also known to cause interferences in accurate diclofenac estimation in human matrices [29]. To ensure good specificity and reproducibility, lengthy and comprehensive sample preparation procedures are often required [16][17][18]. On the other hand, mass spectrometric methods offer potentially better precision, accuracy, sensitivity and recovery, with a detection limit of between 0.2–2 ng/mL [26,27,30,33]. The reported mass spectrometric methods used benzene and heptane as extraction solvents. However, the sensitivity of these methods was not good enough to carry out a thorough and accurate lower-dose pharmacokinetic analysis of diclofenac in human plasma.
In the present study, we have modified existing methods [29][30][31], introducing hexane, acetone and sodium bicarbonate to develop a more sensitive, specific and reproducible method for the determination of diclofenac in human plasma. Having developed and validated a method for the quantification of diclofenac in plasma, we sought to demonstrate a proof-of-concept application. For this purpose, plasma samples were obtained from 30 volunteers who had been given an oral dosage of 100 mg of diclofenac sodium (Rhumalgan XL 100 mg modified-release capsules). Human plasma samples were analysed between 0 and 12 h to evaluate the pharmacokinetic parameters of diclofenac. Apparatus and assay conditions GCMS was performed with a Hewlett Packard model 6890 Gas Chromatograph (GC) fitted with a 6890 autoinjector for pulsed splitless injection, coupled to a model 5973 Mass Selective Detector (MSD) (Agilent Technologies, USA). Separation was achieved using a BP-1 fused silica capillary column (15 m × 250 µm × 0.25 µm). Helium (99.95 %, BOC Gases, Surrey, UK) was used as a carrier gas at a flow-rate of 1.2 mL/min. The injection volume was 2 µL. The syringe size was 10 µL. Pulse pressure and pulse time were 20 psi and 0.5 min, respectively. Total run time was 14.5 min. Injector temperature was 280 °C. The initial oven temperature was 150 °C, whilst the final oven temperature was 300 °C. The final high temperature purged residual materials from the column. The column temperature was initially held at 150 °C for 4 min (total run time 4 min), increased at 4 °C/min to 180 °C in 7.5 min and held there for 0.5 min (run time 12 min), then increased at 60 °C/min to 300 °C in 2 min and held there for 0.5 min (run time 14.5 min). Carrier gas flow-rate at the split vent was 54.3 mL/min. The injector was set to clean itself automatically by pre-injecting hexane. The mass selective detector was operated in the selected ion monitoring mode (with electron impact) and set at m/z [M+] 214, 242 and 277 and m/z 376 and 439 for the detection of diclofenac and 4-hydroxydiclofenac, respectively. The corresponding retention times of diclofenac and 4-hydroxydiclofenac were 7.5 min and 8.5 min, respectively (for a 100 ms dwell). The relative retention time of 4-hydroxydiclofenac to diclofenac was 1.13, with a standard deviation of 0.01. Solvent delay was 3 min, electron multiplier accelerating voltage 2494 V and electron ionisation energy 70 eV. Mass spectrometer source, quadrupole and transfer line temperatures were 230, 150 and 280 °C, respectively. The accelerating voltage was set at 3.5 kV. The system was controlled, and detector output data processed, using ChemStation version B.00.02 software. Preparation of standards A nominal 1 mg/mL stock solution was prepared by dissolving 10.78 mg of diclofenac in 10 mL of methanol (final concentration 1.078 mg/mL). Working solutions were prepared by serial dilutions of the stock solution. A 0.45 mg/mL stock solution of 4-hydroxydiclofenac was prepared by dissolving 4.5 mg of 4-hydroxydiclofenac in 10 mL MeOH. The concentration of the working internal standard solution of 4-hydroxydiclofenac was 0.0045 mg/mL. All solutions were stored at −20 °C. Preparation of 1 M phosphoric acid 33.3 mL of concentrated phosphoric acid solution of 85 % (w/v) strength was diluted with 500 mL deionised water to give a nominally 1 M solution. The bottle was labelled and an expiry date of 2 months from the date of preparation was applied. The solutions were found to be stable for this duration.
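The oven program above can be written compactly as a list of (ramp rate, target, hold) segments; the short sketch below simply verifies that the stated segment times add up to the 14.5 min total run time (the encoding is ours, not vendor software syntax).

```python
# (ramp rate C/min, or None for the initial hold; target C; hold min)
PROGRAM = [(None, 150.0, 4.0),   # initial hold at 150 C
           (4.0,  180.0, 0.5),   # ramp to 180 C, then brief hold
           (60.0, 300.0, 0.5)]   # ramp to 300 C for the final purge

def total_run_time(program):
    """Sum of hold times plus ramp durations (minutes)."""
    t, temp = 0.0, program[0][1]
    for rate, target, hold in program:
        if rate is not None:
            t += abs(target - temp) / rate   # ramp duration
        temp = target
        t += hold
    return t

print(total_run_time(PROGRAM))  # 14.5, matching the stated run time
```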
The solution was stored at room temperature. Preparation of 0.08 M sodium hydrogen carbonate solution Approximately 0.672 g of sodium hydrogen carbonate was weighed and diluted with 100 mL of HPLC-grade water, then stored at room temperature with an expiry date of 2 months. Sample preparation Appropriately labelled Pyrex glass tubes (100 × 13 mm) with screw caps were used. Plasma samples (1 mL) were added to the sample tubes. The internal standard, 4-hydroxydiclofenac (25 μL at 0.0045 mg/mL), was added, and the mixtures were acidified and vortex mixed with 1 M phosphoric acid (1 mL). Then, to all tubes, 1 mL of acetone was added for deproteination, followed by vortex mixing. Next, 5 mL of n-hexane was added, the tubes capped and the samples placed on a roller mixer for 15 min. All the tubes were centrifuged at 1400×g for 5 min at room temperature. The top hexane layer was transferred to glass screw-capped tubes, to which 1 mL of 0.08 M sodium hydrogen carbonate solution was added for basification and to increase partition of the drug into the aqueous layer. The tubes were capped and again placed on a roller mixer for 15 min and centrifuged at 3000×g for 5 min. The upper hexane layer was aspirated and discarded. Phosphoric acid (1 mL) was then added, followed by 5 mL of n-hexane. The tubes were then placed on a roller mixer for a further 15 min and centrifuged for 5 min at 3000×g, and the top hexane layer was transferred to glass tubes (100 × 13 mm, without screw cap). Hexane was then evaporated off under a stream of nitrogen with the heater block set at 35 °C. Derivatisation of the samples n-Hexane (975 μL) and PFPA (25 μL) were added to the dried residue and the tubes vortex mixed for 30 s. The samples were allowed to react for 30 min on a heater block at 35 °C and gently evaporated under a stream of nitrogen. The tubes were allowed to cool to room temperature and the derivatised compound was reconstituted in 80 μL of chloroform. The sample was transferred to autosampler vials and the GCMS autosampler programmed to inject 2 μL of the sample. Figure 1 shows the indolinone derivatives formed from derivatisation of diclofenac sodium and 4-hydroxydiclofenac using the derivatising agent PFPA. This newly developed analytical method was tested in a human pharmacokinetic study. Plasma samples obtained after administration of 100 mg of oral diclofenac sodium in participating volunteers were analysed to quantitate the plasma concentrations of the drug over a 12 h period. Kingston University research ethics committee approved the protocol and the volunteers provided informed written consent to participate. Calibration curve and analysis The working standard solutions for plasma analysis were made by serial dilution of the stock solutions to final concentrations of 10, 20, 40, 200, 400, 1000 and 2000 ng/mL in methanol. Calibration standards were obtained by spiking 25 µL of each of these standards into 975 µL of human plasma to produce concentrations of 0.25, 0.5, 1, 5, 10, 25 and 50 ng/mL. The samples for the standard curve were processed as described in the materials and methods section. The ratio of the peak area of diclofenac to that of the internal standard was plotted versus the concentration of diclofenac in the calibration standard, and a least-squares linear regression analysis was performed. Values of unknown plasma concentrations were determined from the regression line of this calibration curve.
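The calibration step reduces to an ordinary least-squares line of area ratio against nominal concentration, with unknowns read back off the line. A minimal sketch, using synthetic (hypothetical) area ratios rather than real instrument data:

```python
import numpy as np

conc  = np.array([0.25, 0.5, 1.0, 5.0, 10.0, 25.0, 50.0])   # ng/mL
ratio = np.array([0.011, 0.022, 0.043, 0.214,               # hypothetical
                  0.430, 1.071, 2.148])                      # area ratios

slope, intercept = np.polyfit(conc, ratio, 1)   # least-squares line

def back_calculate(r):
    """Unknown plasma concentration from its peak-area ratio."""
    return (r - intercept) / slope

print(back_calculate(0.55))   # ~12.8 ng/mL on this synthetic curve
```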
The working quality control solutions in methanol for plasma analysis were made by serial dilution of the stock solutions to obtain final concentrations of 10, 20, 44, 600 and 1600 ng/mL. Quality controls were obtained by spiking 25 µL of each of these standards into 975 µL of human plasma to produce concentrations of 0.25, 0.5, 1.1, 15 and 40 ng/mL. All methanolic solutions were stored at 2–8 °C with an expiry of 7 days, due to their limited stability in methanol, while plasma samples were stored at −20 °C. Intra- and inter-day precision and accuracy The accuracy and precision of the method were determined by assaying 0.5 mL aliquots of ethylene-diaminetetra-acetic acid (EDTA) human plasma fortified with four quality control (QC) samples of 0.5, 1.1, 15.0 and 40.0 ng/mL of diclofenac. These fortified samples were later assayed by GCMS. To assess the inter-assay precision and accuracy, samples were analysed on five separate days. To assess the intra-assay precision, these same QC concentrations were analysed and compared within a single day. Linearity, sensitivity and specificity The ratios of the diclofenac and 4-hydroxydiclofenac responses were plotted using GCMS ChemStation version 3.1 software to determine the linearity. A calibration point was rejected as an outlier if the back-calculated concentration for a calibrator (on the basis of the corresponding calibration curve) deviated by more than 15 % at any concentration covered by the calibration range, except at the lower limit of quantitation (LLOQ), where a deviation of 20 % was acceptable. A calibration curve was accepted provided it retained a minimum of four acceptable calibration levels. These criteria were based on the US Food and Drug Administration (FDA) "Bioanalytical Method Validation: Guidance for Industry" protocol [34]. The analytical method was able to determine diclofenac and 4-hydroxydiclofenac (internal standard) in plasma without significant interference from other endogenous compounds. The specificity of the validated assay procedure was shown by analysing six blank plasma samples from subjects not exposed to diclofenac; these samples were then spiked and recoveries calculated. Extraction recovery Absolute extraction recovery of diclofenac from human EDTA plasma was determined at three concentration levels: 1.1, 15 and 40 ng/mL. The area-ratio response of diclofenac to internal standard in the extracted sample, divided by the area-ratio response determined in an unextracted sample and multiplied by 100, gave the percent recovery. These samples were extracted as described earlier, except that the internal standard was added to the collected extract. The concentrations of the spiked plasma samples were calculated from the curve and compared to the theoretical values in order to calculate the extraction recovery. Stability The stability of diclofenac in human EDTA plasma was determined in processed sample extracts over a period of at least 24 h and also through three repeated freezing and thawing cycles. Stability of diclofenac in EDTA plasma to repeated freezing and thawing cycles Human EDTA plasma samples at concentrations of QCL = 1.1 ng/mL, QCM = 15 ng/mL and QCH = 40 ng/mL were subjected to three freezing and thawing cycles. The time span for the freeze/thaw cycles was 72 h, with each cycle lasting 24 h (time points 24, 48 and 72 h).
The results obtained after each freezing and thawing cycle were expressed as a percentage change from the results for QCL = 1.1 ng/mL, QCM = 15 ng/mL and QCH = 40 ng/mL in the intra-assay run (validation run 1; these samples were prepared fresh and had not been frozen). The test compound was considered to be stable if the percentage change from freshly prepared samples was within ±15 % of the nominally spiked level. Pharmacokinetic study The pharmacokinetic study set out to analyse diclofenac sodium in human plasma. For this study, plasma samples were obtained from 30 volunteers who had been given an oral dosage of 100 mg of diclofenac sodium (Rhumalgan XL ™ 100 mg modified-release capsules). Diclofenac concentrations in plasma were measured between 0 and 12 h (blood being collected every hour) in order to evaluate the pharmacokinetic parameters of diclofenac. Kingston University Faculty of Science Research Ethics Committee approved the protocol and the volunteers provided informed written consent to participate. The pharmacokinetic study was conducted according to the principles of the Declaration of Helsinki [35]. According to FDA guidelines for generic drug studies, the area under the curve (AUC) was calculated using a linear trapezoidal method, by applying non-compartmental data analysis. The method developed was used to investigate the plasma profile after oral dosing of diclofenac sodium 100 mg capsules in 30 healthy young male volunteers. Results and discussion Diclofenac sodium and 4-hydroxydiclofenac react with the derivatising agent PFPA to form indolinone derivatives, which upon electron ionisation gave rise to diclofenac ions at m/z 277, 242 and 214, whilst 4-hydroxydiclofenac gave ions at m/z 439 and 376. The mass spectra of diclofenac and the internal standard are shown in Fig. 3a, b. The derivatised indolinone ions for diclofenac and its internal standard fragment differently in the mass spectrometer, giving rise to two distinctly different sets of indolinone ions, as shown in Fig. 3a, b. Linearity, sensitivity and specificity During the validation study, calibration curves were generated over a diclofenac concentration range of 0.25–50 ng/mL. The method showed good sensitivity, specificity and linearity over this range, and the calibration plots were all linear, with a mean coefficient of determination of 0.9996 (see Table 2). To evaluate the curve, the observed responses for the individual standards were substituted back into the equation in order to calculate the predicted concentrations based on the calibration curve. The limit of quantitation was 0.25 ng/mL. Using a signal-to-noise ratio measure, the estimated limit of detection was 0.125 ng/mL. Furthermore, as can be seen from Table 1, the percentage recovery of diclofenac in spiked plasma samples was well within the accepted limit of 85–115 %, thereby showing no matrix effects. No notable peaks were seen in the region of interest when six blank plasma samples were analysed (see Table 1). The retention time region of the chromatogram where diclofenac and 4-hydroxydiclofenac eluted was clear in these samples and demonstrated the specificity of the validated analytical procedure. No interference from endogenous compounds or metabolites of diclofenac was found around the elution times; however, a matrix peak was observed at a different retention time (see Fig. 4).
Intra- and inter-assay accuracy and precision The inter-assay accuracy and precision were calculated from results obtained from quality control samples (N = 6) analysed at four concentrations (0.5, 1.1, 15 and 40 ng/mL of diclofenac in EDTA plasma, representing LLOQ, QCL, QCM and QCH, respectively) on three separate occasions, see Table 2. Recovery Our initial attempts gave a respectable recovery of the spiked drug at ca. 60 %. However, further experiments using acetone and sodium bicarbonate showed that the simple addition of these two reagents resulted in a dramatic increase in recovery of about 50 %. Final recoveries were calculated during validation runs, as shown in Table 2. Intra- and inter-day precision (coefficient of variation) ranged between 2.41–6.33 % and 7.51–8.87 %, respectively, while intra- and inter-day accuracy ranged between 88.98–95.82 % and 95.73–102.01 %, respectively. The percent recovery of the three QCs ranged between 89.86–94.76 %, see Table 2. Freezing and thawing cycles The QCL = 1.1 ng/mL samples gave mean results of 1.96, 2.02 and 1.88 ng/mL (n = 6), with corresponding percentage changes from freshly prepared samples of +9.49, +12.93 and +4.83 % for freezing and thawing cycles 1, 2 and 3, respectively. The QCM = 15 ng/mL samples gave mean results of 18.85, 18.97 and 19.19 ng/mL (n = 6), with corresponding percentage changes from freshly prepared samples of +3.42, +4.11 and +5.29 % for cycles 1, 2 and 3, respectively. The QCH = 40 ng/mL samples gave mean results of 51.05, 51.23 and 50.85 ng/mL (n = 6), with corresponding percentage changes from freshly prepared samples of +4.32, +4.69 and +3.91 % for cycles 1, 2 and 3, respectively. The data indicated that diclofenac was stable in EDTA plasma over at least three freezing and thawing cycles. The validation results indicated that the proposed method is more efficient than previous methods in detecting the non-steroidal anti-inflammatory drug diclofenac in human plasma, even at very low levels, when only ca. 1000 µL of human plasma is processed. Under the extraction and chromatographic conditions employed, there were no detectable interferences by endogenous materials present in human plasma. Three freezing and thawing cycles showed that diclofenac was stable in EDTA plasma. The average percent variations from freshly prepared EDTA samples, at the three concentration levels, were 9.1, 4.27 and 4.3 %, respectively. Many GCMS derivatisation reagents have been tried and tested in the past to obtain maximum sensitivity and recovery of diclofenac from human plasma. Choi et al. showed that when PFPA together with a 1000:2:3 (v/w/w) mixture of N-methyl-N-trimethylsilyltrifluoroacetamide (MSTFA), ammonium iodide (NH4I) and dithioerythritol (DTE) was used as the derivatisation reagent, the lower limit of quantification (LOQ) was 0.5 ng/mL. We used PFPA alone as the derivatisation reagent, with an improved LOQ of 0.25 ng/mL and a recovery similar to that of Choi's work [32]. Yilmaz et al. described a method where MSTFA was used as the derivatising agent (silylating reagent). Here, the LOQ was a factor of ten higher, at 5 ng/mL, with a recovery of about 96 % [31]. Others who have used PFPA as a derivatising agent include Borenstein et al., who achieved a lower limit of quantification (LOQ) of 1 ng/mL with a 95 % recovery, and Kadowaki et al., who reported an LOD (LOQ not reported) of 0.2 ng/mL and recoveries of ca. 83 %. However, they used benzene as an extraction solvent, which is more toxic [29].
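The recovery and freeze/thaw stability calculations reported above are simple ratio computations; a minimal sketch follows. The "fresh" mean of 1.79 ng/mL is inferred from the reported +9.49 % change and is therefore a hypothetical input, not a value stated in the paper.

```python
def percent_recovery(ratio_extracted, ratio_unextracted):
    """Absolute extraction recovery (%) from area-ratio responses."""
    return 100.0 * ratio_extracted / ratio_unextracted

def freeze_thaw_change(mean_cycle, mean_fresh, limit=15.0):
    """Percent change from freshly prepared QC samples, plus a
    pass/fail flag against the +/-15 % stability criterion."""
    change = 100.0 * (mean_cycle - mean_fresh) / mean_fresh
    return change, abs(change) <= limit

print(freeze_thaw_change(1.96, 1.79))   # ~(+9.5, True) for QCL cycle 1
```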
Electro-membrane extraction (EME) and pulsed electro-membrane extraction (PEME) coupled with HPLC gave an LOD of 10 ng/mL (an LOQ was not reported) [25]. Our method offers a considerable improvement over the above methods, with increased sensitivity (LOQ 0.25 ng/mL) and greater than 90 % recovery. Hexane was used in the sample preparation steps instead of heptane and benzene, as it is a relatively less toxic extraction solvent. Furthermore, the use of hexane resulted in a higher recovery of >90 %, compared with the published lower recoveries (around 83 %) for heptane and benzene [16][17][18][19][20]. In short, the developed and validated GCMS method for diclofenac satisfies all the criteria of the US FDA's "Bioanalytical Method Validation: Guidance for Industry" [34]. The method is very reliable and robust for the quantitative determination of diclofenac in human plasma. Assay application The pharmacokinetic study was conducted and applied to 30 volunteers who had been given an oral dosage of 100 mg of diclofenac sodium (Rhumalgan XL 100 mg modified-release capsules). The amount of diclofenac in human plasma was determined between 0 and 12 h. The mean plasma concentration–time curve is shown in Fig. 5. Diclofenac sodium is rapidly absorbed from the gut and undergoes first-pass metabolism [17,36]. Rhumalgan XL 100 ™ capsules give the peak plasma concentration (Cmax) at approximately 2.1 h (Tmax), where Tmax is the time at which Cmax was observed after administration. The total drug exposure, which is the area under the curve (AUC) over time, was calculated from the concentration–time data. According to FDA guidelines for generic drug studies, the area under the curve (AUC) was calculated by the linear trapezoidal method, applying non-compartmental data analysis using the PK Solver 2.0 software (as an Excel add-in); a brief sketch of this calculation is given below. Here, AUC0–∞ is the value of the AUC extrapolated to infinite time and AUC0–12 is the AUC of the time–concentration curve to the last measurable concentration at the 12 h time point. The mean values of the pharmacokinetic parameters estimated are shown in Table 3 [1][2][3][4][5]. Based on our new GCMS method, drug quality parameters like bioavailability and bioequivalence could be estimated accurately based on pharmacokinetic measures such as AUC and Cmax that are reflective of systemic exposure. In humans, the pharmacokinetics of diclofenac retention and absorption show high inter- and intra-subject variability [3,[5][6][7][9][10][11][12][13][14][15][16]. In light of this previously reported inter- and intra-subject variability, our method (although assayed on a relatively small sample of 30 subjects) seems to be especially valuable, as it showed very small variability and high reproducibility. A possible reason for this reduction in inter- and intra-individual variability compared to other methods may be the use of new extraction solvents such as hexane, along with phosphoric acid, acetone and sodium bicarbonate for increased deproteination, and PFPA as a robust and efficient derivatising agent. The newly developed and validated method could have a far-reaching impact in pharmacokinetic and bioequivalence studies of diclofenac sodium in human patients. The proposed method might be applied to other human and animal matrices in future studies for accurate quantitation of diclofenac. This new method will also be instrumental in any future drug studies to show bioequivalence between generic and innovator drug products.
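A minimal sketch of the non-compartmental AUC calculation: AUC0–12 by the linear trapezoidal rule, and the extrapolation AUC0–∞ = AUC0–tlast + Clast/λz. The concentration series and terminal rate constant below are hypothetical, and the extrapolation formula is the conventional one rather than a step spelled out in the paper.

```python
import numpy as np

t = np.arange(13.0)                      # hourly sampling times, 0-12 h
c = np.array([0, 2, 10, 35, 48, 42, 33,  # hypothetical plasma levels,
              26, 20, 15, 11, 8, 6.0])   # ng/mL

auc_0_12 = np.trapz(c, t)                # linear trapezoidal rule
lambda_z = 0.28                          # assumed terminal rate, 1/h
auc_0_inf = auc_0_12 + c[-1] / lambda_z  # conventional extrapolation
print(auc_0_12, auc_0_inf)
```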
Conclusions The developed and validated method for the determination of diclofenac in human plasma is rapid, sensitive, specific, reproducible and robust, and offers better sensitivity than previous methods. It utilizes hexane, a relatively less toxic extraction solvent than heptane and benzene, while phosphoric acid, acetone and sodium bicarbonate were used for increased deproteination. Owing to its very small variability and high reproducibility, the method has proved suitable for use in pharmacokinetic studies of diclofenac in human plasma, which demonstrates the possible adequacy of this assay for clinical studies.
5,499.2
2016-08-17T00:00:00.000
[ "Chemistry", "Biology" ]
Asymmetric effects of news through uncertainty † Bad news about future economic developments have larger effects than good news. The result is obtained by means of a simple nonlinear approach based on SVAR and SVARX models. We interpret the asymmetry as arising from the uncertainty surrounding economic events whose effects are not perfectly predictable. Uncertainty generates adverse effects on the economy, amplifying the effects of bad news and mitigating the effects of good news. Introduction Recently, many contributions have investigated the role of news shocks for business cycle fluctuations. News shocks are typically defined as exogenous anticipated changes in future economic fundamentals, mainly total factor productivity (TFP). Several works have provided the theoretical grounds for the old idea (Pigou (1927)) that changes in expectations about the future can affect the current behavior of consumers and investors and therefore can generate cyclical fluctuations; see, among others, Den Haan and Kaltenbrunner (2009), Jaimovich and Rebelo (2009), and Schmitt-Grohé and Uribe (2012). On the empirical side, a number of works have assessed the role of news shocks. A partial list of empirical contributions in this stream of literature includes Beaudry and Portier (2004, 2006), Sims (2011, 2012), and Kurmann and Otrok (2013). News shocks are typically found to play a role in generating macroeconomic fluctuations, although their relative importance varies across investigations. Common to all of those empirical works is the hypothesis that bad and good news have symmetric effects. Such a hypothesis is translated into the model through the assumption of linearity. In this paper, we relax that assumption and study whether there are any asymmetries in the transmission of news shocks. More specifically, we study whether bad and good news about future changes in TFP have different effects on the economy, and whether the size of the shock matters. There are several reasons which could explain an asymmetric transmission. We will discuss these below in detail. We contribute to the literature by using a modified version of the method recently proposed by Forni et al. (2022). 1 The approach, in essence, consists of a two-step procedure where (i) the news shock is identified in an informationally sufficient VAR and (ii) the estimated shock is used, together with some nonlinear function of it, as an exogenous variable in a VARX including a set of endogenous variables whose responses are of interest to us. By combining the (linear) impulse response functions of the VARX, asymmetries and nonlinearities of the transmission of news shocks can be estimated. The news shock is identified along the lines of Beaudry and Portier (2014). The nonlinear function we use in the VARX is the square of the news shock. When the quadratic effect of news is taken into account, the business cycle dynamics generated by news shocks appear more complex than usually believed. First, good (bad) news shocks have positive (negative) permanent effects on real economic activity variables, as already found in the literature. Second, squared news shocks produce a temporary downturn in economic activity. These two results imply that the response of output to positive and negative news is generally asymmetric: bad news shocks have larger effects in absolute value than good news shocks. The reason is that the effect of bad news shocks is exacerbated by the negative effect of the square term.
On the contrary, the negative effect of the square dampens the expansionary effect of good news. Finally, a higher sensitivity to bad news is also found for financial variables, like stock prices and credit spreads. As mentioned above, there can be several reasons that explain asymmetries in the effects of news. The political science literature has stressed that agents pay more attention to bad news than good news (Soroka (2006)). The reason can be the existence of a loss-aversion effect: agents are more concerned about losses than gains (see Kahneman (1979)). But it could also simply be that negative economic events have higher media coverage than positive events (Soroka (2012)). The literatures on news and on uncertainty have developed independently of each other. In this paper, we find that the squared news shock and a smoothed version of it have a high positive correlation with existing measures of uncertainty. 2 We embrace the view that the square term can be interpreted as a proxy for uncertainty endogenously arising from news, and its effects as uncertainty effects. Uncertainty acts as an amplification mechanism, creating asymmetries and nonlinearities in the transmission of news shocks. At the end of the paper, we use a very simple model of limited information to show how uncertainty can arise from news. Our story unfolds as follows. Agents receive news about economic events and act on the basis of the value of the expected shock (first-moment effect). News, due to limited information, generate uncertainty. The larger the event, the larger the uncertainty. Uncertainty generates a contractionary demand-type effect, possibly induced by more cautious behavior of the agents (second-moment effect). The two effects combined yield an asymmetry in the effects of news shocks, since uncertainty enhances the effects of bad news and mitigates the effects of good news. The papers most closely related to ours are Cascaldi-Garcia (2020) and Berger et al. (2020). The former assumes that news shocks generate uncertainty, via a stochastic-volatility model. Results are however different from ours: news shocks reduce macroeconomic and financial uncertainty in the medium run, but raise financial uncertainty in the short run, and the economic effects of news shocks are on average higher and more dispersed in periods of high financial uncertainty. The latter separates uncertainty shocks into a part related to contemporaneous changes and a part related to expected changes in volatility (basically, a news shock on second moments) and finds that the negative effects on real variables are caused by contemporaneous changes in volatility and not by changes in expected volatility. The remainder of the paper is structured as follows: Section 2 discusses the empirical model; Section 3 presents the results; Section 4 discusses the uncertainty channel; Section 5 concludes. Econometric approach Here we discuss the empirical model we employ to study asymmetries in the transmission of news shocks. The model We use a modified version of the method recently proposed by Forni et al. (2022). The method aims at estimating a nonlinear moving average representation of the economy where a shock of interest and a nonlinear function of it drive economic variables.
The model can be estimated using a two-step procedure where (i) the shock of interest is identified in an informationally sufficient VAR, 3 and (ii) the estimated shock is used, together with some nonlinear function of it, as an exogenous variable in a VARX which includes a set of variables of interest. By combining the (linear) impulse response functions of the VARX, nonlinearities and asymmetries of the shock of interest can be estimated. Let Y_t be a vector of m variables of interest and s_t the shock of interest, admitting the structural representation

Y_t = α(L) s_t + β(L) s_t² + B(L) u_t, (1)

where s_t is the news shock, B(L) = (I + B_1 B_0⁻¹ L + B_2 B_0⁻¹ L² + ...) B_0 is an m × m matrix of polynomials in the lag operator L, α(L) and β(L) are m × 1 vectors of polynomials in L, and u_t is a vector of structural shocks. The vector ε_t in equation (2) below is a vector of shocks orthogonal to s_t and s_t². The terms α(L) and β(L) represent the impulse response functions of the linear and the nonlinear term on Y_t. The total effect of a positive shock s_t = s̄ is IR(s̄) = α(L)s̄ + β(L)s̄², and the effect of a negative shock s_t = −s̄ is IR(−s̄) = −α(L)s̄ + β(L)s̄². Assuming that the term B(L)u_t is an invertible vector moving average, we can rewrite the above model as a VARX for Y_t in which s_t and s_t², with their lags, enter as exogenous variables:

A(L) Y_t = a(L) s_t + b(L) s_t² + ε_t, (2)

with α(L) = A(L)⁻¹ a(L) and β(L) = A(L)⁻¹ b(L). The impulse response functions to a news shock of size s̄ can then be obtained as A(L)⁻¹(a(L)s̄ + b(L)s̄²) for a positive shock and A(L)⁻¹(−a(L)s̄ + b(L)s̄²) for a negative shock. In order to estimate equation (2), an estimate of s_t is required. We assume that X_t is a vector of variables, possibly different from Y_t, which is informationally sufficient for s_t, that is, s_t can be obtained as a linear combination of current and past values of X_t. We include in the vector X_t the following variables: (log) TFP, 4 (log) stock prices, the Michigan Survey confidence index component concerning business conditions for the next 5 years, (log) real consumption of nondurables and services, the 10-year government bond, the spread between the 3-month Treasury Bill and the 10-year bond, the Moody's Aaa interest rate (AAA), the spread Aaa-Baa and the CPI inflation. 5 We then estimate the VAR and identify the news shock along the lines of Beaudry and Portier (2014). Precisely, we impose the following restrictions: (i) the news shock has no effect on TFP contemporaneously and (ii) has a maximal effect on TFP in the long run (48 quarters). Condition (ii) is equivalent to imposing that there are just two shocks affecting TFP in the long run: the innovation of TFP (the so-called "surprise" shock) and the news shock. 6 This identification scheme has become relatively standard in the news shock literature. 7 Having an estimate of the news shock, we estimate model (2) and the related impulse response functions. Notice that there could be a misspecification problem in the SVAR for X_t. If none of the variables in this SVAR is affected by the square term, then the model will be well specified and the shock well estimated (provided that the variables included are informationally sufficient for the shock). If on the contrary some variables are affected by the square term, the SVAR is misspecified. Despite this, the shock could be correctly estimated, as discussed in Debortoli et al. (2022). In the empirical section, we perform a simple exercise to verify whether this is the case.
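A small numerical sketch of how the two sets of VARX responses combine into total impulse responses; all coefficients below are made up for illustration and are not estimates from the paper.

```python
import numpy as np

H = 12
a = 0.8 * 0.7 ** np.arange(H)    # hypothetical linear-term coefficients a(L)
b = -0.5 * 0.6 ** np.arange(H)   # hypothetical squared-term coefficients b(L)
A = np.array([1.0, -0.5])        # hypothetical AR polynomial A(L) = 1 - 0.5L

def total_irf(s_bar, a, b, A, H):
    """Total response to a shock of size s_bar: the combined exogenous
    impulse a(L)*s_bar + b(L)*s_bar**2 propagated through A(L)**-1."""
    x = a * s_bar + b * s_bar ** 2
    irf = np.zeros(H)
    for h in range(H):
        irf[h] = x[h] + sum(-A[j] * irf[h - j]
                            for j in range(1, len(A)) if h >= j)
    return irf

pos = total_irf(1.0, a, b, A, H)    # good news
neg = total_irf(-1.0, a, b, A, H)   # bad news
print(pos[:4], -neg[:4])            # sign-flipped bad news is larger
```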
Asymmetric effects of news In this section, we report and discuss the empirical results. We start our analysis by estimating the effects of news shocks. We use quarterly US data from 1963:Q4 to 2015:Q2 to estimate a Bayesian VAR 8 with diffuse priors 9 and four lags. The news shock The news shock and its square exhibit very large values (more than two standard deviations larger than average) in seven quarters. In Figure 1, we focus on the squared news shock. Figure 2 shows the effects of the news shock on the variables in X_t. The impulse response function of TFP exhibits the typical S-shape which is usually found in the literature. Stock prices, E5Y, and the news variable jump on impact, as expected, while consumption increases more gradually. All interest rates fall on impact, albeit the effect is barely significant. All in all, the effects of the news shock are qualitatively very similar to those found in the literature. As discussed in the previous section, some of the variables appearing in the vector X_t could be affected by the squared term. In this case, the preliminary SVAR would be misspecified and the news shock potentially poorly estimated. To understand whether this is the case, we estimate the VARX using the same variables as in the SVAR. 10 If we find responses to the news shock which differ from those obtained in the SVAR, then the preliminary SVAR is misspecified. Figure 3 displays the results. The black solid lines represent the responses in the VARX, and the red dashed lines represent the responses in the SVAR. The responses are very similar, confirming the validity of the preliminary SVAR. The effects on macroeconomic variables The VARX we employ to study the effects of news on the economy includes (log) real GDP, (log) real consumption of non-durables and services, (log) real investment plus consumption of durables, (log) hours worked, CPI inflation, and the ISM new orders index. The estimated news shock and the squared news shock are used as exogenous variables. We organize the discussion as follows. First, we present the VARX results relative to the estimated impulse response functions to s_t and s_t² for s̄ = 1. Then, we focus on nonlinearities. Results are reported in Figure 4. The numbers on the vertical axis are percentage variations. The news shock, Figure 4 (left column), has a large, permanent, positive effect on real activity, with maximal effect after about 2 years. The results are in line with the findings of the literature. 11 The squared news shock (Figure 4, right column) has a significant negative effect on all variables on impact. The maximal effect on GDP is reached after four quarters and is around −1%. Afterward, the effect diminishes and vanishes after about 2−3 years. The effects of the square term are also sizable and significant for investment and hours, while the effects on consumption are somewhat milder and not significant. By inspecting the response of inflation, it is clear that square effects are demand-type effects, since both GDP and inflation significantly fall. Since, for s̄ = 1, the response to the squared term represents the asymmetry of the responses to positive and negative shocks, whenever the square term response is significant, the asymmetry is significant. Figure 5 plots the total response of economic variables to the news shock. Recall that the total responses are IR(s̄) for a positive shock and IR(−s̄) for a negative shock. We plot the responses to shocks of size s̄ = 1, that is, one standard deviation (first column), s̄ = 0.5 (second column) and s̄ = 2 (third column).
The solid line represents the mean response to a positive news shock, and the gray areas are the 68% credible intervals. The dashed red line represents the effects of a negative news shock with reversed sign (multiplied by −1), in order to ease the comparison in terms of magnitude between good and bad news. A positive news shock permanently increases real economic activity variables: GDP, consumption, investment, and hours worked. The responses however are quite sluggish. Indeed, except for consumption, the impact effects are zero. Inflation significantly falls and new orders increase. By inspecting the two lines, a clear asymmetry emerges: a bad news shock has larger short-run effects than a good news shock on real economic activity variables. Summing up, the impact effects are higher for bad news than for good news. Indeed, for negative shocks the effects of the square term enhance those of news. The contrary holds for positive shocks: the square term mitigates the expansionary effects of news. Interestingly, the result is different for inflation, since good news have larger effects than bad news. The asymmetry is amplified in the case of a large shock, s̄ = 2 (third column), and dampened in the case of a small shock, s̄ = 0.5 (second column). The larger the shock, the larger the asymmetry, since the square term becomes more important. Notice that in the series of squared news there are realizations that are as high as four standard deviations; in that case, the importance of the nonlinear component would be extremely high. Table 2 reports the variance decomposition. In particular, it reports the proportion of variance of the variables attributable to news shocks. This includes both the linear and the quadratic term. The shock has important effects in the medium and long run for GDP, consumption, investment, and hours. For these variables, the shock explains between 40% and 60% of the variance at horizons longer than 1 year. Notice that, in principle, the asymmetry could simply arise because of a different response of TFP to news. To make sure that this is not the case, we add TFP to the VARX and check the response. It turns out that the effect of the nonlinear term is essentially not significant (see the robustness section and Figure 9). This rules out the possibility that the effects are attributable to a different propagation of news on TFP. The effects on financial variables In order to analyze the effects of news on financial variables and uncertainty, we estimate an additional VARX including stock prices, the 3M T-Bill bond yield, the spread between Baa and Aaa corporate bonds, which may be regarded as a measure of the risk premium, the stock of commercial and industrial loans, and three indices of uncertainty, namely the extended VXO index of implied volatility in option prices (see Bloom (2009)), the macroeconomic uncertainty index 12 months ahead (denoted as JLN12), developed by Jurado et al. (2015), and the Ludvigson et al. (2021) real uncertainty index 12 months ahead (denoted as LMN R12). Results are reported in Figures 6 and 7. In Figure 6, the left column reports the effects of s_t and the right column the effects of s_t². Figure 7 reports the total effects for different magnitudes of the shock. We start by analyzing the effects of the linear and quadratic components in Figure 6. The linear news shock permanently increases stock prices and reduces uncertainty, the risk premium, and the T-bill. The squared term is, as for macroeconomic variables, contractionary.
It is interesting to notice that a positive shock to the squared term has significant positive effects on the three uncertainty indices, VXO, JLN12, and LMN R12. There is a close link between the squared news shock and uncertainty measures. We will come back to this result later on. Moving to the total effects reported in Figure 7, good news have a large, positive, and persistent effect on stock prices and significantly reduce the risk premium and the uncertainty indices. Bad news have the opposite effects (notice again that we report the response to a negative shock multiplied by −1): a persistent and significant reduction of stock prices and an increase in the risk premium and uncertainty. Again, a substantial asymmetry arises. Stock prices and the VXO react much more to bad news than to good news, and the risk premium reacts faster. The response of the T-Bill is different: this variable displays smaller effects for bad news than for good news. From the variance decomposition in Table 3, it can be seen that the shock is very important for stock price fluctuations, explaining around 40−50% of the variance. On the contrary, the shock plays a smaller role for the other variables. It is interesting to notice that more than 30% of the variance of the VXO is accounted for by the shock, while the percentage is somewhat smaller for the JLN12 measure (around 10%) and LMN R12 (around 15%). A sizable part of the existing measures of uncertainty is explained by news. Of course, this leaves the door open for the existence of an exogenous component of uncertainty that has nothing to do with news. Robustness checks We make three robustness checks. First, we check if there is any additional information coming from uncertainty useful for the identification of s_t, since Cascaldi-Garcia and Galvao (2021) show that news shocks are correlated with uncertainty shocks. Therefore, we repeat the analysis adding an uncertainty measure in the initial SVAR. The results, displayed in Figure 8, are very similar to those obtained in the baseline model. As a second check, we add TFP to the VARX. Impulse responses are displayed in Figure 9. The nonlinear term on TFP is virtually insignificant, and the responses of the other variables are almost identical to those in Figure 4. Thirdly, we use the absolute value of the news shock rather than the square as the nonlinear function. Figure 10 reports the results. The responses obtained with the absolute value are remarkably similar to those obtained with the square. The uncertainty channel In this section, we provide an interpretation of the squared term in terms of uncertainty. More specifically, we claim that the squared news shock represents the component of uncertainty driven by news. Thus, the effects associated with the nonlinear term can be interpreted as effects due to an increase in uncertainty associated with news. First, we provide evidence in favor of this interpretation. Second, we discuss a simple theoretical framework of limited information where the forecast error variance of macroeconomic variables, that is, uncertainty, depends on the square of the news shock. The main idea is that news about economic events, whose effects are not perfectly predictable, creates uncertainty.
Evidence To explore the relation between squared news and uncertainty, we first compute the correlation of the squared shock, and of a smoothed version of it, with a number of uncertainty measures used in the literature, namely (i) the extended VXO index of implied volatility in option prices (Bloom (2009)); (ii) the Jurado et al. (2015) macroeconomic uncertainty indexes 1, 3, and 12 months ahead (denoted, respectively, JLN1, JLN3, and JLN12 henceforth); and (iii) the Ludvigson et al. (2021) financial and real uncertainty indexes 1, 3, and 12 months ahead (denoted, respectively, LMN F and LMN R, with the number referring to the month). Table 4 reports the correlations. The first column refers to the squared shock, while the second column refers to a centered 5-quarter moving average of the squared shock. For the squared shock, correlations range from 0.24 (VXO) to 0.40 (JLN3 and JLN12), while for the moving average correlations range from 0.36 (VXO) to 0.67 (JLN3 and JLN12). The squared news shock is positively correlated with all measures of uncertainty, the correlation being particularly high for the JLN measures. Figure 11 plots the (standardized) 5-quarter moving average of the squared news shock (red solid line) together with the (standardized) JLN12 index (blue dotted line), top panel, with the (standardized) LMN R12 measure (blue dotted line), middle panel, and with the (standardized) LMN F12 measure (blue dotted line), bottom panel. The result is striking: the variables closely track each other and display several coincident peaks. Notice that the estimation of the news shock is completely independent of uncertainty, since no uncertainty measure is included in the first estimation step. This is in line with the results of the VARX for financial variables, Figure 6. There, we saw that a positive squared news shock generates a positive conditional comovement among uncertainty measures. Here, we see that the positive comovement between squared news and uncertainty also arises unconditionally. To support the interpretation that the effects of shocks to the squared news capture the effects of shocks to uncertainty, we perform the following exercise. We regress the squared term on the current value, one lag and one lead of the uncertainty measures discussed above. We then repeat the VARX estimation replacing the squared shock with the residual obtained in this regression. If our claim is correct, when the component related to uncertainty is removed from the squared term, its effects should disappear. Figure 12 displays the results. The left column of the figure reports the effects obtained using the residual term together with the responses in the baseline model (red dashed lines). The effects obtained with the residuals are much smaller than those obtained in the baseline model and are not significant. This provides support to the view that the effects of the squared term are closely related to uncertainty. As a final check, we identify an uncertainty shock as the first Cholesky shock in a VAR including, in order, the VXO index, GDP, consumption, investment, hours worked, CPI inflation, and new orders, and compute the related impulse responses. 12 We then repeat the estimation with the same specification, but adding the news shock and the squared news shock as the first and the second variable in the VAR, respectively. The uncertainty shock now becomes the third shock in the Cholesky decomposition.
If the standard uncertainty shock has nothing to do with news and squared news, the impulse response functions in the two models, with and without s_t and s_t², should be very similar. It turns out that the impulse responses are significantly different (see Figure 13). When the uncertainty shock is cleaned from the effects of news, its effects basically vanish. We interpret this as meaning that a large part of the uncertainty shock is associated with news and squared news. We repeat the same exercise replacing the VXO with LMN R12. Results (see Figure 14) point to the same conclusion. When news and squared news are included, the effects of the uncertainty shocks are substantially mitigated, meaning that, to some extent, uncertainty endogenously depends on news. The evidence suggests the existence of a close link between news shocks and uncertainty and supports the view that the effects associated with the squared term can be interpreted as effects generated by an increase in uncertainty arising from news. 13 In the next section, we discuss a simple framework which allows us to interpret the effects associated with s_t² discussed in the empirical section as effects due to shifts in uncertainty triggered by news. A simple informational framework Here we discuss a simple illustrative framework of limited information flows which can help in understanding why uncertainty can arise from news. This is by no means intended to be a full economic model, since we do not model how agents react to news or the channels through which uncertainty affects agents' decisions. Still, we believe it can be useful, since it establishes a potential "uncertainty channel" of news, and this channel can be at the root of the nonlinearities documented in the empirical part. Let us assume that TFP, a_t, follows

a_t = a_{t−1} + ε_{t−1},

where ε_t ∼ N(0, σ²_ε) is an economic shock with delayed effects. 14 At time t, agents have imperfect information and cannot observe ε_t, but rather have access to news that report the events underlying the shock, for instance, natural disasters, scientific and technological advances, institutional changes, and political events. 15 At each point in time, agents form an expectation s_t = E_t ε_t of the true shock. 16 The shock and the expectation, however, because information is imperfect, do not coincide. We assume that there is a random factor v_t that creates a wedge between the two:

ε_t = s_t v_t.

The shock v_t has the following properties: the conditional mean is E_t v_t = 1, so as to satisfy E_t ε_t = s_t, and the conditional variance is E_t (v_t − 1)² = σ²_v, that is, constant. The above equation can be rewritten as ε_t = s_t + s_t(v_t − 1), so that ε_t is made up of the sum of two components: the observed component s_t and an unobserved component which is proportional to s_t. This multiplicative noise structure, while common in engineering and control systems, to our knowledge has not been employed before in the limited-information literature. Typically, an additive structure is used, mainly for the purpose of analytical tractability. However, we find the multiplicative structure particularly attractive, since it can describe several relevant economic situations. A few examples can provide a better intuition. Suppose that a diplomatic crisis takes place at time t and is reported by the media.
The crisis can lead to a war (ε_t = −1) or not (ε_t = 0) with equal probabilities, depending on the president's decision. The decision is taken in t but, for national security reasons, made public only in t + 1. So the expected shock is s_t = −0.5. The noise, which captures the uncertainty surrounding the president's decision, will be v_t = 2 in case of war and v_t = 0 otherwise, with equal probabilities. As a second example, suppose the agents observe that a big bank goes bankrupt. The value of the shock, however, is unknown because with some probability, say 0.5, there will be a domino effect and other banks will go bankrupt (ε_t = −3), but with probability 0.5 the government will intervene to rescue them (ε_t = −1). The government's decision is taken in t but agents do not know it, so the expected shock is s_t = −2 and v_t can be either 1.5 or 0.5 with equal probabilities. In this simple informational framework, the TFP forecast error is

a_{t+1} − E_t a_{t+1} = ε_t − s_t = s_t(v_t − 1).

Following Jurado et al. (2015), we define uncertainty as the conditional variance of the forecast error, which is

U_t = E_t [s_t²(v_t − 1)²] = s_t² σ²_v.

The conditional variance, or uncertainty, depends on the squared expected shock. 17 Going back to the previous examples, the bigger the war in case of going to war, or the larger the consequences of the domino effect if the government does not intervene, that is, in both cases the larger the value of ε_t in absolute value, the larger is uncertainty, since the larger is s_t in absolute value. Through the lens of this interpretative framework, therefore, the effects of s_t² on the economy, not modeled here, can indeed be interpreted as attributable to uncertainty. Can the interpretation be extended to our empirical findings? Can we interpret the asymmetries as arising from the uncertainty generated by news? The answer, essentially, depends on whether the news shock identified in Section 2 can be interpreted as s_t. It is easy to see that this is the case. The model representation of a_t and s_t in terms of s_t and the forecast error u_t = s_{t−1}(v_{t−1} − 1) is

(Δa_t, s_t)′ = [[L, 1], [1, 0]] (s_t, u_t)′,

that is, Δa_t = s_{t−1} + u_t. Notice that (i) the shock s_t satisfies the identifying restrictions used in the empirical model: positive long-run effect and zero impact effect on a_t; and (ii) the representation above is invertible, that is, it can be estimated with a SVAR. This means that under this informational assumption s_t is exactly the news shock identified in the SVAR of Section 2. As a result, the effects of the squared term in our empirical findings can be interpreted as effects attributable to uncertainty arising from news.
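The two examples can be verified numerically: enumerating the outcomes reproduces the stated expected shocks and shows that uncertainty equals s_t² σ²_v, with the larger event carrying the larger uncertainty.

```python
import numpy as np

def conditional_moments(eps_outcomes, probs):
    """Expected shock s_t = E_t[eps_t] and uncertainty
    U_t = E_t[(eps_t - s_t)**2], which equals s_t**2 * sigma_v**2
    when eps_t = s_t * v_t."""
    eps = np.asarray(eps_outcomes, dtype=float)
    p = np.asarray(probs, dtype=float)
    s = float(p @ eps)
    u = float(p @ (eps - s) ** 2)
    return s, u

# Diplomatic crisis: war (eps = -1) or peace (0), equal odds.
print(conditional_moments([-1.0, 0.0], [0.5, 0.5]))   # (-0.5, 0.25)
# Bank failure: domino effect (-3) or bailout (-1), equal odds.
print(conditional_moments([-3.0, -1.0], [0.5, 0.5]))  # (-2.0, 1.0)
```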
It is important to stress that there might be other explanations for why bad news has larger effects. For instance, several works have pointed out that agents tend to react more to bad news than to good news (see Soroka, 2006), or that bad news simply receives larger media coverage (see Soroka, 2012). These explanations and ours are of course not mutually exclusive. Our results also have important implications for DSGE modeling. Second-moment effects, related to changes in conditional volatility, appear in higher-order terms of the approximation of DSGE models; see Fernández-Villaverde et al. (2015a, 2015b). Here we show that, at least for the case of the news shock, these terms are important from an empirical point of view, stressing the importance of going beyond linearization to correctly describe fluctuations in macroeconomic variables. Simulations Now that we have a simple model in which uncertainty is generated by the squared news shocks, we come back to our econometric methodology. In this section, we ask the following question: is our method able to detect the first- and second-order effects of the news shock, as generated by the model? We use two simulations to assess our econometric approach. The first simulation is designed as follows. Consider the simple model of Section 4.2. Assume that $[v_t \; s_t]' \sim N(0, I)$. 18 Under the assumption $a_t = \varepsilon_{t-1}$, and recalling that $s_t = E_t a_{t+1}$ and that $u_t = s_{t-1} v_{t-1}$ is the forecast error (note that here $v_t$ is normalized to have zero mean, so the wedge reads $\varepsilon_t = s_t(1 + v_t)$), the invertible representation for $a_t$ is $a_t = s_{t-1} + u_t$. We assume that there are two variables, $z_t = [z_{1t} \; z_{2t}]'$, following an MA process, which are affected by $s_t$ and $s_t^2$. By putting together the fundamental representation for $a_t$ and the processes for $z_t$, the data generating process is given by a moving average in $[u_t \; s_t \; w_t]'$, where $w_t = (s_t^2 - 1)/\sigma_{s^2}$ is the standardized squared news shock. Simple MA(1) impulse response functions are chosen for the sake of tractability, but more complicated processes can also be considered. Using the values $m_1 = 0.8$, $m_2 = 1$, $n_1 = 0.6$, $n_2 = -0.6$, $p_1 = 0.2$, $p_2 = 0.4$, and drawing $[v_t \; s_t]$, we generate 2000 artificial series of length $T = 200$. For each set of series, we estimate a VAR for $[a_t \; z_{1t} \; z_{2t}]'$ and identify $s_t$ as the second shock of the Cholesky representation. We define $\hat{s}_t$ as the estimate of $s_t$ obtained from the VAR. In a second step, using the same 2000 realizations of $[u_t \; s_t \; s_t^2]'$, we generate another variable $y_t$ (which in the simulation plays the role of one of the variables of interest in the vector $Y_t$) as a moving average in the news and squared news shocks with $g_1 = 0.7$ and $f_1 = 1.4$. 19 We estimate a VARX for $y_t$ using $\hat{s}_t^2$ and $\hat{s}_t$ as exogenous variables. The second simulation is similar to the first, the only difference being that $w_t$ is an exogenous shock which does not depend on $s_t$, which implies that the squared news shock has no effects on $z_t$ and $y_t$. A minimal sketch of the first simulation follows below.
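The sketch below is ours, not the authors' code, and the exact placement of the $m$, $n$, $p$, $g$, $f$ coefficients in the MA terms is an illustrative assumption; the libraries (numpy, statsmodels) are our choice. It generates one replication of the DGP, estimates a VAR on $[a_t \; z_{1t} \; z_{2t}]$, recovers the news shock from a Cholesky decomposition, and runs the second-step regression of $y_t$ on the recovered shock and its square.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(1)
    T = 200
    s = rng.standard_normal(T)
    v = rng.standard_normal(T)
    u = np.r_[0.0, s[:-1] * v[:-1]]          # forecast error u_t = s_{t-1} v_{t-1}
    w = (s**2 - 1) / np.std(s**2)            # standardized squared news shock

    lag = lambda x: np.r_[0.0, x[:-1]]       # one-period lag, zero initial value

    a  = lag(s) + u                          # a_t = s_{t-1} + u_t
    z1 = 0.8 * s + 1.0 * lag(s) + 0.2 * w    # illustrative MA(1) using m1, m2, p1
    z2 = 0.6 * s - 0.6 * lag(s) + 0.4 * w    # illustrative MA(1) using n1, n2, p2
    y  = s + 0.7 * lag(s) + 1.4 * w          # y_t loaded on news (g1) and squared news (f1)

    # Step 1: estimate a VAR and recover the news shock via Cholesky
    # (second shock, with a ordered first so it has zero impact effect on a).
    res = VAR(np.column_stack([a, z1, z2])).fit(4)
    chol = np.linalg.cholesky(np.cov(res.resid.T))
    shocks = res.resid @ np.linalg.inv(chol).T
    s_hat = shocks[:, 1]                     # estimated news shock (up to scale)

    # Step 2: regress y on the estimated shock and its square (VARX-style step).
    k = res.k_ar
    X = sm.add_constant(np.column_stack([s_hat, s_hat**2]))
    print(sm.OLS(y[k:], X).fit().params)     # loadings on s_hat and s_hat^2

Averaging the step-2 estimates over many replications, as the authors do with 2000 draws, is what produces the distributions summarized in Figure 15.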
The values of the parameters are the same as before and $[v_t \; s_t \; w_t]' \sim N(0, I)$. We then estimate a VARX for $y_t$ using $\hat{s}_t^2$ and $\hat{s}_t$ as exogenous variables. The results of simulation 1 are reported in the left column of Figure 15, while those of simulation 2 in the right column. The solid line is the mean of the 2000 responses, the gray area represents the 68% credible intervals, and the dashed red lines are the true theoretical responses. In both simulations, and in all cases, our approach succeeds in correctly estimating the true effects of the news and uncertainty shocks, with the theoretical responses essentially overlapping the mean estimated effects. When none of the variables is driven by uncertainty, our procedure consistently estimates a zero effect. Conclusions News about future events, whose effects are not predictable with certainty, increases economic uncertainty. As a consequence, the effects of news become nonlinear, since uncertainty acts as an asymmetric amplifier. Bad news tends to have larger effects on real variables than good news, since uncertainty exacerbates the negative first-moment effect of bad news and mitigates the positive first-moment effect of good news. The literature on nonlinearities of news is still in its infancy, and this paper represents one of its few contributions. Of course, there might be other types of nonlinearities and other channels which propagate news in a nonlinear way; these will be investigated in future research.
8,563.8
2023-02-15T00:00:00.000
[ "Economics" ]
Effect of Exports on the Economic Growth of Brazilian Microregions: An Analysis with Geographically Weighted Regression This study aimed at analyzing the effect of exports on the economic growth of Brazilian microregions in 2010, based on the theoretical model developed by Feder (1982). The hypothesis is that the economic growth of a region results from the existing productivity differential between the exporting and non-exporting sectors, as well as from the externality generated by the exporting sector in the economy. To reach the results, a geographically weighted regression was estimated, identifying a positive effect of the externality in practically all the Brazilian microregions. Regarding productivity, its effect was limited to the microregions close to the two largest ports in Brazil. Introduction The main aim of this research was to test the hypothesis of the growth model developed by Feder (1982) for Brazil, using data from different microregions. The central hypothesis of this model is that exports play a central role in the economic growth of regions, because this sector provokes indirect effects on the whole economy as a result of the positive externalities generated between the two sectors. Thus Feder's theoretical model (1982) allows the measurement of the indirect effects of exports on economic growth, which is the main virtue of this model. Several studies in the literature have tested the central hypothesis of this model (Feder, 1982; Seijo, 2000; Ibrahim, 2002; Cantú & Mollik, 2003; Mehdi & Shahryar, 2012). However, most of these studies [except for Cantú and Mollik (2003)] used data from countries to test the model's central hypothesis, while in this study, as mentioned before, the database used was disaggregated to the level of Brazilian microregions. The use of microregional data aims at taking into consideration local heterogeneities and spatial dependence to capture the effect of exports on economic growth. The choice of Brazilian microregions as a geographical delimitation, instead of towns, originated from the argument put forward by Breitbach (2008). For this author, the use of microregions as the analysis space provides the researcher with a more suitable degree of approximation to the economic and social relationships that characterize the "local environment", which is defined as a sufficiently small space in which the proximity between the agents favors the creation of synergies able to keep a localized economic system working. Another important aspect to be taken into consideration is that, as the variable under study is exports, the production might often be carried out in peripheral towns while the exporting company is located in a town considered central to the region, so the value of exports might be ascribed to that central town. For this reason, data at the town level might overestimate or underestimate the real value exported by a town, impairing the analysis results. With a microregional sample, on the other hand, this effect tends to be mitigated. In addition, it is possible to identify different spatial patterns in exports, as well as in economic growth.
Therefore, the econometric technique used for the empirical model estimation has, necessarily, to take these two effects into consideration. For this reason, this study estimates the empirical model parameters through the Geographically Weighted Regression (GWR) technique, originally developed by Fotheringham, Brunsdon and Charlton (2002). This technique enables the adjustment of a regression model that accounts for the heterogeneity of the data, weighting the parameter estimates through the geographical location of the remaining observations in the data set. Thus, instead of measuring the mean effect of exports on growth, it is possible to estimate the effect for each microregion, so that it becomes possible to observe more clearly which microregions are most affected by exports. After that, it will be possible to develop specific public policies targeting local interests. In addition to this introduction, this paper is divided into four sections. Section 2 briefly describes the theoretical model by Feder (1982) and presents some empirical studies that evaluated the effect of exports on economic growth in the light of this model. Section 3 addresses the methodology employed in this study, outlining both the empirical model and the parameter estimation strategy. In Section 4, an exploratory analysis of the spatial data is carried out, and the results obtained from the empirical model estimation are discussed. Section 5 presents the final considerations. Literature Review As mentioned in the introduction above, the main objective of this research was to estimate the effect of exports on the economic growth of Brazilian microregions based on the model developed by Feder (1982). It seems relevant to emphasize that this model does not seek to quantify the direct effect of exports on economic growth; rather, it addresses its indirect effects, which are two. The first results from the productivity differential existing between the exporting sector and the non-exporting sector. Feder (1982) pointed out that there are several factors that might result in higher productivity of the exporting sector, among which the stronger competition of the international market stands out, leading firms to invest in more efficient production and management techniques, as well as in workforce qualification.
The second effect occurs through the positive externality that the exporting sector exercises on the non-exporting sector. However, Feder's model (1982) does not clarify what kind of externality is generated by the exporting sector over the non-exporting sector of the economy. One can infer, for example, that the management techniques (organizational capital) or the workforce qualification (human capital) used in exporting firms might be emulated by domestic firms. Considering both effects previously presented, the main equation in Feder's model (1982) takes the following form: $\frac{\dot{Y}}{Y} = \alpha \frac{I}{Y} + \beta \frac{\dot{L}}{L} + \lambda \frac{\dot{X}}{X}\left(1 - \frac{X}{Y}\right) + \phi \frac{\dot{X}}{X}\frac{X}{Y}$. Parameters $\alpha$ and $\beta$ capture, respectively, the effects of the investment rate and of workforce growth on the product growth rate, while parameter $\lambda$ identifies the externality effect and the coefficient $\phi$ measures the productivity differential effect. Feder (1982) argued that the intensity of the externality effect is a function of the relation between the non-exporting sector's production and the exporting sector's production: the lower the participation of the exporting sector in the total economy, the higher the effect of the externalities. More specifically, this equation will be used to specify the econometric model of this study (Note 1). Using data from underdeveloped countries in the period between 1964 and 1973, Feder (1982) tested his theoretical model, and the results evidenced that the productivity differential leads to economic growth, confirming the hypothesis that the exporting sector presents higher productivity than the non-exporting sector. In addition to this productivity differential, the results also revealed the existence of a positive externality of the exporting sector over the non-exporting one. The remaining variables inserted in the model, investment and workforce, presented positive and statistically significant coefficients, as expected. A similar result was also found by Ibrahim (2002). In that study, data from six Asian countries were used: Hong Kong, South Korea, Malaysia, the Philippines, Singapore and Thailand. Of the six countries under analysis, four presented productivity differentials between the exporting and non-exporting sectors and, in addition to that, except for the Philippines, in all the other countries there was a positive externality of the exporting sector over the economy. The author observes that this effect tends to be stronger in less developed countries than in developed ones, since the differences in productivity between the exporting and non-exporting sectors are much more evident in less developed countries.
Using a broader sample (a group of 72 developing countries), Seijo (2000) verified that the positive externalities generated by the exporting sector have a positive effect on the economic growth of countries. Later, in an attempt to test the robustness of the model, the author divided the sample into two groups of developing countries (medium- and low-income). In both samples, the results confirmed the previous evidence that exports generate a positive externality over the non-exporting sector. Finally, in the last robustness analysis, the author divided the sample again, from a geographical point of view, into three groups: Africa, South America and Asia. In this case, only for the African countries was the coefficient associated with the externalities positive and statistically significant; for the other two groups (Latin America and Asia) the coefficient was positive, but not significant. More recently, Mehdi and Shahryar (2012) estimated Feder's model (1982) for some sectors of the Iranian economy, considering the period between 1961 and 2006. The sectors considered in the study were industry/mining, agriculture and services. Those authors' main objective was to estimate the effects of exports on the economic growth of these sectors. In all estimates, the authors verified that exports presented positive and significant effects on the economic growth of the three sectors under analysis. A common point in the studies previously listed is that they used databases of countries in their estimates. Cantú and Mollik (2003), however, developed a study using data from 32 Mexican states in the period from 1993 to 1998. In all the models estimated, capital did not present statistical significance, and the growth of the production factor labor presented a negative sign, contrary to what had been expected. Moreover, although the effects of the externality were positive and statistically significant, they were very small, close to zero. Thus, the results found in that study only partially confirm the assumptions of Feder's model (1982). Regarding the Brazilian economy, Galimberti and Caldart (2010) estimated the Feder model using spatial data from 22 municipalities belonging to Corede Serra, a region located in Rio Grande do Sul. The period of time considered by the authors was from 1997 to 2004. As a result, they identified a productivity differential between the export sector and the non-export sector, and this differential has a positive and statistically significant effect on the region's economic growth. It seems important to emphasize that this study is aligned with the study by Cantú and Mollik (2003), since it also proposes to use data from regions instead of countries. However, this research advances in relation to the technique employed to estimate the empirical model. While Cantú and Mollik (2003) estimated the empirical model without taking the spatial component into consideration (which makes the results obtained potentially biased due to the disregard of heterogeneity and spatial dependence), this study takes it into consideration by using the Geographically Weighted Regression (GWR).
This technique, originally developed by Fotheringham, Brunsdon and Charlton (2002), has been widely used to model processes which are not spatially uniform, that is, processes that vary from region to region in their mean, variance and other properties. Therefore, the main focus of the GWR technique is to adjust a regression model that takes this heterogeneity into consideration, adjusting a model for each region and weighting the parameter estimates through the geographical location of the remaining observations in the data set. Methodology This study uses two distinct and complementary methodologies to analyze the local effect of exports on the economic growth of Brazilian microregions in the light of the theoretical model developed by Feder (1982): Spatial Data Exploratory Analysis (SDEA) and Geographically Weighted Regression. In this section, the SDEA and the Geographically Weighted Regression are initially presented. Next, the empirical model and the data source are outlined. Spatial Data Exploratory Analysis (SDEA) The spatial data exploratory analysis (SDEA) is the collection of techniques that describe and visualize spatial distributions, identify atypical sites (spatial outliers), detect patterns of spatial association (spatial clusters) and suggest different spatial regimes (Anselin, 1995). In this article, three common SDEA statistics are calculated: the global univariate Moran I, the global bivariate Moran I and the LISA statistics. The global univariate Moran I measures the degree of spatial correlation, that is, whether similarity in the values of a particular variable is associated with similarity in the locations of that variable. Mathematically, the statistic is given in matrix form by $I = \frac{n}{S_0} \frac{z'Wz}{z'z}$, where $n$ is the number of microregions; $z$ is the vector of values of the standardized relevant variable; $Wz$ is the vector of mean values of the standardized relevant variable in the neighbors, following a particular weighting matrix $W$; and $S_0$ is the sum of the elements of the weighting matrix $W$. The Moran I value ranges between -1 and 1. A positive Moran I value indicates positive spatial autocorrelation, that is, high (or low) values of a relevant variable tend to be surrounded by high (or low) values of this variable in the neighboring regions. A negative Moran I value indicates negative spatial autocorrelation, where a high (or low) value of the relevant variable in a region tends to be surrounded by low (or high) values of the same variable in the neighboring regions. The degree of spatial correlation can also be measured in a bivariate context, by calculating the global bivariate Moran I statistic. In this case, there is an attempt to find out whether the value of a variable observed in a certain region keeps any association with the values of another variable observed in neighboring areas. In formal terms, the global statistic for two different variables, in matrix format, is given by Equation 3: $I_{z_1 z_2} = \frac{n}{S_0} \frac{z_1' W z_2}{z_1' z_1}$, where $n$ is the number of regions; $z_1$ and $z_2$ are the standardized relevant variables; $Wz_2$ is the mean value of the standardized variable $z_2$ in the neighbors, following a certain weighting matrix $W$; and $S_0$ is the sum of the elements of the weighting matrix $W$.
The value of Equation (3) can be positive or negative. Its interpretation for a positive value is the following: regions that present a high (low) value for a certain variable, in general, tend to be surrounded by regions with a high (low) value for another variable. If this value is negative, it means that regions that present a high (low) value for a certain variable tend to be surrounded by regions with a low (high) value for another variable. The LISA statistic, in turn, also known as the local Moran I, measures the individual contribution of each observation to the global Moran I statistic, capturing simultaneously the spatial associations and heterogeneities (Miller, 2004). Mathematically, the statistic for the i-th observation is given by Equation 4: $I_i = z_i \sum_j w_{ij} z_j$, where $z_i$ is the value of the standardized relevant variable for the i-th observation; $z_j$ is the value of the standardized relevant variable for the j-th observation; and $w_{ij}$ are the elements of the weighting matrix $W$. According to Anselin (1995), the sum of the LISA statistics is proportional to the global Moran I statistic and might be interpreted as an indicator of a local spatial cluster. For each observation (in this article, for each microregion) an $I_i$ is calculated, obtaining $n$ values of $I_i$, which are most efficiently presented through the LISA significance map (Note 2). The LISA cluster map shows the regions with significant local Moran I statistics. Geographically Weighted Regression When working with socioeconomic phenomena, one can assume that they might vary between the regions under analysis, that is, the phenomena are not constant across regions. Fotheringham, Brunsdon and Charlton (2002) propose an econometric method, called Geographically Weighted Regression (GWR), which allows the study of phenomena which are not constant across regions. According to Fotheringham, Brunsdon and Charlton (2002), each region might have different relations, resulting in varied coefficients; for this reason, GWR appears as an alternative to the classical linear regression model, enabling the existence of one coefficient for each region and capturing the non-stationarity of the responses given by the explanatory variables. GWR is specified as $y_i = \beta_0(u_i, v_i) + \sum_{k=1}^{K} \beta_k(u_i, v_i) x_{ik} + \varepsilon_i$, where $y_i$ is the dependent variable for the i-th region; $(u_i, v_i)$ are the geographical coordinates of the i-th region in space (for example, latitude and longitude); $\beta_k(u_i, v_i)$ is the local coefficient of the i-th region, which is a function of the geographical position $(u_i, v_i)$; $x_{ik}$ are the explanatory variables of each region $i$, where $K$ is the number of independent variables; and $\varepsilon_i$ is the random error term for the i-th region, which follows a normal distribution with mean zero and constant variance. Also according to Fotheringham, Brunsdon and Charlton (2002), the GWR model estimates one equation for each region, using data subsamples. The regions that take part in these subsamples are chosen according to their distance in relation to the place for which the regression is being calculated, where closer regions have greater influence than farther ones. The GWR estimation is based on the weighted least squares method and is calculated as $\hat{\beta}(u_i, v_i) = (X' W(u_i, v_i) X)^{-1} X' W(u_i, v_i) Y$, where $\hat{\beta}(u_i, v_i)$ is the vector of estimates; $X$ is the matrix of independent variables; $Y$ is the vector of the dependent variable; and $W(u_i, v_i)$ is a diagonal weighting matrix with dimension $n \times n$.
The elements of the main diagonal of $W(u_i, v_i)$ are the weights used to estimate the equation coefficients. These weights are based on the distance of the i-th region from the other regions in the subsample, selected through the spatial kernel function (Note 3). The spatial kernel function might be fixed or adaptive (Note 4), depending on the bandwidth (Note 5). This study employs the adaptive kernel, since the bandwidth used in this type of spatial kernel adapts to the number of observations around the point under observation, yielding more efficient and less biased estimates. Bandwidth is one of the important points in GWR, since, according to Fotheringham, Brunsdon and Charlton (2002), GWR results are sensitive to this parameter choice. Therefore, a method should be adopted that determines the optimal bandwidth in a non-arbitrary way. The Akaike information criterion was used to determine the optimal bandwidth in this study. Thus, GWR is presented as an alternative to control for both spatial heterogeneity and spatial dependence, since this technique allows the inclusion of spatial dependence in the spatial lag form (SAR model), specified by Equation 7: $y_i = \rho (Wy)_i + \beta_0(u_i, v_i) + \sum_{k=1}^{K} \beta_k(u_i, v_i) x_{ik} + \varepsilon_i$, where $(Wy)_i$ is the dependent variable spatially lagged through a matrix of spatial weights and $\rho$ is the spatial autoregressive coefficient. This model is estimated by the method of instrumental variables, due to the endogeneity of the variable $Wy$, using the spatially lagged explanatory variables $WX$ as instruments. GWR also allows for the Spatial Error Model (SEM), the Spatial Durbin Model (SDM) and the Crossed Spatial Regressive Model (SLX) (Note 6). Empirical Strategy and Data Source To construct Feder's empirical model (1982), the following GWR model is estimated considering the spatial effects (Note 7): $y_i = \beta_0(u_i, v_i) + \rho (Wy)_i + \beta_1(u_i, v_i) INCF_i + \beta_2(u_i, v_i) FTRAB_i + \beta_3(u_i, v_i) CRESX_i (1 - PARTX_i) + \beta_4(u_i, v_i) CRESX_i \cdot PARTX_i + \varepsilon_i$, where $y_i$ represents the Gross Domestic Product (GDP) growth rate in the i-th microregion; $(Wy)_i$ is the GDP growth rate of the i-th microregion between 2009 and 2010, spatially lagged using a queen-type spatial weight matrix; $INCF_i$ is the investment in physical capital in relation to the GDP of the i-th microregion; $FTRAB_i$ is the population growth rate in the i-th microregion; $CRESX_i$ is the exports growth rate in the i-th microregion; and $PARTX_i$ corresponds to the participation of exports in the GDP of the i-th microregion. It seems relevant to emphasize that the term $CRESX_i (1 - PARTX_i)$ measures the exports externality and $CRESX_i \cdot PARTX_i$ measures the exporting sector's productivity differential in relation to that of the domestic market. The variable $y_i$ is calculated based on the percentage variation of the GDP in 2010 in relation to the GDP in 2009; the GDP data of the microregions were collected from the IPEADATA site for 2009 and 2010 (R$, prices from 2000). A proxy built from the number of industries in 2010 was used for the variable fixed capital investment ($INCF_i$) (Note 8). The variable $FTRAB_i$ was obtained from the IPEADATA site for 2010. The variable $CRESX_i$ was measured based on the percentage variation of exports in 2010 in relation to 2009 exports; the exports data were obtained from the Aliceweb site, originally for towns, but for the purposes of this study they were aggregated into microregions. The variable $PARTX_i$, the participation of exports in the GDP of the i-th microregion, was obtained by dividing exports by the 2010 GDP. Details about the variables and how they were measured can be found in Appendix B. The mechanics of the local weighted estimation are illustrated below.
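To illustrate the local estimation step, here is a minimal sketch under simplifying assumptions (synthetic data, a fixed Gaussian kernel instead of the adaptive kernel selected by the AIC, and no spatial lag term): it computes distance-based weights for one target region and solves the weighted least squares problem of the GWR estimator above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic example: n regions with coordinates and one regressor.
    n = 200
    coords = rng.uniform(0, 10, size=(n, 2))          # (longitude, latitude)
    x = rng.normal(size=n)
    beta_true = 1.0 + 0.3 * coords[:, 0]              # coefficient varies over space
    y = 2.0 + beta_true * x + rng.normal(scale=0.5, size=n)

    X = np.column_stack([np.ones(n), x])              # design matrix with intercept

    def gwr_at(i, bandwidth=1.5):
        """Local WLS estimate at region i: (X'WX)^{-1} X'Wy with kernel weights."""
        d = np.linalg.norm(coords - coords[i], axis=1)   # distances to region i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
        XtW = X.T * w                                    # same as X' @ diag(w)
        return np.linalg.solve(XtW @ X, XtW @ y)         # [beta_0(u_i,v_i), beta_1(u_i,v_i)]

    print(gwr_at(0))      # local intercept and slope near region 0
    print(gwr_at(150))    # the slope estimate should differ across space

In the adaptive-kernel version used in the paper, the bandwidth would instead be set so that a fixed number of neighbors receives nonzero weight, with that number chosen by the Akaike information criterion.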
Analysis of Results In the 2000s, Brazil presented an important economic growth process, which was interrupted in 2008 by the international financial crisis (Figure 1). In fact, between 2000 and 2008 the mean economic growth of the country was 8.3%/year, while from 2008 to 2010 the growth was only 2.3%. The favourable result in the first years of the period resulted mainly from the "commodities cycle" experienced by Brazil, which contributed to the dynamics of the economy and also to the formation of a surplus in current transactions. It seems relevant to emphasize that the expansion of international reserves allowed the reduction of the constraints that the balance of payments imposed on Brazilian economic growth when the 2008 crisis hit. Thus, the existence of international reserves and the international flow of goods maintained throughout the crisis (supported by Chinese demand, which decreased less) enabled a strategy of activating domestic economic activity through domestic income and credit policies. In this context, even though the country presented a lower growth rate between 2008 and 2010, these policies managed to impact the economy positively. Brazilian exports showed an increase in terms of diversification, from 1183 products exported in 2010 to 1188 in 2014 (HS 4 digits). Within this export agenda, primary products accounted for an important percentage, with their main representatives being ore (13% of exports in 2014); grains, seeds and cereals (12%); meat (7%); sugars (4%); and coffee and tea (3%). With regard to trade partners, the country had a small increase between 2010 and 2014, from 226 importing countries to 228, with China and the United States as its main partners, representing respectively 18% and 12% of the total value exported by the country. These characteristics, diversification of products and of commercial partners, are important elements when seeking to reduce Brazil's vulnerability to the oscillations of the international market (Note 9). When comparing the evolution of exports (Figure 1), a similar trend is seen, with a boom of external insertion in the 2000s, interrupted from 2008 on. Also, Figure 2 shows a certain correlation (Note 10) between the exports growth rate and the Gross Domestic Product (GDP): in general, periods of increase in exports tended to show increases in the product (and vice versa). When economic activity decreased, mainly in 2008/2009, exports presented the sharpest reduction of the period under analysis. Since exports are part of aggregate demand, it is natural to find a positive association between them. However, theoretically, as posited in Feder's model (1982), exports might generate an effect in the economy which transcends their direct impacts, generating externalities and also productivity differentials. These particularities might interfere directly in the economic dynamism of the country.
Figure 3 shows the process of international insertion of the Brazilian economy from a microregional perspective. In this figure, an evolution in the number of exporting microregions is observed, with the percentage increasing from 75% (1997) to 83% (2010) (Figure 3). That is, this result evidences that the Brazilian microregions were increasing their competitiveness, since they were managing to insert their products into the competitive international market. However, the great problem regarding the international insertion of the Brazilian microregions is that the magnitude of exports is not homogeneous; on the contrary, it is highly concentrated in some regions of the country. As shown in Figure 4(b), most exports (2010) were concentrated in some microregions, mainly in the Southeast and South of the country, with a huge gap in the North and Northeast regions. Some authors point to structural issues in the productive sector, the availability of natural resources, government incentives, transport infrastructure, and ease of access to the external market as elements that potentially explain this spatial heterogeneity of exports over the Brazilian territory (Perobelli & Haddad, 2002; Betarelli Junior & Almeida, 2009). As regards economic development, heterogeneity is also seen in its distribution (Figure 4a), such that only 43% of the microregions obtained GDP growth above the Brazilian mean. An interesting point, already highlighted in relation to the figures previously described, is the geographical proximity of microregions that present high GDP and exports values, suggesting the existence of spatial autocorrelation in the data under investigation, which is confirmed in Table 1. Note. Empirical pseudo-significance based on 999,999 random permutations; (*) significant at 1%. Table 1 presents the univariate global Moran I statistics, which presented statistically significant positive coefficients for both exports and economic growth. This means that the regions that held high (low) amounts of exports were surrounded by microregions that also had high (low) export values. Likewise, microregions with intense (reduced) economic growth were surrounded by microregions that also presented intense (reduced) economic growth. Therefore, not only were the values of exports and GDP growth concentrated in some spaces in 2010, but these places were also close to one another. Table 1 also shows the bivariate global Moran I statistic, analyzing the relation between economic growth and exports. Once more, a positive and significant coefficient was obtained, which means that the economic growth of a microregion is related to the behaviour of exports in the microregions around it. In this sense, the hypothesis that greater economic dynamism tends to concentrate in those microregions where international insertion is higher is confirmed, amplifying the spillover effects in the areas surrounding these regions. A sketch of the Moran I computation is given below.
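As a quick illustration of the statistic behind Table 1, the sketch below computes a global Moran I for an invented variable on a small set of regions, using a contiguity-style weight matrix; names and data are ours, and significance would in practice be assessed by random permutations, as in the table note.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic example: 5 regions on a line, neighbors = adjacent regions.
    W = np.array([
        [0, 1, 0, 0, 0],
        [1, 0, 1, 0, 0],
        [0, 1, 0, 1, 0],
        [0, 0, 1, 0, 1],
        [0, 0, 0, 1, 0],
    ], dtype=float)

    x = np.array([10.0, 9.0, 8.0, 2.0, 1.0])    # spatially clustered values

    def morans_i(x, W):
        """Global univariate Moran I: (n / S0) * (z'Wz / z'z)."""
        z = x - x.mean()
        s0 = W.sum()                            # sum of all weights
        return (len(x) / s0) * (z @ W @ z) / (z @ z)

    print(morans_i(x, W))                       # positive: high values cluster together

    # Permutation-based pseudo-significance (simplified version of the
    # procedure referenced in the Table 1 note).
    perms = np.array([morans_i(rng.permutation(x), W) for _ in range(999)])
    print((perms >= morans_i(x, W)).mean())     # pseudo p-value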
Taking that into consideration, the influence of exports on this process of economic growth is analyzed, seeking to capture its indirect effects, externality and productivity differential, which are the central hypotheses of the theoretical model proposed by Feder (1982). Due to the existing heterogeneity in the distribution of economic growth, a phenomenon confirmed by the local Moran I analysis (Figure 5), we opted for estimation via Geographically Weighted Regression (GWR), aiming at controlling for both spatial heterogeneity and spatial dependence. In fact, Figure 5 confirms the spatial disparity in relation to both economic growth and exports. Regarding the latter, low-low clusters are seen mainly in the North and Northeast of the country, regions that lack infrastructure and competitive productive clusters, which slows international insertion. As regards GDP growth, the dynamics of cluster formation is slightly different, since some heterogeneity is seen over the country, but one that does not follow a regional location pattern for either the high-high or the low-low clusters. Therefore, due to the existence of this uneven distribution, Feder's model was estimated for the Brazilian microregions using the Geographically Weighted Regression. The results of the estimation of the empirical model described in Equation 8 are reported in Table 2 (Note 11). Based on the global model, a positive and statistically significant effect of the exports externality on economic growth was observed. This basically results from the income and chain effects that exports possibly generate in each microregion's economy. Regarding the income effect, by being inserted in the international market a region might create internal jobs, which can boost local commerce and other domestic industries. Moreover, a multiplying effect might be generated in the economy, resulting from the existing linkage between the exporting sector and other domestic productive segments, also leading to competitiveness between these segments. Therefore, the correlation observed between the economic growth of the Brazilian microregions and insertion in the international market (Figure 4) is validated by the econometric results. Especially in relation to primary products, Brazil has a comparative advantage, a result of existing natural resources as well as investments in research in this area made over the years. These and other factors have raised the country's competitiveness and placed it among the main exporters of these products, so that in 2010 Brazil ranked sixth among the world's agricultural exporters. All this efficiency spills over along the productive chain in which agriculture is inserted, generating externalities for the links that are interconnected with this sector. It requires more efficient inputs, specialized services, etc., which become available to the exporting sector as well as to production for the domestic market. Moreover, export industries linked to the primary sector (low-tech industry) also gain in competitiveness. Indeed, analyzing the Brazilian export agenda, low-technology industry and non-industrial products were seen to correspond to 42% of Brazilian exports. In addition to this, the injection of income that exports promote generates demand for domestically produced goods, fostering income and employment throughout the country. These arguments explain the positive and statistically significant coefficient for the exports externality.
Source: Estimated by the authors aided by the GWR software, based on the research data. Note. * significant at the 5% significance level. The term $CRESX_i (1 - PARTX_i)$ is the proxy for the exports externality, and $CRESX_i \cdot PARTX_i$ refers to the variable "productivity differential" of the exporting sector in relation to that of the domestic market. As regards the productivity differential of the exporting sector, its coefficient presented the expected sign; however, it was not statistically significant. This result might be due to the time interval under analysis, a period in which the international market was weak and part of the production that would have been sent to the external sector was displaced to supply the domestic market, resulting in similar productivity between the international and domestic markets. It seems relevant to emphasize that from 2008 on the Brazilian government put in place a series of countercyclical measures, such as the increase in credit through the public banks, the reduction of the basic interest rate, the housing program "Minha Casa, Minha Vida" (My House, My Life) and the federal tax exemption on the Industrialized Products Tax (Almeida, 2010). The latter, in particular, aimed at stimulating the domestic consumption of such products, sustaining the production of the industries whose goods benefited from the tax reduction, as well as the sectors backward and forward along each productive chain. Therefore, these and other actions sustained productive activity even during the international crisis, with a focus on the domestic market, and that possibly justifies the absence of an effect of the exporting sector's productivity differential on economic growth. As for the remaining variables included in the model, both physical capital and the spatial lag of economic growth presented positive and statistically significant effects. In particular, the spatial lag parameter highlights a positive spillover of GDP growth onto the economic dynamics of the neighboring microregions. This shows that, when a certain region grows, part of this growth also benefits the neighboring microregions, creating a virtuous cycle of growth. All the previous analysis involved estimated global coefficients (Table 2). In certain situations, it is theoretically expected that some coefficients are global, while others are local. The great advantage of the GWR is to provide local coefficients; that is, this technique recognizes that the effect of a variable is not exactly the same in all regions but, on the contrary, tends to vary from region to region. To verify the hypothesis of stationarity of the relationships represented by the variables considered in the empirical model, the Monte Carlo test was adopted (Appendix A, Table A1). Through this test, the null hypothesis of stationarity for the exports externality and productivity differential coefficients was rejected at the 5% significance level; that is, the statistical evidence pointed out that the effects of these two variables are local.
Figure 6 shows the distribution of such coefficients, evidencing that, although the mean effect of the productivity differential was not statistically significant, in 20% of the microregions this impact exists [Figure 6(a)]. When observing the location of these microregions, they are seen to be located mainly in the Southeast and South regions of Brazil, which concentrated most of the country's exports (Carmo, Raiher, & Stege, 2016). As previously stated, these regions have higher availability of natural resources, better universities and transportation infrastructure, as well as easy access to the external market, due to the proximity to the main ports in the country (Santos, Rio de Janeiro, Paranaguá, Vitória and Itajaí). These elements might be interpreted as competitive advantages of these microregions for attracting exporting companies, which, in turn, present higher productivity levels. As regards the exports externality [Figure 6(b)], 96% of the Brazilian microregions presented a positive and statistically significant coefficient. That is, basically along the whole Brazilian territory, insertion in the international market presents an effect that goes beyond the injection of resources into the economy, generating indirect impacts that lead to a process of economic growth. These dynamics were not verified in 24 microregions; notably, these regions presented important economic growth in the year under analysis but did not present an exports value that matched this growth process [comparison between Figures 4(a) and 4(b)]. The same distribution of the local coefficients in Figure 6 is seen in Figure 7; however, in the latter the magnitude of the coefficients (both the productivity differential and the externality) is evidenced for each Brazilian microregion. In this case, the microregions in which the exports level was more intense (South, part of the Southeast, and Center-West) were seen to present lower externalities; at the same time, in the microregions where external insertion was lower, the externality impact was higher. That is, the internationalization of Brazilian products might be an important path to economic growth, mainly in those areas of the country that present greater weakness in terms of external insertion (North and Northeast), regions that also present lower economic dynamics. Regarding the productivity differential coefficient, the effect was seen to be higher in those microregions located closer to the coast, neighboring the main ports of the country. Finally, the proximity between microregions that present a higher relation between the exports externality and GDP growth, and between the productivity differential and economic dynamics, was noticeable. This spatial pattern was confirmed by the Moran I statistics, which yielded a coefficient of 0.94 for productivity and 0.91 for the externality. Thus, microregions with a high (low) beta for productivity were surrounded by neighbors that also held a high (low) beta for productivity; the same phenomenon was verified for the externality. Therefore, knowing this spatial dependence, public policies that aim at external insertion might be applied to each space, obtaining very similar results in terms of economic dynamics.
Final Considerations This study aimed at verifying the local effect of exports on the economic growth of Brazilian regions in 2010 in the light of Feder's theoretical model (1982). Basically, the theoretical hypothesis is that the economic growth of a region results from the existing productivity differential between the exporting and non-exporting sectors, as well as from the externality generated by the exporting sector in the economy. In methodological terms, a geographically weighted regression was estimated, and the hypotheses of the theoretical model were partially confirmed. As regards the externality, its importance in favouring the GDP of almost all microregions of the country is visible, mainly in those whose international insertion is weak. That is, the insertion of products in the international market not only has a direct impact on the formation of the country's GDP, but also an indirect effect, generating spillovers, income effects, etc., throughout the productive chain of the sector. All these impacts are important for economic dynamism, especially in the less developed regions of the country. Regarding productivity, its effect was limited to the areas close to the largest ports in the country. From these results it is possible to direct specific policies to boost the international insertion of each microregion, seeking to homogenize the country's competitiveness and, consequently, favoring more intense economic growth, mainly in those areas which are economically weaker (North and Northeast). To achieve that, however, it is necessary to rethink the export flows, mainly in the North and Northeast, with the implementation of efficient ports in those regions. Finally, specific policies are needed aiming at the insertion of microregions in the international market, mainly in the North and Northeast, and also at the deepening of the commercial relations already existing in the country. The results found in this study point to the fact that, if the country manages to insert more microregions in the international market, economic growth might be even greater and more homogeneous all over the country. CRESX_i * PARTX_i: refers to the proxy used for the productivity differential between the exporting sector and the non-exporting sector. This variable is measured by multiplying the exports growth by the participation of exports in each microregion's GDP. It seems relevant to emphasize that the exports values were in dollars (AliceWeb site) and were converted into reais (effective real exchange rate, Ipeadata). FTRAB_i: represents the population growth rate. INCF_i: corresponds to the participation of physical capital investment in the GDP. Since the value of physical capital investment at the level of each microregion was not available, the following steps were followed: 1) the total investment in Brazil in each year under analysis was divided by the total number of industries existing in the country, obtaining a mean value of investment per industry (VIE); 2) the number of industries in each microregion was identified and multiplied by the VIE; 3) finally, this value was divided by the GDP, obtaining the participation of physical capital in each microregion's GDP (this arithmetic is sketched below). It seems important to highlight that a correlation was computed between the VIE-based proxy and the actual physical capital of the country, and the result was a 0.98 correlation, demonstrating the robustness of the proxy used.
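The three-step construction of the VIE-based proxy amounts to the arithmetic below; all numbers are invented for illustration.

    import numpy as np

    # Step 1: mean investment per industry (VIE) at the national level.
    total_investment_brazil = 500_000.0      # total investment in Brazil (invented)
    total_industries_brazil = 10_000         # total number of industries (from RAIS)
    vie = total_investment_brazil / total_industries_brazil

    # Steps 2 and 3: per-microregion industries (RAIS) and GDP (IPEADATA).
    industries = np.array([120, 45, 300])
    gdp = np.array([80_000.0, 20_000.0, 150_000.0])

    incf = industries * vie / gdp            # participation of physical capital in GDP
    print(incf)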
This is a spatial autoregressive parameter associated with the error lag, capturing the spillover effect in the error term. This is the Gross Domestic Product growth rate of the i-th microregion between 2009 and 2010, spatially lagged. Notes Note 1. The econometric model estimated in this study is specified in Equation 7, which is described in detail in the section addressing the methodology used. Note 2. Also called the LISA cluster map. Note 3. The spatial kernel function is a real, continuous and symmetric function which uses the distance between two geographical points and a bandwidth parameter to determine the weight between these two regions, which is inversely related to the geographical distance. Note 4. For a more detailed explanation of the types of spatial kernel function, see Fotheringham, Brunsdon and Charlton (2002). Note 5. Bandwidth is a smoothing parameter: the wider the band, the more observations are used in calibrating each point and the greater the smoothing of the local coefficients tends to be (Almeida, 2012). Note 6. For a more detailed explanation of the SEM, SDM and SLX models, see LeSage and Pace (2009). Note 7. The SEM, SDM and SLX models were tested; however, the results of these models did not present statistical significance. Note 8. The number of industries in each microregion was taken from RAIS. Considering the total investment in the country and dividing it by the total number of industries, the distribution was carried out and used to calculate the physical capital of each region. It seems relevant to highlight that a correlation between this variable and the actual physical capital of the country was carried out, and the result was a 0.98 correlation. Note 9. It is worth emphasizing the role of public policy in this process of Brazilian external insertion, especially the tax relief for exports. However, regarding agriculture, experts indicate that the productivity gains presented by the sector were the main factor behind the greater international insertion that took place in the 2000s (as can be observed in Figure 1). Note 11. The ANOVA test was carried out for the GWR, and its value was 3.51. This test led to the conclusion that the GWR model represented an improvement in relation to the classical linear regression model, which generates global coefficients. It seems relevant to highlight that the ANOVA test holds the null hypothesis that the GWR model does not improve on the global model results.
Figure 1. Gross Domestic Product (GDP) and Brazilian exports (US$), 1980 to 2010. Source: Elaborated by the authors with data from Ipeadata.
Figure 2. GDP and exports growth rate (%), 1981 to 2010. Source: Elaborated by the authors with data from Ipeadata.
Figure 3. Number of Brazilian exporting microregions, 1997 to 2014. Source: Elaborated by the authors with data from Aliceweb.
Figure 4. GDP (a) and exports (b) growth rate, Brazilian microregions, 2010. Source: Elaborated by the authors with data from Ipeadata and AliceWeb.
Figure 5. GDP growth (a) and exports (b) LISA map, Brazilian microregions, 2010. Source: Estimated by the authors aided by the GeoDa software, based on the research data. Note. Empirical pseudo-significance based on 999,999 random permutations.
Figure 6. Spatial distribution of the statistically significant local coefficients of productivity (a) and exports externality (b), Brazilian microregions, 2010. Source: Elaborated by the authors from the results of the GWR software.
Figure 7. Spatial distribution of the productivity (a) and exports externality (b) effects (betas), Brazilian microregions, 2010. Source: Elaborated by the authors from the results of the GWR software.
Table 1. Moran I coefficients (univariate and bivariate), Brazilian microregions, 2010. Source: Calculated by the authors aided by the GeoDa software, based on the research data.
Table 2. Global results of the GDP estimates, Feder's model, Brazilian microregions, 2010.
9,536.8
2017-11-19T00:00:00.000
[ "Economics", "Geography" ]
Prognostic Signatures and Therapeutic Value Based on the Notch Pathway in Renal Clear Cell Carcinoma The Notch family of genes encodes a group of highly conserved cell surface membrane receptors, which are involved in one of the key pathways that determine cell growth, differentiation, and apoptosis in embryonic tissues. Furthermore, abnormal expression of Notch genes is closely related to the occurrence and development of several cancers. To date, no specific treatment of RCC has been reported to target the Notch pathway. Therefore, we examined Notch pathway genes in a series of tumors, as well as potential compounds targeting the Notch pathway, with a focus on the mechanism of Notch pathway action in kidney renal clear cell carcinoma (KIRC). Samples from KIRC patients were divided into three clusters based on the mRNA expression of Notch pathway genes. In addition, we investigated the potential targets of the Notch pathway, predicted the IC50 of several classical targeted therapies, and analyzed their correlation with the Notch pathway. Finally, LASSO regression analysis was performed to build a model to predict survival in KIRC patients. These results suggest that therapies targeting the Notch pathway could be studied more efficiently based on the Notch score and that we can predict the prognosis of patients with KIRC based on the expression of Notch pathway genes. Most importantly, these results may provide a solid theoretical basis for future research on therapeutic targets for patients with KIRC. Introduction The Notch signaling pathway plays an important role in cancer biology and is a focus of research on targeted cancer therapy. The Notch pathway was first observed in Drosophila in 1914, and related genes were discovered in 1986 [1]. The Notch signaling pathway is mainly composed of four parts, i.e., receptors, ligands, the CSL DNA-binding protein, and downstream genes. The Notch signaling pathway differs from other important pathways, such as Wnt and TGF-β, in that it comprises receptors and ligands that mediate activation after contact between two cells [2]. In humans, the Notch receptors (Notch 1, 2, 3, and 4) interact with the Notch ligands (Jagged 1, 2; Delta 1, 3, 4; the difference between these two types of ligands is the presence of a cysteine-rich region [3], in addition to differences in function [4]). The Notch receptor is sequentially cleaved by ADAM proteases and γ-secretase to produce the Notch intracellular domain (NICD), which binds to the CSL DNA-binding protein and activates downstream target genes [5]. The Notch signaling pathway has been shown to play an important role in homeostasis during cell development and in the development and progression of diseases, especially cancer, with varying roles in different cancer tissues. For example, in small cell lung cancer, bladder cancer, and other cancers, the Notch pathway intermediaries Notch 1, 2, 3, and 4, Mastermind-like (MAML), NICASTRIN, and other genes can serve as proto-oncogenes, leading to hyperactivation of the Notch pathway, whereas Notch 1, 2, 3, FBXW7, and other genes serve as tumor suppressor genes in T-ALL and breast cancer [6]. Kidney cancer is one of the cancers whose incidence has been on the rise in recent years. It was estimated that the incidence of this cancer would exceed 70,000 cases by 2020 [7]. Renal cell carcinoma (RCC) is the main type of kidney cancer, and clear cell renal cell carcinoma (ccRCC) comprises the majority of RCC cases (75-80%) [8,9].
Advances and innovations in the treatment of ccRCC can benefit the majority of patients with renal cancer. In recent years, the use of targeted therapy for ccRCC has become a mainstream trend in clinical treatment, including therapies that target the mTOR pathway [10], ferroptosis [11], and lipid metabolism [12], among other strategies targeting other pathways or processes. Therefore, we considered whether targeting the Notch signaling pathway could provide more possibilities for the treatment of ccRCC. In this study, we investigated the role of the Notch signaling pathway in ccRCC and related therapeutic targets through bioinformatic analysis and established a Notch-related prognosis model of ccRCC by using LASSO regression to screen related genes, thereby providing a potentially meaningful strategy for the treatment of ccRCC. We used gene set enrichment analysis (GSEA) resources to screen out more than 40 genes that are involved in the Notch pathway and differentially expressed in ccRCC. Using these genes, we classified the samples obtained from The Cancer Genome Atlas (TCGA) into three clusters, and we continued our research based on these three clusters. We identified potential therapeutic compounds and carried out classic anticancer drug effect analysis, histone modification analysis, and immune infiltration analysis. Finally, using LASSO regression analysis, we selected 14 genes from the more than 40 genes to build a prognosis model, reducing multicollinearity among the selected genes. We further used the E-MTAB-1980 dataset from the ArrayExpress database for model verification. Moreover, we verified the differential expression of the proteins encoded by the model genes in ccRCC based on the relevant immunohistochemical results. All data acquisition, collation, and analysis, including statistical analysis and tests, were carried out using R. While introducing the experimental ideas, we will also indicate the specific R packages used. Data Acquisition and Analysis Based on TCGA. The RNA-seq and clinical data of ccRCC were obtained from TCGA (https://portal.gdc.cancer.gov/), including 72 normal tissue samples and 539 ccRCC tissue samples. All Notch pathway genes were identified using WikiPathways gene sets from GSEA. Ultimately, 47 Notch pathway genes related to 32 types of cancer were identified. We also obtained the data for CNV, SNV, and Notch gene expression from TCGA (https://portal.gdc.cancer.gov) [13][14][15][16][17][18]. Perl was used for analyzing Notch gene-related data, and TBtools was used for data visualization. Correlation Analysis of Drugs or Compounds with Notch Pathway Genes. To identify which drugs or compounds could be useful for Notch-related tumor therapy, we performed correlation analysis using Connectivity Map build 02 (CMap) [14], a resource that uses cellular responses to perturbation to find relationships between diseases, genes, and therapeutics. CMap (build 02) contains 6,100 treatment instances from five human cell lines exposed to 1,309 compounds at different doses, and a ranked list of compounds with connectivity scores between -1 and 1 was obtained by comparing disease characteristics with all reference expression profiles of the chemicals. A high degree of negative correlation between the disease characteristics and the expression profile associated with a compound suggests that the compound may have therapeutic effects (a simplified sketch of this scoring logic is given below).
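The following is a deliberately simplified sketch of the sign logic behind connectivity scoring, not CMap's actual Kolmogorov-Smirnov-based statistic: it ranks genes by a drug-induced expression profile and asks whether the disease's up-regulated genes land near the bottom and its down-regulated genes near the top, which would yield a negative (potentially therapeutic) score. All data and gene sets are invented.

    import numpy as np

    rng = np.random.default_rng(0)

    n_genes = 1000
    drug_profile = rng.normal(size=n_genes)      # drug-induced expression changes

    # Hypothetical disease signature: genes up- and down-regulated in tumors.
    up_idx = rng.choice(n_genes, 50, replace=False)
    down_idx = rng.choice(np.setdiff1d(np.arange(n_genes), up_idx), 50, replace=False)

    # Force this toy drug to reverse the signature (push up-genes down, etc.).
    drug_profile[up_idx] -= 2.0
    drug_profile[down_idx] += 2.0

    # Normalized ranks: 0 = most down-regulated by the drug, 1 = most up-regulated.
    ranks = drug_profile.argsort().argsort() / (n_genes - 1)

    # Simplified connectivity-style score in [-1, 1]: negative when the drug
    # pushes the disease's up-genes down and its down-genes up.
    score = ranks[up_idx].mean() - ranks[down_idx].mean()
    print(score)    # negative here, flagging a signature-reversing compound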
The CMap database has successfully led to drug repurposing for a variety of diseases, including lung cancer, breast cancer, muscular dystrophy, acute myeloid leukemia, Parkinson's disease, and Alzheimer's disease [15,16]. In this study, bioinformatics and CMap analysis were used to screen out the key pathogenic genes and candidate small-molecule therapeutics targeting the Notch pathway in cancer, providing new ideas for the treatment of KIRC. Ultimately, 14 differentially expressed mRNAs were obtained for further research through differential expression analysis of the Notch pathway genes. p < 0.05 was considered statistically significant. Analysis of Correlation of the Gene Enrichment Scores and Gene Clusters. To further display the differences in gene expression among the samples, we constructed a Notch score model to classify the mRNA expression in tumor tissues into three groups, i.e., high expression of the Notch pathway genes (cluster 1), normal expression of the Notch pathway genes (cluster 2), and low expression of the Notch pathway genes (cluster 3), according to the mRNA expression in the samples. In addition, we used a violin diagram to describe the relationship between the gene enrichment scores and the expression levels of the three clusters. In RStudio, we used the "gplots" package for gene cluster analysis. Meanwhile, the survival differences of the three clusters were analyzed, and their survival curves were generated using the "survival" package in RStudio. Furthermore, we investigated the relationship between the Notch genes and the clinicopathological features of patients with KIRC. p < 0.05 was considered statistically significant. Analysis of Notch Pathway Gene Expression and Tumor Drug Therapy Based on Genomics of Drug Sensitivity in Cancer. To clarify the relationship between Notch pathway gene expression and tumor drug therapy, the Genomics of Drug Sensitivity in Cancer database (GDSC; https://www.cancerRxgene.org) was used to predict chemotherapy response. As one of the largest public resources for information regarding cancer drug sensitivity, drug reactions, and molecular targets, GDSC enables the identification of potential therapeutic targets to improve the treatment of cancer. GDSC currently hosts nearly 75,000 experimental data points on drug sensitivity, documenting the responses of nearly 700 cancer cell lines to 251 anticancer drugs. The GDSC database can be queried through a web portal to obtain graphical representations of specific anticancer drug or cancer gene data. Furthermore, the GDSC database integrates a large set of drug sensitivity and genomic data. In this study, several therapeutic agents targeting KIRC were collected, and the "pRRophetic" package in R was used to conduct the prediction process. Meanwhile, the ridge regression method was used to estimate the half-maximal inhibitory concentration (IC50) of the samples [17,18]. A smaller IC50 corresponds to a lower half-inhibitory concentration of the drug in cancer cells, which indicates that the cancer cells are more sensitive to the drug. Furthermore, 10-fold cross-validation was performed to estimate the accuracy. p < 0.05 was considered statistically significant.
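As an illustration of the prediction step just described, the following minimal R sketch fits a cross-validated ridge regression (via the widely used glmnet package) on cell-line expression and IC50 values and applies it to tumor samples. All matrices here are random placeholders standing in for the GDSC and TCGA data; this is a sketch of the general ridge-based workflow rather than the exact pipeline of the package used in this study.

```r
library(glmnet)

# x_cell: expression matrix (cell lines x genes) from GDSC;
# ic50_cell: log(IC50) values for one drug; x_tumor: tumor expression
# on the same genes. All object names and sizes are hypothetical.
set.seed(1)
x_cell    <- matrix(rnorm(200 * 50), 200, 50)
ic50_cell <- rnorm(200)
x_tumor   <- matrix(rnorm(30 * 50), 30, 50)

# alpha = 0 gives ridge regression; 10-fold CV selects lambda,
# mirroring the cross-validated accuracy check described above.
cv_fit    <- cv.glmnet(x_cell, ic50_cell, alpha = 0, nfolds = 10)
pred_ic50 <- predict(cv_fit, newx = x_tumor, s = "lambda.min")
head(pred_ic50)   # predicted log(IC50) per tumor sample
```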
Effect of Differentially Expressed Oncogenes in the Notch Pathway. The Notch signaling pathway plays an important role in regulating tumor cell proliferation, differentiation, and apoptosis. Abnormal activation of the Notch signaling pathway can promote the occurrence and development of several cancers. Chromatin modifications, such as histone methylation, acetylation, and ubiquitination, are key epigenetic mechanisms that regulate gene transcription. Some classical oncogenes and histone modification-related genes may influence the regulation of the Notch pathway. In our study, the expression levels of different oncogenes in the three Notch pathway clusters were examined and presented as heat maps. p < 0.05 was considered statistically significant. 2.6. Immune Cell Infiltration and Immunotherapy. To provide a theoretical basis for the clinical application of immune markers and immunotherapy in ccRCC, we investigated the immune response genes in tumor tissues and adjacent tissues, as well as immune cell infiltration in tumor tissues. The correlation of immune cell infiltration in ccRCC samples from TCGA was analyzed using single-sample gene set enrichment analysis (ssGSEA) [19,20]. Immune gene set analysis was performed, and immune scores were evaluated. The ssGSEA algorithm was used to calculate the degree of immune infiltration in each sample and to study the gene signals expressed in 29 types of immune cells and regulatory cells related to innate and adaptive immunity. After downloading the GMT-format gene set data required for the analysis, the R package GSVA was used to score the samples for immunity, covering 29 immune-related scores. Finally, we obtained four classic immunomodulatory components, DCs, mast cells, Tfh cells, and the type-I IFN response, and used the "ggscatterstats" function from the "ggstatsplot" package to plot scatter plots showing their specific correlation with the Notch score. p < 0.05 was considered statistically significant. Analysis of the Prognosis of KIRC Based on LASSO Regression. A heat map was used to display the expression levels of the Notch pathway genes in normal and KIRC tissues. In addition, "corrplot" was used to describe the coexpression relationship between any two Notch pathway genes in KIRC. Univariate Cox regression was used to analyze the genes associated with prognosis, and a prognostic model was constructed using LASSO regression and multivariate Cox regression. Univariate and multivariate Cox analyses verified that the Notch gene prognostic correlation model was an independent prognostic factor, and receiver operating characteristic (ROC) curve analysis further verified its accuracy with respect to predicting survival. A risk score was calculated for each sample based on the expression levels of the relevant genes in the prognostic model, and all samples were divided into high-risk and low-risk groups by the median risk score. The optimal cut-off value was selected using the R "survminer" package, and the "survival" package in RStudio was used to compare the overall survival difference between the high-risk and low-risk groups. p < 0.05 was considered statistically significant. In the HR analysis, univariate and multivariate Cox regression models were used to analyze the relationship between the clinicopathologic features and the risk score. To show the multiple attributes of the statistically significant protective and risk genes, we used a Sankey diagram drawn with the "ggalluvial" package. We used RStudio for all statistical analyses. Statistical significance was set at p < 0.05. The relevant online atlas of proteins (https://www.proteinatlas.org/) provided by Sjöstedt et al. [21] and Uhlén et al. [22] was used to verify whether the expression of the model proteins was consistent with the expression of the corresponding model genes.
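Before turning to the results, the following R sketch illustrates the LASSO Cox workflow described in this section, using the glmnet and survival packages; the toy expression matrix, follow-up times, and event indicators are simulated placeholders rather than the actual TCGA data.

```r
library(glmnet)
library(survival)

# Simulated stand-ins: samples x 47 Notch-gene expression, follow-up
# time, and vital status (1 = event). Sizes are arbitrary toy choices.
set.seed(1)
expr   <- matrix(rnorm(300 * 47), 300, 47,
                 dimnames = list(NULL, paste0("gene", 1:47)))
time   <- rexp(300, rate = 0.01)
status <- rbinom(300, 1, 0.6)
y <- cbind(time = time, status = status)   # glmnet Cox response format

cv_fit <- cv.glmnet(expr, y, family = "cox", alpha = 1)  # alpha = 1: LASSO
coefs  <- as.matrix(coef(cv_fit, s = "lambda.min"))
sel    <- rownames(coefs)[coefs != 0]      # genes retained by LASSO

# Risk score = coefficient-weighted expression; split at the median
risk  <- as.numeric(expr %*% coefs)
group <- ifelse(risk > median(risk), "high", "low")
survdiff(Surv(time, status) ~ group)       # log-rank test between groups
```

The median split shown here matches the grouping rule described above; in the study itself the optimal cut-off was additionally refined with the survminer package.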
Results 3.1. CNV and SNV of the Notch Pathway Genes in Cancers according to TCGA. A total of 47 Notch pathway genes were detected in 32 types of malignant tumors in TCGA, and the CNV and SNV of these genes were analyzed in R. We found that, among the 32 types of malignant tumors, most tumors showed no CNV gain or loss (Figures 1(a) and 1(b)). In addition, we found that most of the Notch pathway genes had SNVs in UCEC, with SNV frequencies almost all greater than 0.04. Some of the Notch pathway genes in DLBC, SKCM, COAD, STAD, BLCA, CESC, ESCA, HNSC, LUAD, and READ had SNVs with frequencies greater than 0.04. The SNV frequency of the Notch pathway genes in the remaining tumors was less than 0.02 (Figure 1(c)). We further examined mutations of the Notch pathway genes in KIRC and found that their CNV and SNV frequencies were low, fluctuating between 0 and 0.02. Figure 1: (a, b) CNV frequencies of 47 Notch pathway genes in 32 tumor types from TCGA. The color bar on the right represents gain or loss of copy number, with pink representing high copy frequency and green representing low copy frequency. (c) SNV frequencies of 47 Notch pathway genes. The color bar on the right shows the degree of SNV, with pink representing high frequency and blue representing low frequency. The Role of Different Notch Genes in Different Cancers and the Mechanisms of Action of Each Compound Targeting the Notch Pathway. A Connectivity Map (CMap) [23] was used to study the relationship between genes and compounds to determine compounds that may target or regulate the Notch pathway genes. We identified 20 related compounds that acted on 16 types of malignant tumors using the compound enrichment score (Figure 2(a)). Further, we used the CMap mode-of-action (MoA) analysis to examine the mechanisms of action of six of these compounds and found that each had an independent mechanism of action (Figure 2(b)): galantamine is an acetylcholinesterase inhibitor; imatinib inhibits BCR-ABL kinase, KIT, and PDGFR receptors; triflupromazine, a dopamine receptor antagonist, also inhibits EGFR and SRC; norprogesterone is a progesterone receptor agonist; and dinoprost is a prostaglandin receptor agonist. The Notch pathway plays an important role in tumor occurrence and development. Studies have shown that the Notch pathway can inhibit the transcription of tumor cells or promote the effects of tumor cells, i.e., it can act as a cancer suppressor or promoter [6]. In our study, we further divided the Notch pathway genes into protective and risky genes using the information in TCGA and investigated the relationship between Notch pathway gene expression and patient survival (genes with high expression in tumors and prolonged patient survival were defined as protective genes, and conversely for risky genes), determining their respective roles (Figure 2(c)). The expression of most genes of the Notch pathway in KIRC was significantly higher than that in normal kidney tissue. Further analysis revealed that most genes play a protective role in kidney cancer. We found an interesting phenomenon in which the numbers of protective and risky genes were both 12 (Figure 2(d)), which is contrary to reports describing Notch as a cancer-promoting pathway. This unexpected finding aroused our interest, so we conducted further bioinformatic analysis to understand the mechanism. Correlation of the Notch Score with Gene Clusters and Survival. The KIRC samples were divided into four clusters (ranked by Notch score from low to high) as follows: cluster 1, cluster 2, cluster 3, and cluster 4 (Figure 3(a)). We demonstrated the differences in the Notch score among the four clusters using a violin diagram (Figure 3(b)). The Notch score was significantly different (p < 0.05) among the four clusters.
Furthermore, we described the survival curves of the four clusters (Figures 3(c) and 3(d)) and combined cluster 2 and cluster 3 into a new cluster 2. Analysis of the survival curves showed significant differences in the survival rates of KIRC patients among the three new clusters. The prognosis of cluster 3 was significantly better than that of clusters 1 and 2, which is consistent with the pattern depicted in Figure 2(c). Most Notch genes are highly expressed in KIRC, and patients with high Notch scores had longer survival. These data suggest that Notch may act as a tumor suppressor in KIRC. Finally, using information from TCGA, we further analyzed the relationship between the Notch scores and the clinicopathological features of KIRC (Figure 3(e)). We found that the Notch score was significantly related to T stage (tumor), stage, metastasis, and survival status (fustat) of KIRC, and a higher Notch score was associated with lower tumor grade and stage and a better prognosis (p < 0.05); the Notch score was not associated with age or the remaining clinicopathological features (p > 0.05). This further suggests a protective role of the Notch pathway in KIRC. Drug Sensitivity Analysis of Notch Pathway Genes in KIRC Based on GDSC Data. At present, the treatment of advanced renal cancer mainly relies on molecular targeted drugs and novel immune checkpoint inhibitors. Molecular targeted therapy of cancer is based on the molecular biology of cancer, taking tumor-related molecules as targets and using agents or drugs specific to those targets for treatment. Since 2006, 11 targeted drugs, sorafenib, sunitinib, bevacizumab plus IFN, temsirolimus, everolimus, axitinib, pazopanib, cabozantinib, nivolumab, lenvatinib, and erlotinib, have been recommended by the NCCN for use as first- or second-line treatments for metastatic kidney cancer. According to their targets, these 11 targeted therapy drugs are classified into VEGF pathway inhibitors, i.e., sorafenib, sunitinib, bevacizumab, axitinib, pazopanib, cabozantinib, and lenvatinib; mTOR inhibitors, i.e., temsirolimus and everolimus; a PD-1 inhibitor, i.e., nivolumab; and an EGFR inhibitor, i.e., erlotinib [24-27]. As mentioned above, the Notch pathway plays an important role in the occurrence and development of tumors. Our data also show that the Notch pathway genes might play a protective role in KIRC. Is there any relationship between the Notch pathway genes and the targeted drugs currently used to treat advanced renal cancer effectively? To clarify this question, we further studied the relationship between the Notch genes and the IC50 of 12 commonly used targeted drugs. A ridge regression model was built to predict the IC50 of the drugs using the GDSC database.
The Notch gene clusters are associated with responses to most targeted therapies. As shown in Figure 4, the sensitivity of each Notch gene cluster to each drug is as follows: for pazopanib, cluster 3 is better than cluster 2; for sorafenib, cluster 1 is better than cluster 2, and cluster 2 is better than cluster 3; for sunitinib, cluster 3 is better than cluster 1, and cluster 2 is better than cluster 1; for nilotinib, cluster 3 is better than cluster 2, and cluster 2 is better than cluster 1; for vorinostat, cluster 3 is better than cluster 1, and cluster 2 is better than cluster 1; for axitinib, cluster 3 is better than cluster 1, and cluster 2 is better than cluster 1; for gefitinib, cluster 2 is better than cluster 3, and cluster 1 is better than cluster 3; for temsirolimus, cluster 2 is better than cluster 3, and cluster 1 is better than cluster 3; for lapatinib, cluster 1 is better than cluster 2; for metformin, cluster 2 is better than cluster 1, and cluster 2 is better than cluster 3; for bosutinib, cluster 2 is better than cluster 3, and cluster 3 is better than cluster 1; and for tipifarnib, cluster 1 is better than cluster 3, and cluster 2 is better than cluster 3. Through our study, the correlation between the therapeutic effects of commonly used targeted drugs and the Notch genes can be better understood, which may be helpful for advanced KIRC treatment in the future. Correlation of Notch Pathway Genes with Potentially Targetable Classical Genes, Sirtuin Family Genes, and HDAC Family Genes. Histone acetylation and deacetylation play important roles in regulating gene expression. In addition to the classical histone deacetylases (HDACs) of classes I and II, there is also a special class of HDACs (class III HDACs, the Sirtuins). Based on the characteristics of their substrates, it is speculated that human Sirtuin proteins may be involved in regulating the balance between cell survival and death under stress conditions on the one hand and in metabolism regulation on the other, affecting development, differentiation, aging, and other physiological processes, and they are closely related to cancer [26]. SIRT5-mediated SDHA desuccinylation promotes clear cell RCC tumorigenesis [28]. In addition, members of the SIRT family are differentially expressed in RCC [29]. Deacetylation of the histone tail region causes DNA to bind more tightly to the histone core, preventing the promoter region from being activated and ultimately inhibiting transcription [30,31]. In our study, we further clarified the relationship between the Notch pathway genes and common oncogenes in KIRC, as well as the correlation of the Notch pathway genes with the Sirtuin family and HDAC family proteins. The results showed that AKT1, MYC, and VEGFA were highly expressed in cluster 3 (the exception being HRAS) but expressed at low levels in cluster 1 in KIRC, suggesting a role for these molecules as oncogenes. However, common tumor suppressor genes, such as VHL and PTEN, were also highly expressed in cluster 3 with low expression in cluster 1, much like the oncogenes (Figure 5(a)). In addition, the analysis of the relationship between the Notch pathway genes and the Sirtuin family genes showed that SIRT1 was highly expressed in cluster 3 but had low expression in cluster 1. SIRT2 and SIRT3 were highly expressed in cluster 2 but had low expression in clusters 1 and 3, with no significant difference. SIRT4, SIRT5, SIRT6, and SIRT7 were all highly expressed in cluster 3 but had low expression in cluster 1.
This suggests that the Sirtuin family plays different roles in KIRC (Figure 5(b)). Furthermore, the analysis of the relationship between the Notch pathway genes and the HDAC family genes showed that DNMT1, HDAC1-HDAC7, and HDAC9 were all highly expressed in cluster 3 but had low expression in cluster 1. HDAC8 and HDAC11 were highly expressed in cluster 1 but with low expression in cluster 3. HDAC10 had the highest expression in cluster 2 and higher expression in cluster 1 than in cluster 3. As with the Sirtuin family, this suggests that the HDAC family also plays different roles in KIRC (Figure 5(c)). Analysis of the Correlation between the Notch Pathway Genes and Immune Infiltration. The tumor microenvironment (TME) involves two-way interactions between tumor cells and stromal cells, forming a dynamic and complex network. During the development of tumor cells, immune escape occurs and further develops into metastasis, and the TME provides the necessary cellular and molecular environment for this dynamic process. Different degrees of immune cell infiltration exist in the tumor microenvironment of renal cancer, and the immune cell infiltration pattern is closely related to the survival and clinical stage progression of renal cancer. In the future, better targeted drugs may be developed according to the immune cell infiltration pattern [32-34]. In this study, we further explored the relationship between the 47 Notch pathway genes and 29 immune infiltration-related factors and cell types. The results show that PTCRA, MFNG, LFNG, Notch1-3, DTX3L, DTX2, CIR1, APH1A, and ADAM17 are positively correlated with most immune-infiltrating components. In contrast, SNW1, RFNG, NUMB, Notch4, MAML3, KAT2A, DVL2, DVL1, DLL1, and CTBP2 are negatively correlated with immune infiltration (Figure 6(a)). Most immune-infiltrating components were positively correlated with the Notch pathway genes, including the type-II IFN response, mast cells, DCs, and CCR. A few, including DCs, Tfh cells, Th2 cells, and T-cell coinhibition, showed a negative correlation with the Notch pathway genes (Figure 6(b)). Finally, we analyzed the correlation between four immune-infiltrating components (DCs, mast cells, T-helper cells, and the type-I IFN response) and the Notch score. The results show that DCs, mast cells, T-helper cells, and the type-I IFN response are positively correlated with the Notch score (Figures 6(c)-6(f)). KIRC Patient Prognosis Analyzed through LASSO Regression. Notch plays different roles in different tissues and cells and may promote or inhibit cancer depending on the tumor type and other signaling pathways. However, Notch has a cancer-promoting role in most tumors. Studies have suggested that the Notch expression level is associated with the prognosis of KIRC and that high Notch expression might indicate a poor prognosis [6,7]. In our study, we compared the expression of the 47 Notch pathway genes between normal kidney tissues and renal cancer tissues. The results show that the expression of 36 Notch pathway genes was abnormal in the 72 normal kidney tissues and 539 KIRC samples obtained from TCGA (Figure 7(a)). Further hazard ratio (HR) analysis revealed that 25 Notch pathway genes were associated with KIRC progression, 14 of which were significant, including KAT2B, NUMB, NUMBL, DVL3, and JAG1 (Figure 7(b)).
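For illustration, a per-gene hazard ratio screen of this kind can be sketched in R with the survival package as follows; expr, time, and status refer to the simulated placeholder objects from the LASSO sketch above, not to the actual TCGA tables.

```r
library(survival)

# One univariate Cox model per gene; collect HR, 95% CI, and p-value
hr_table <- t(sapply(colnames(expr), function(g) {
  s <- summary(coxph(Surv(time, status) ~ expr[, g]))
  c(HR    = unname(s$conf.int[1, "exp(coef)"]),
    lower = unname(s$conf.int[1, "lower .95"]),
    upper = unname(s$conf.int[1, "upper .95"]),
    p     = unname(s$coefficients[1, "Pr(>|z|)"]))
}))
head(hr_table)   # genes with p < 0.05 would proceed to model building
```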
Results of the gene coexpression analysis indicate that there are coexpression relationships among the Notch pathway genes (Figure 7(c)). We further used LASSO regression to establish a predictive model to analyze the value of the Notch pathway genes in predicting the prognosis of patients with KIRC. We selected 14 genes (HDAC1, JAG1, JAG2, MAML3, CREBBP, MAML2, CTBP2, DTX2, CTBP1, KAT2A, NUMBL, DVL3, DTX1, and NCSTN) as risk factors using the LASSO regression model (Figures 7(d) and 7(e)). All the KIRC cases were further divided into two groups, a high-risk group and a low-risk group, based on the best cut-off values of the risk scores. On this basis, we analyzed the differences in survival curves between the two groups and further analyzed the relationship between the grouping model and the pathological features of KIRC. Similar to the association between the Notch score and KIRC survival, the results show that KIRC survival is significantly better in the low-risk group than in the high-risk group (Figure 7). ROC analysis was used to assess the model's accuracy in predicting survival; an AUC value > 0.7 was considered predictive. Risk Score Was a Risk Factor in KIRC. According to the previous results of the HR analysis, the 14 Notch pathway genes used as risk factors were divided into risky and protective genes. Figure 8(a) shows that the high-expression group contains CREBBP, CTBP1, CTBP2, DTX2, JAG1, JAG2, KAT2A, MAML3, and NUMBL; the low-expression group includes DTX1 and HDAC1; and the no-significance group includes MAML2, NCSTN, and DVL3. The protective genes include CREBBP, CTBP1, CTBP2, DTX2, JAG1, and DVL3; the remaining genes were all risk genes. The results show that some of the highly expressed genes are protective genes and some are risk genes, while all low-expression genes are risk genes. This demonstrates, once again, that the Notch pathway genes may play multiple roles in the tumor. Univariate Cox regression analysis showed that age, grade, stage, T (tumor), M (metastasis), and the risk score were risk factors, while multivariate Cox analysis suggested that age, grade, stage, and the risk score were risk factors (Figures 8(b) and 8(c)). Finally, we analyzed the predictive effects of age, grade, stage, and risk score with respect to the 5-year, 7-year, and 10-year survival of KIRC patients using the nomogram of the model (Figure 8(d)). In conclusion, all the results suggest that the Notch risk score is a significant factor for predicting patient prognosis. The relevant online atlas of proteins (https://www.proteinatlas.org/) provided by Sjöstedt et al. [21] and Uhlén et al. [22] was used to verify the expression of CTBP1 and NUMBL, which are encoded by Notch pathway genes selected in the model, in KIRC. The results show that the expression of CTBP1 and NUMBL in KIRC was significantly higher than that in normal tissue (Figure 10), which suggests that the expression of the model proteins is consistent with the expression of the corresponding model genes. Discussion In 2000, Rae et al. compared renal cancer tissues with normal kidney tissue by differential PCR and found that transcript levels of Notch3 were increased in RCC, suggesting that it may be involved in the occurrence and progression of tumors [46]. Overexpression of Notch1 increases the risk of distant metastasis in stage T1 RCC [47].
Recently, through analysis of The Cancer Genome Atlas (TCGA), low expression of ADAM17, a key factor involved in the enzymatic processing of Notch, was found to suggest a good prognosis in patients with clear cell carcinoma, chromophobe cell carcinoma, or papillary cell carcinoma, indirectly indicating that the Notch pathway may affect the outcomes of patients with multiple types of RCC [48]. Specific inhibition of Notch1 in renal carcinoma cells reduced the level of the B-cell lymphoma/leukemia-2 (BCL-2) protein and increased apoptosis. Meanwhile, the phosphorylation of phosphatidylinositol 3-kinase (PI3K)/protein kinase B (Akt), which is involved in promoting cell growth and proliferation, decreased [49], suggesting that Notch1 simultaneously regulates the proliferation and apoptosis of tumor cells and that inhibiting Notch1 signaling may be a new therapeutic target. Mutation of the von Hippel-Lindau (VHL) gene, which is known to cause inactivation of its protein product, is the most common cause of renal clear cell carcinoma. However, in animal models, VHL deletion alone does not effectively induce renal carcinoma. Moreover, overexpression of the Notch1 intracellular domain in VHL-knockout mice revealed accumulation of intracellular fat, cytoplasmic dysplastic nests, and upregulated expression of Hey1 and Hey2 downstream of the Notch pathway, similar to human renal clear cell carcinoma. This indicates that abnormal Notch1 signaling is involved in the pathogenesis of early renal cancer [50]. Angiogenesis and tumor stem cell mechanisms play a role in the pathogenesis of RCC. Multiple studies have shown that expression of the Notch-related ligand DLL4 is elevated in surgical specimens of renal clear cell carcinoma and is an independent prognostic factor. The expression of DLL4 was 9 times higher in the vascular endothelium of RCC tissues than in normal renal tissue. The DLL4/Notch/Hey1/matrix metalloproteinase (MMP)-9 cascade promotes distant metastasis of the tumor, and lentiviral short hairpin RNA (shRNA) specifically silencing DLL4 in mice significantly inhibited the growth of transplanted tumors [51-53]. CD133+/CD24+ cells with tumor stem cell properties were isolated from the renal carcinoma cell lines ACHN and AKI-L and treated with a Notch pathway inhibitor (MRK-003). The expression of stemness markers, such as copper transporter 2 (CTR2), BCL-2, octamer-binding transcription factor 4 (OCT4), Kruppel-like factor 4 (KLF4), and the multidrug resistance gene (MDR1), was downregulated; the abilities of self-renewal, tumor formation, invasion, and migration were reduced; and the sensitivity to sorafenib and cisplatin increased [54]. In another study, inhibition of the Notch signaling pathway by the γ-secretase inhibitor LY3039478 in 769-P and AKI-L cells (cell lines originating from highly aggressive RCC) resulted in slowed cell proliferation and downregulated expression of Myc and Cyclin A1. Thus, Notch may be a new therapeutic target for advanced renal cancer [55]. Figure 10: Immunohistochemical images obtained from the online atlas of proteins (https://www.proteinatlas.org/) for CTBP1 and NUMBL, which are representative of the two gene groups (*p < 0.05, **p < 0.01, ***p < 0.001). In our study, we first investigated the expression of Notch pathway genes in 32 cancers and their differential expression levels.
We found that in the 32 types of malignant tumors, most tumors showed no CNV gain or loss of the Notch genes, and most CNV frequencies fluctuated around 0.2 (Notch genes with a CNV gain frequency > 0.04 and Notch genes with a CNV loss frequency > 0.05). In addition, we found that most of the Notch pathway genes had SNVs in UCEC, with SNV frequencies almost all greater than 0.04. Some of the Notch pathway genes in DLBC, SKCM, COAD, STAD, BLCA, CESC, ESCA, HNSC, LUAD, and READ had SNVs with frequencies greater than 0.04. The SNV frequency of the Notch pathway genes in the remaining tumors was less than 0.02. We further examined mutations of the Notch pathway genes in KIRC and found that the CNV and SNV frequencies of the Notch genes in KIRC were low, fluctuating between 0 and 0.02. To determine whether there are compounds that may target or regulate the Notch pathway genes, we identified 20 related compounds that acted on 16 types of malignant tumors by compound enrichment score. Furthermore, we used the CMap MoA analysis to examine the mechanisms of action of six compounds and found that each of the six compounds had an independent mechanism of action. We further divided the Notch pathway genes into protective and risky genes using information from TCGA and investigated the relationship between Notch pathway gene expression and patient survival. We found that most genes of the Notch pathway had no prognostic value in tumors; of the remainder, most were risk genes and a few were protective genes. In KIRC, we found an interesting phenomenon in which the numbers of protective and risk genes were both 12, which was also consistent with previous reports. These results indicate that Notch pathway gene expression is stable in renal clear cell carcinoma, without obvious mutations. In addition, the risk genes and protective genes were evenly matched in KIRC, indicating that the occurrence of KIRC may be related to the balance between the expression levels of the risk and protective genes: if the risk genes are dominant, renal cancer may occur, whereas the opposite holds when the protective genes are dominant. We further studied the relationship between the Notch pathway genes and KIRC according to the mRNA expression levels of the Notch pathway genes obtained from TCGA and found that there were significant differences in the survival rates of KIRC patients among the three clusters: cluster 3 was better than cluster 2, and cluster 2 was better than cluster 1. Most Notch genes were highly expressed in KIRC, and patients with high Notch scores survived for longer. These data suggest that Notch may play a role as a tumor suppressor gene in KIRC. Finally, using information from TCGA, we analyzed the relationship between the Notch scores and the clinicopathological features of KIRC. We found that the Notch scores were significantly related to T (tumor), stage, metastasis, and survival status (fustat) of KIRC, and a higher Notch score was associated with lower tumor grade and stage and a better prognosis. This further suggests a protective role of the Notch pathway in KIRC. Based on previous reports on the mechanism of Notch in KIRC and our analysis of the relationship between the Notch pathway genes and the survival of KIRC patients, we found that most Notch pathway genes have protective effects in KIRC. Presently, targeted drug therapy is the first-line treatment for patients with advanced renal cancer, and its therapeutic effect has been widely recognized. There are also some reports on the effect of targeting the Notch pathway in cancer treatment.
Therefore, we performed a GDSC analysis to evaluate, in KIRC therapy, the effects of several commonly used targeted drugs in relation to the Notch pathway genes. Our findings are expected to lead to a better understanding of the correlation between the therapeutic effects of commonly used targeted drugs and the Notch genes, which may be helpful for advanced KIRC treatment in the future. Our results show that there is a correlation between the IC50 and the Notch score for some drugs, while there is no significant correlation for others, which may be related to differences between the targets of the drugs and the abnormal Notch pathway genes in renal cancer. The lower the IC50, the more effective a drug is expected to be, provided it has a target that matches or inhibits the Notch genes that may drive KIRC. Currently, there are many studies on cancer immunotherapy, and it is gradually gaining acceptance that cancer may be treated by intervening in histone acetylation and modulating T cell killing [56,57]. Furthermore, for KIRC immunotherapy to be successful, it is important to clarify the mechanisms of action of classical proto-oncogenes, tumor suppressor genes (KRAS, VHL, etc.), and immune-related genes, especially histone acetylation, in KIRC and their relationship with the Notch pathway genes (three clusters). Our results show that HRAS, AKT1, MYC, and VEGFA are highly expressed in cluster 3 but had low expression in cluster 1 in KIRC, suggesting their role as oncogenes. However, common tumor suppressor genes, such as VHL and PTEN, were also highly expressed in cluster 3 but had low expression in cluster 1, similar to the oncogenes. Based on the characteristics of their substrates, it is speculated that human Sirtuins may be involved in regulating the balance between cell survival and death under stress conditions on the one hand and in metabolism regulation on the other, affecting development, differentiation, aging, and other physiological processes, and they are closely related to cancer [58,59]. In addition, the analysis of the relationship between the Notch pathway genes and the Sirtuin family genes showed that SIRT1 was highly expressed in cluster 3 but had low expression in cluster 1. SIRT2 and SIRT3 were highly expressed in cluster 2 but had low expression in clusters 1 and 3, with no significant difference. SIRT4, SIRT5, SIRT6, and SIRT7 were all highly expressed in cluster 3 but had low expression in cluster 1. This suggests that the Sirtuin family plays different roles in KIRC. Furthermore, the analysis of the relationship between the Notch pathway genes and the HDAC family genes showed that DNMT1, HDAC1-HDAC7, and HDAC9 are all highly expressed in cluster 3 but had low expression in cluster 1. HDAC8 and HDAC11 were highly expressed in cluster 1 but with low expression in cluster 3.
Studying Cortical Plasticity in Ophthalmic and Neurological Disorders: From Stimulus-Driven to Cortical Circuitry Modeling Approaches Unsolved questions in computational visual neuroscience research are whether and how neurons and their connecting cortical networks can adapt when normal vision is compromised by a neurodevelopmental disorder or damage to the visual system. This question on neuroplasticity is particularly relevant in the context of rehabilitation therapies that attempt to overcome limitations or damage, through either perceptual training or retinal and cortical implants. Studies on cortical neuroplasticity have generally made the assumption that neuronal population properties and the resulting visual field maps are stable in healthy observers. Consequently, differences in the estimates of these properties between patients and healthy observers have been taken as a straightforward indication of neuroplasticity. However, recent studies imply that the modeled neuronal properties and the cortical visual maps vary substantially within healthy participants, e.g., in response to specific stimuli or under the influence of cognitive factors such as attention. Although notable advances have been made to improve the reliability of stimulus-driven approaches, the reliance on the visual input remains a challenge for the interpretability of the obtained results. Therefore, we argue that there is an important role in the study of cortical neuroplasticity for approaches that assess intracortical signal processing and for circuitry models that can link visual cortex anatomy, function, and dynamics. Introduction Unravelling the organization of the visual cortex is fundamental for understanding the foundations of vision in health and disease. A prominent feature of this organization is the presence of a multitude of visual field maps. These maps are spatially and hierarchically organized representations of the retinal image and are often specialized to encode specific environmental visual attributes. Studying these cortical visual maps is relevant as it enables the characterization of the structure and function of the visual cortex and therefore the study of the neuroplastic capacity of the brain. With the latter, we refer to the ability of the brain to adapt its function and structure in response to either injury or a treatment designed to recover visual function. Over the last two decades, visual field mapping has been extensively used to infer neuronal reorganization resulting from visual field defects or neuro-ophthalmologic diseases; for a review, see Wandell and Smirnakis [4]. Because of its focus on the analysis of individual participants and the relative amount of detail provided, the population receptive field (pRF) model (see Box 1) seems ideal, at least in theory, for studying questions on neuroplasticity. Some of the hypotheses that can be tested with pRF mapping are as follows: are the neurons within the lesion projection zone active? Is there a displacement in position or an enlargement of the pRF size during development or following a retinal or cortical lesion? Do the pRF properties change in response to monocular treatments that promote the use of the amblyopic eye, e.g., patching or blurring therapy? Given that visual neuroplasticity is greatest during the early stages of development (childhood), the characterization of pRF properties is especially relevant for determining, in vivo, the presence of atypical properties of the visual cortex during development and plasticity.
In particular, changes in pRF size have been reported in a series of studies on developmental disorders. Clavagnier and colleagues measured enlarged pRF sizes in primary visual areas (V1-V3) in the cortical projection from the amblyopic eye as compared to the fellow eye [5]. Schwarzkopf and colleagues reported that individuals with autism spectrum disorder (ASD) have larger pRFs compared to controls [6]. Anderson and colleagues found smaller pRF sizes in the early visual cortex of individuals with schizophrenia compared to controls, using a specific pRF model that takes into account the center-surround structure of the RF [7]. In the case of congenital visual pathway abnormalities that affect the optic nerve crossing at the chiasm, e.g., achiasma, albinism, and hemi-hydranencephaly, several studies revealed overlapping visual fields and bilateral, vertically symmetric pRF representations [8-12]. This contrasts with the case of a single patient who had her left hemisphere removed at the age of three and who did show the expected right hemifield blindness, even though she had larger representations of the central visual field in extrastriate visual maps, which was particularly apparent in area LO1 in the right hemisphere [13]. Hence, the pRF modeling approach has been applied with at least some degree of success to reveal neuroplastic changes at the level of the visual cortex. Nevertheless, in the present paper, we will briefly indicate issues with the current pRF approach as it relates to neuroplasticity and ways to improve the methods. Finally, we will argue that we should also look beyond it to fully address questions on neuroplasticity. Box 1: Noninvasive measurement of receptive fields. The visual maps result from a combination of the receptive fields (RFs) of individual neurons. In vision, an RF corresponds to the portion of the visual field that a neuron responds to. A fundamental property of the visual cortex is that visual neurons are retinotopically organized (neighboring visual neurons respond to nearby portions of the visual field). Currently, it is not possible to measure the activity of single neurons noninvasively; however, noninvasive neuroimaging techniques, such as functional magnetic resonance imaging (fMRI), combined with computational neural models, have been used to characterize RF properties at a larger scale. Briefly, fMRI uses a magnetic field to detect changes in blood oxygenation, a proxy of neural activity. This activity is coupled to oxygen consumption, which is why fMRI is also called blood oxygen level-dependent (BOLD) imaging. In fMRI, a standard voxel of 3 mm³ captures the aggregate activity of ~1 million neurons [1,2]. Therefore, the notion of the RF is extended to the collective RF of a population of neurons, the population receptive field (pRF). By applying biologically plausible models to describe the structure of this collective RF at a recording site, pRF mapping became a popular technique for the detailed characterization of visual cortical maps at the level of neuronal populations [3]. In essence, this method models the pRF as a two-dimensional Gaussian, of which the center and width correspond to the pRF's position and size, respectively. The model pipeline and description are presented in Figure 1. Figure 1: The population receptive field (pRF) modeling procedure. A pRF model describes, per voxel, the estimated pRF properties position (x, y) and size (σ). A voxel's response to the stimulus is calculated as the overlap between the stimulus mask (the binary image of the stimulus aperture: a moving bar) at each time point and the receptive field model. Following this, the delay in the hemodynamic response is accounted for by convolving the predicted time courses with the hemodynamic response function. Finally, the pRF model parameters are adjusted for each voxel to minimize the difference between the prediction and the measured BOLD signal. The best-fitting parameters are the output of the analysis. Figure adapted from Dumoulin and Wandell [3].
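The following minimal R sketch illustrates the forward model summarized in Figure 1: a 2D Gaussian pRF whose overlap with a binary bar aperture generates a predicted time course. The grid size, bar width, sweep, and pRF parameters are arbitrary toy choices for illustration only.

```r
# Minimal pRF forward-model sketch with toy dimensions
n  <- 101
xs <- seq(-10, 10, length.out = n)           # visual field (degrees)
grid <- expand.grid(x = xs, y = xs)

prf <- function(x0, y0, sigma) {             # 2D isotropic Gaussian pRF
  exp(-((grid$x - x0)^2 + (grid$y - y0)^2) / (2 * sigma^2))
}

T_len <- 60
stim  <- matrix(0, n * n, T_len)             # one binary mask per time point
for (t in seq_len(T_len)) {                  # a bar sweeping left to right
  in_bar <- abs(grid$x - (-10 + 20 * t / T_len)) < 1
  stim[in_bar, t] <- 1
}

g    <- prf(x0 = 2, y0 = 0, sigma = 1.5)     # candidate pRF parameters
pred <- as.numeric(g %*% stim)               # stimulus-pRF overlap per TR
# A full pipeline convolves `pred` with an HRF and fits (x0, y0, sigma)
# per voxel by minimizing the difference to the measured BOLD signal.
```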
Limitations of Current Stimulus-Driven Approaches When Studying Neuroplasticity We address the question of to what extent population receptive field mapping is actually a suitable tool to capture cortical plasticity. We point out various limitations. The most important one is that the assumption of receptive field and map stability in healthy controls is largely untenable. The most common and straightforward manner in which the pRF approach has been applied is to compare model parameters between either two groups of participants (usually a patient group and matched controls [8,14]) or between the affected eye and the normal fellow eye, which can be done in the case of monocular developmental conditions such as amblyopia [5]. In both types of studies, it is commonly assumed that the differences in pRF estimates are caused by differences in the brain organization and eye-brain connectivity of the two groups or the two eyes. However, there are various issues that complicate the interpretation of pRF differences in health and disease. A number of these limitations were recently discussed by Dumoulin and Knapen [15], and for this reason, we will only reiterate the most critical ones. Changes at the Level of the Eye Limit the Use of pRF Mapping to Study Neuroplasticity in Both Ophthalmic and Neurological Diseases. Estimates of pRFs are based on the stimulus input. In numerous ophthalmic diseases, changes at the level of the eye, such as cataract or retinal lesions, strongly modify the visual input. This could be a decrease in visual acuity or contrast sensitivity, or the complete loss of vision in part of the visual field. Consequently, in many such diseases, the stimulus-driven input to the brain will be different and usually deteriorated. In neurological conditions such as hemianopia, retrograde degeneration of the retina [16,17] gives rise to a similar concern. As changes in the visual input have a direct effect on the signal amplitude, straightforward differences in the BOLD signal cannot be taken as an indicator of neuroplasticity or degeneration at the level of the cortex. The retinotopic maps of healthy adults with normal or corrected-to-normal vision are stable over time when measured under similar environmental and cognitive conditions [18,19]. Hence, it would appear that changes in maps or population properties should be a good indication of the presence of neuroplasticity. Indeed, it was found that in patients with long-term visual impairment due to macular degeneration, the pRFs of voxels representing both the scotomatic area and neighboring regions are displaced and changed in size [20]. However, there is mounting evidence that simple stimulus manipulations, e.g., masks mimicking retinal lesions, can have a large effect on the population receptive field estimates in healthy participants.
Estimated pRF properties (position shifts and scaled sizes), similar to those in patients with retinal lesions, were observed in healthy adults in whom a visual field defect was simulated [20-22]. Comparable shifts in pRF position and scaling of pRF size were also found in an experiment that used scotopic illumination levels to examine the "rod scotoma" in the central visual field [23]. In other words, changes in visual input can mimic the consequences of lesions due to ophthalmic disease in healthy observers. This implies that observed differences in pRF properties in patients relative to controls may simply reflect normal responses to a lack of visual input rather than a reorganization of the visual cortex. Therefore, just by themselves, changes in pRF measures are insufficient to decide on the presence of neuroplasticity. The feasibility of using pRF estimates to topographically map visual field defects in the cortex, particularly in early-stage disease, is further complicated by two aspects. First, neurons near the border of either the scotoma or the edge of the visual stimulus field may be partially stimulated. In such cases, the stimulus aperture partially activates receptive fields that belong to voxels whose pRF center would ordinarily be outside the stimulus presentation zone [21,24]. Second, the presence or absence of a scotoma affects mostly the signal amplitude, while the temporal dynamics of the modulation pattern are not affected. As pRF estimates are mostly invariant to the BOLD amplitude, the pRF model does not properly capture the effect of the scotoma. These two factors induce biases in the pRF estimates that can be wrongly interpreted as signs of neuroplasticity (see Box 2). Nevertheless, changes in the BOLD signal may be used as an alternative assessment for nonfunctional parts of the visual system in patients who are unable to perform standard ophthalmic examinations, e.g., infants or patients with nystagmus [25-27]. However, because of the above aspects, caution is warranted when interpreting such data. Eye movements may affect the pRF estimates substantially, resulting in noisy maps and increased pRF sizes [28-30]. This is particularly relevant for developmental disorders such as amblyopia [5, 31-33]. In addition, pRF mapping is most accurate at an advanced stage of ophthalmologic disease, where the visual field defects are relatively large and the scotomatic edge (i.e., the transition between healthy visual cortex and damaged visual cortex) is sharp [34,35]. Overall, this inability to accurately detect small visual field defects implies that the sensitivity of the pRF approach is too limited to monitor the effects of slow retinal degeneration or slow cortical changes that would presumably be associated with rehabilitation therapies or other procedures to restore visual functioning. Different Stimulus Properties Result in Distinct pRF Properties in Healthy Human Observers. An additional factor to be considered when interpreting pRF estimates is that the pRF represents the cumulative response across all neuronal subpopulations within a voxel. These subpopulations are selectively sensitive to stimulus properties, such as orientation, color, luminance, and temporal and spatial frequencies. Hence, their activity can be driven by specific stimuli. In pRF mapping, manipulating the carrier (the stimulus aperture content which drives the neuronal activity) elicits responses from a particular neuronal population.
Box 2: A bias in pRF estimates induced by the presence of real and simulated scotomas. To show how the presence of a scotoma may affect the pRF estimates, we simulated pRF behavior in healthy vision (absence of a scotoma) and in the presence of a scotoma (due to either a retinal or a cortical lesion). The simulated circular scotoma is located on the horizontal meridian at 5 degrees of eccentricity and has a 3-degree radius. Figures 2(a) and 2(d) depict the overlap between the pRF model (in red) and the stimulus in the absence and presence of a scotoma (the circular region within the bar aperture), respectively. Figures 2(b) and 2(e) show the respective simulations of the predicted pRF response resulting from convolving the stimulus with the pRF model (first part of Figure 1) and the subsequent addition of noise. A similar level of noise was added to both simulations; the noise simulates any nonbiological signals captured with MRI. Note that the modulation pattern of the time series differs between the two conditions only on the basis of the artificial noise added. The difference is mostly visible in the signal amplitude (note the different scales of the y-axes). When applying the pRF model, we need to define a stimulus mask which, ideally, should match the stimulus displayed during retinotopic mapping. Figure 2(c) shows the pRF-estimated properties in the absence of a scotoma. Figures 2(f) and 2(g) depict the pRF estimates in the presence of a scotoma, using a stimulus mask that does not (Figure 2(f)) and that does (Figure 2(g)) take the scotoma into account. When we model the stimulus mask without taking the scotoma into account, this results in a bias, as pRFs are enlarged and displaced towards the border of the artificial lesion projection zone (Figure 2(f)). When the presence of the scotoma is taken into account in the pRF model, the estimated properties of the pRF closely match the simulated ones. Note that the variance explained of the pRF estimates in the three situations (normal vision (Figure 2(c)), lesion modelled without a scotoma (Figure 2(f)), and lesion modelled with a scotoma (Figure 2(g))) is very similar. This shows that the pRF mapping approach is invariant to the BOLD amplitude, which hinders the detection of small scotomas. Additionally, in clinical cases where the extent of the scotoma is not fully established, it is thus impossible to accurately account for the presence of a scotoma in the pRF mapping. By selectively stimulating these neuronal populations, a number of recent studies have shown that, compared to the standard stimulus (a flickering luminance contrast checkerboard bar), pRF estimates shift in position and change their size [36-39]. These studies indicate that the recruitment of neural resources depends on the task and that the retinotopic maps depend on the task or stimulus. This type of stimulus selectivity captures the neuronal population characteristics for features such as luminance, orientation, or words. In contrast, Welbourne and colleagues [40] found no difference in pRF estimates when using chromatic and achromatic stimuli. This implies that for color, there may be a decoupling between the pRF measurement and the underlying neuronal populations [40]. The spatial distribution of the receptive fields can also be modulated by attention. A series of studies manipulating spatial and feature-based attention found that neuronal resources are shifted towards the attended positions [30,41,42].
These findings imply one of two things: (1) the topography of the visual cortex is flexible and may change in response to environmental (stimulus, task) as well as cognitive factors such as attention, or (2) pRF measures are inaccurate and may change in response to spatial and cognitive factors. Either explanation limits the ability of the pRF approach to provide a straightforward assessment of neuroplasticity. Improving Stimulus-Driven Approaches We consider various ways in which the pRF method might be improved to study neuroplasticity. Of note are models that provide information on the reliability of the pRF-estimated properties. As a further incentive, we propose a new pRF model that incorporates cortical temporal dynamics and integrates connectivity and topography. Given the limitations mentioned above, the question arises whether and how the pRF approach can be modified to render it more suitable for tracking neuroplastic changes. As was indicated, mimicking visual field defects can alter pRF properties in healthy observers in a manner similar to that seen in patients. At the minimum, this requires creating elaborate control stimulus conditions (simulations) that exactly mimic patient conditions. Unfortunately, this is often impossible to achieve. Deviations of parameter estimates in the patient group from those control values could be an indication of neuroplasticity. However, obtaining good simulations is not trivial. Thus far, the simulations that have been used have generally been quite simple, i.e., mimicking scotomas in which no light sensitivity remains, usually simulated as a region without signal modulation. However, the perceptual awareness of natural scotomas may be substantially different from that of artificial ones. For example, when the visual input is incomplete, the visual system appears to fill in any missing features (through prediction and interpolation) in order to build a stable percept. Moreover, scotomas in patients are usually more complex than simulated ones, both in their shape and in their depth (reduced sensitivity). Finally, the scotoma may also change the attentional deployment of the patient, potentially affecting the estimated pRF properties [30,41,42]. In order to accurately measure neuronal reorganization, it is crucial to overcome the abovementioned limitations. A significant amount of work has been directed towards the development of more reliable models of retinotopic mapping. The methodological advances serve three different goals, which may be useful in studying neuroplasticity: (1) improving the reliability of the estimates using more informative pRF shapes and more complex computational models, (2) measuring stimulus-selective maps, which allow capturing the reorganization of specific neuronal populations, and (3) measuring the spatial modulation and dynamics of neuronal populations, potentially reflecting short-term neuroplastic changes. Computational and Model Advances. Computational and model advances have been made to (a) improve the pRF shape so that it better reflects the biological structure of the RF (e.g., using a difference-of-Gaussians model allows one to account for surround suppression [43]) and (b) account for nonlinearities, provide distributions of property magnitudes, and capture neuronal characteristics, such as tuning curves. Such models add new pRF features which may be important for inferring functional reorganization, and they provide a measure of the reliability of the estimates. A different pRF shape can be an indication of neuroplasticity.
Several models have been developed to account for various possible receptive field shapes: circularly symmetric difference-of-Gaussians (DoG) functions [43], bilateral pRFs [10], elliptical shapes [34], Gabor wavelet pyramids [34,44], and compressive spatial summation [45]. Some reviews have discussed these methods in detail [15,46]. However, the above models all assume some form of symmetry. Recently, data-driven models were developed that do not assume any a priori shape [47-49]. These model-free approaches are particularly relevant for measuring the functioning of the visual system in patients, as plasticity may manifest as a differently shaped pRF without affecting its position or size. An example is that asymmetrical shapes best capture the pRF properties of any skewed distribution of RFs within a voxel. However, even in these data-driven approaches, the estimated shape of the receptive fields remains dependent on the stimulus used. Extending the pRF model to account for more complex RF shapes will improve its explanatory power (the model can better predict the BOLD response). However, this will not remove the issue of model bias mentioned in Box 2. In various attempts to resolve this, computational advances were made which can be categorized into four different classes. The first class comprises nonlinear pRF models, such as the compressive spatial summation model and convex optimized pRFs, which substantially increase the range of shapes that the model can describe [45]. The second class is the development of Bayesian models. For each property, these models estimate not only the best-fitting value but a full posterior distribution as well [50,51]. This serves several needs: (a) it indicates the uncertainty associated with each estimate (Figure 3); such uncertainty maps are of particular importance when a visual field defect is present, as higher uncertainty will most likely be associated with model biases; (b) it facilitates the statistical analysis; and (c) it allows one to incorporate additional biological knowledge by providing prior information. An example of such a biologically based prior is that the density of cortical neurons is higher in the fovea than in the periphery [50,51]. In combination, these three factors improve the interpretability of pRF estimates. The third class comprises the development of feature-weighted receptive field (fwRF) models, which allow capturing additional pRF parameters, such as neuronal tuning curves (e.g., spatial frequency tuning), through the combination of measured neural activity and visual features [52]. Finally, the fourth class relates to methods that enhance the resolution at which we can detail RF properties. Of relevance are the approaches that allow estimating the average single-unit RF size (suRF) [49,53] or multiunit RF (muRF) properties, which can uncover without restriction the size, position, and shape of neuronal subpopulations, even when these are fragmented and dispersed in visual space [49,53]. Models of Perception: Spatial Modulation and Dynamics. Specific models have been developed to capture short-term plasticity. Such models take into account cognitive and/or perceptual factors such as attention [30,54] or crowding [55,56] to understand changes in observed spatial properties or perception. Recently, Dumoulin and Knapen proposed a more complex pRF model that relates pRF changes to the underlying neural mechanisms [15].
This very general model allows modeling and predicting dynamic changes that result from changes in the visual input. In particular, they proposed an extension of the pRF model to account for multiple neural subpopulations responding to different properties of the stimulus. Their expectation is that this will enable unravelling of the different sources of pRF plasticity. Although there have been significant improvements in pRF models which may be able to aid in charting neuroplastic changes, in our view this is still insufficient. There are still many constraints to be addressed, in particular the fact that a voxel may contain a mixture of neurons with spatially distinct receptive fields. This is particularly relevant in developmental disorders such as albinism and achiasma [9,10], or for voxels located in sulci. In those cases, the measured pRF properties will either represent the strongest contributing RF or be erroneously large. In our view, the neuronal spatiotemporal dynamics can be better captured if we take into account the interactions with nearby linked populations. The connectivity-weighted pRF, described next, is a first attempt to integrate models of cortical organization with cortical connectivity. This further encourages the development of new models that integrate stimulus- and cortex-referred methods.

The Connectivity-Weighted pRF Integrates Cortical Organization and Connectivity.

Current analytical approaches to track retinotopic changes are voxel based. This limits their accuracy, as the visual system is dynamic and the activity of one population of neurons is influenced by nearby connected populations. Ideally, a more complete model should reflect the balance between inhibitory and excitatory processes and account for various cortico-cortical interactions.

Figure 3: (a) visual field maps obtained with the two pRF estimation methods compared in [50,51]. Both methods result in similar visual field maps; however, the latter method also enables the estimation of the uncertainty associated with each parameter. (b) Eccentricity, phase, and pRF size uncertainty maps obtained for the left hemisphere of a single healthy participant. The uncertainty maps describe how reliable each estimate is; for example, the polar angle estimates for the central fovea (near fixation) are less reliable than those measured in the periphery. The uncertainty associated with each estimate was calculated as the difference between the 75% and 25% quantiles of the Bayesian Markov chain pRF distribution.

Here, as an example of such a model, we propose a stimulus-driven pRF model in which the estimated parameters, pRF_j, depend upon the unique activity of the neuronal population, pRFu_j, and the activity of interacting cortical neuronal populations, weighted by the strength of their connections, C_jk:

pRF_j = pRFu_j + Σ_k C_jk pRF_k + e_j,

where e_j is the error associated with voxel j. Depending on the goal of the study and the design of the experiment, the connectivity (C) can be based either on structure (anatomically connected neighbors), on function (neuronal populations which exhibit specific correlated activity during the resting state), or on effective connectivity [57]. Here, we treat it as effective connectivity, given that it accounts for dynamic interactions and the model of coupling between neuronal populations. Such a model can describe the spatiotemporal dynamics of neuronal populations. It is sensitive to the recurrent flow of synchronized activity between connected neurons.
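A minimal numerical sketch of this idea follows. It illustrates the additive form above under an assumed least-squares fit, not an implementation from the literature; the function names and the array shapes (a time x K matrix of neighbor drives) are our own choices.

import numpy as np

def cw_prf_predict(own_drive, neighbor_drives, c_weights):
    """Connectivity-weighted prediction for voxel j: its own pRF-driven
    time course pRFu_j plus the drives of connected populations k,
    weighted by the connection strengths C_jk."""
    return own_drive + neighbor_drives @ c_weights          # (time,)

def fit_connection_weights(y, own_drive, neighbor_drives):
    """Least-squares estimate of the C_jk, holding the voxel's own pRF fixed."""
    c, *_ = np.linalg.lstsq(neighbor_drives, y - own_drive, rcond=None)
    return c

# Shapes: y and own_drive are (time,); neighbor_drives is (time, K).

In use, one would alternate between updating each voxel's own pRF parameters and re-estimating the connection weights C_jk until the joint fit converges.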
Using such a connectivity-weighted model, we may, in the future, assess brain plasticity based on both structural reorganization and functional reorganization.

Cortical Circuitry Models Look beyond the Stimulus

We suggest that models that can be estimated without requiring visual stimulation, which we refer to as cortical circuitry models (CCMs), may be highly suitable for measuring cortical reorganization. While not without potential pitfalls themselves, such approaches avoid many of the complications associated with the stimulus-driven pRF approach. Additionally, we indicate various other avenues that may improve our ability to quantitatively assess neuroplastic changes in the visual cortex.

Studying Neuroplasticity Using Intrinsic Signals and Cortical Circuitry Models.

The fMRI signal is a mixture of stimulus-specific and intrinsic signals [57,58]. As a result, it is plausible to assume that intrinsically generated signals may influence stimulus-driven signals [57,58]. Therefore, the study of brain plasticity may be improved and/or complemented if the dependence on stimuli is reduced. For this reason, estimates based on intrinsic signals rather than task responses are potentially a very suitable source of information on the presence or absence of cortical plasticity. Intrinsic signals are commonly obtained in a "resting-state" condition, in which participants are not required to do anything in particular and usually have their eyes closed. Resting-state fMRI signal fluctuations have been shown to correlate across anatomically and functionally connected areas of the brain. In particular, specialized networks have been found in cortical and subcortical areas in sensory systems [59-64]. Based on resting-state data, CCMs can be used to infer the integration of feedback and feedforward information [65]. However, one important limitation is that, currently, the directionality of information flow cannot be directly inferred from the BOLD signal. Therefore, primarily because of the limited temporal resolution of fMRI, it remains to be determined whether CCMs can be used to assess this aspect. Nonetheless, CCMs have the potential to capture the effects of structural reorganization and can inform us about which neural circuits have the potential to reorganize and which are stable. An example of this type of model is the connective field (CF) model, which applies the notion of a receptive field to cortico-cortical connections [66]. Another example is the connectopic model, which combines voxelwise connectivity "fingerprints" with spatial statistical inference to detail multiple overlapping connection topographies (connectopies) in the human brain [66,67]. Ultimately, in our view, it will be essential to combine retinotopic and neural circuitry models, such that their combination can be used to fully describe the dynamics of the visual cortex [68]. To accomplish this, models will have to be developed that can capture the (dynamic) adaptation of feedback, feedforward, and lateral connections in the functional networks underlying visual processing and cognition. Such models may be implemented by calculating the correlation between neuronal populations while taking time lags into account, or by using CCMs to describe connections across cortical layers (see also below).

The Connective Field Defines a Receptive Field in Cortical Surface Space.

Connective field (CF) modeling predicts the neuronal activity in a target area (e.g., V2) based on the activity in a source area (e.g., V1).
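A minimal sketch of this prediction step, anticipating the formal description in Box 3 below, is given here. The grid search, the geodesic distance matrix (e.g., precomputed with Dijkstra's algorithm on the cortical mesh), and the correlation score are illustrative choices, not the published implementation.

import numpy as np

def cf_predict(dists, sigma, source_ts):
    """CF prediction p(t): Gaussian weights over cortical-surface distances
    from a candidate center, applied to the source time series a(v, t)."""
    g = np.exp(-dists ** 2 / (2.0 * sigma ** 2))    # (n_source,)
    return g @ source_ts                            # (time,)

def fit_cf(y, dist_matrix, source_ts, sigmas):
    """Grid search over candidate centers v0 and spreads sigma, scoring each
    prediction by its Pearson correlation with the target voxel's y(t)."""
    best = (-1, 0.0, -np.inf)                       # (center index, sigma, score)
    for v0 in range(dist_matrix.shape[0]):
        for sigma in sigmas:
            p = cf_predict(dist_matrix[v0], sigma, source_ts)
            r = np.corrcoef(p, y)[0, 1]
            if r > best[2]:
                best = (v0, sigma, r)
    return best

# dist_matrix holds geodesic distances (mm) between source-surface vertices;
# source_ts is (n_source, time), y is the target voxel's measured BOLD signal.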
In a similar way that a neuron has a preferred location and size in visual space (its receptive field), it also has a preferred location and size on the cortical surface of a region that it is connected with [65,66,68]. Based on retinotopic mapping, the visual field coordinates of the target area can be inferred from the preferred locations in the source region. In this way, the connective field, when combined with pRF mapping, can link a CF's position in cortical surface space also to a position in visual space. The connective field model is briefly described in Box 3.

There are several advantages of CCMs when compared to pRF models. First, the ability to assess and compare the fine-grained topographic organization of cortical areas promotes the comparison of connectivity patterns between groups of participants with different health conditions and between experimental conditions [67,70]. Second, CCMs can even be applied to data acquired in the absence of any sensory input, enabling the reconstruction of visuotopic maps without a stimulus, even in blind people. Several studies have shown that cortical connectivity during the resting state reflects the visuotopic organization of the visual cortex [65,67,70-73]. A comparison between stimulus-driven and resting-state CCMs may also convey information on the influence of retinal waves and prior visual experience on the cortical circuitry. For example, larger CF sizes were measured with visual stimulation than in the resting state [65,73,74]. Third, CCMs provide insight into the anatomical and functional neuronal circuitry that enables the visual system to integrate information across different cortical areas. They can reveal the presence or absence of a change therein following a disease [74-76]. Fourth, CCMs, in particular when assessed in the resting state, are less affected by various intrinsic and extrinsic factors such as the type of task and stimulus [37-39], patient performance, the optical properties and health condition of the eye [77], or stimulus-related model-fitting biases [22,77]. Despite these important advantages, the current CCM approaches also have their limitations. First, the reliability of CCM parameters, such as the CF size, is affected by the signal-to-noise ratio. Fortunately, the signal-to-noise ratio does not introduce a systematic bias in the estimated parameters [74-76]. Second, the current iteration of CCMs does not capture causal interactions between different cortical visual areas. Third, as for pRF estimates, it is likely that the accuracy of the CCM-related estimates depends on the spatial and temporal resolution, the distortion and spatial spread of the BOLD signal, and the distribution of dural venous sinuses and vessel artifacts. Fourth, although there is no need for stimulus-driven signals, resting-state signals, and thus also any estimated CCM properties, are influenced by the environmental conditions under which they were acquired. Factors such as eye movements and exterior luminance may also influence the estimates. These limitations demonstrate that although the CCM approach seems suitable for inferring the presence or absence of plasticity by associating connectivity strength with cortical degeneration [75], it still requires careful experimentation as well. Some of the above limitations have recently been addressed. For example, global search algorithms that help to avoid local minima have also been applied to CCMs [74,75]. Furthermore, new data-driven methods are able to measure multiple and even overlapping connectopies [67]. Although establishing these connectopic maps currently requires a very large number of participants, they hold the promise of one day being able to reveal cortical and network reorganization and plasticity [67].

Box 3: Connective field modeling. The CF model, as originally proposed by Haak and colleagues, assumes a circularly symmetric 2D Gaussian on the surface of the source region as the integration field from the source to the target [66]:

g(v, v0, σ) = exp(−d(v, v0)² / (2σ²)),

where the 2D Gaussian is defined by its position (v0) and size (σ), d(v, v0) is the shortest distance between the voxel v and the connective field center v0, and σ is the Gaussian spread (mm). Distances are calculated across the cortical surface using Dijkstra's algorithm [66,69]. The connective field pipeline is described in Figure 4.

Figure 4: (a) CF pipeline as described by Haak and colleagues [66]. The model comprises two steps: (1) predict the fMRI response p(t) by multiplying the CF model g(v0, σ) with the measured source fMRI signal a(v, t), and (2) estimate the CF position (v0) and size (σ) by varying the parameters and selecting the best fit between the predicted time series and the measured BOLD signal y(t). This procedure is then repeated for every voxel in the target region. (b) The V2 response is predicted based on the pRF (stimulus-driven, in blue) and connective field (cortex-driven, in red) models. The color map on the brain shows the V1>V2 CF model weights for a specific voxel.

Cortical Circuitry Models in Ophthalmic or Neurological Diseases.

The development of CCMs is a sequel to classical pRF mapping. Hence, the available literature is still relatively small. Nonetheless, the existing studies give a good impression of the possible applications and of the type of information that these models can provide. At this point in time, the CF modeling approach in particular has been applied in several ophthalmic disorders in which visual perception was either impaired or completely absent. A study by Haak and colleagues found that, in macular degeneration, long-term deprivation of visual input had not affected the underlying cortical circuitry [75]. This suggests that the visual cortex retains the ability to process visual information. In principle, following the restoration of visual input, e.g., via retinal implants, such patients may thus recuperate from vision loss. Papanikolaou and collaborators applied CF modeling to study the organization of area hV5/MT+ in five patients with large visual field defects resulting from lesions of either the early visual areas or the optic radiations [76]. They showed that in three of the five subjects, the CFs between areas V1 and hV5/MT+ covered visual field locations that overlapped with the scotoma. This indicates that activity in the lesion projection zone in hV5/MT+ may originate from spared V1. Bock and collaborators applied the CF model to resting-state BOLD data acquired from normally sighted participants, early blind participants, and monocular patients in which one of the eyes had failed to develop [74]. All subjects showed retinotopic organization between V1 and V2/V3. Butt and colleagues studied the cortical circuitry of the visual cortex in blind observers and compared this to that of sighted controls [70,74]. They found only a minute change in the pattern of fine-scale striate correlations between hemispheres, in contrast to the highly similar connectivity patterns within hemispheres.
They concluded that the cortical connections within a region (which can be a hemisphere) are independent of visual experience. The above-cited studies show that, in general, the visuotopic organization of the cortical circuitry is maintained even after prolonged visual deprivation or blindness, supporting the view that the plasticity of the adult visual brain is limited (see Wandell and Smirnakis for a similar conclusion based on stimulus-driven mapping [4]). Moreover, these studies suggest that CCMs may be able to capture the integrity of cortical connections using both stimulus-driven and resting-state data. This encourages the development of new CCMs that can be applied to study how connected neurons in different layers and columns interact.

Mesoscale Plasticity: Layer- and Column-Based Cortical Circuitry Models.

Measuring cortical reorganization at a finer scale might reveal changes that are invisible or masked at a coarser scale. With the recent advances in ultra-high-field functional MRI, the tools to examine the human brain at the mesoscale in vivo have become available. This enables assessing the presence of cortical reorganization across cortical depth, measuring the flow of information across different cortical laminae (in particular feedback and lateral inputs), and inferring the microcortical circuits by studying their columnar organization. Many of the opportunities and challenges in visual neuroscience provided by increases in MRI field strength have been described in a recent review, to which we refer [78]. With respect to the topic of neuroplasticity, of particular interest is a study showing that pRFs in the input (middle) layer are smaller than those in the superficial and deeper intracortical layers [79]. Although this study provides hints about cortical organization, it relied exclusively on stimulus-based modelling and thus does not truly inform us about the underlying circuitry. In order to bridge this gap, we propose that the application of CCM-like approaches to study short-range connections at the laminar and columnar levels is warranted. The development of methods that reflect the mesoscale circuitry should make it possible to answer various outstanding critical questions in visual neuroscience and contribute new fundamental and clinically relevant insights into cortical functioning and neuroplasticity. For example, following a visual field defect, is the input/feedforward layer the one that is most affected? Do neurons in the upper and deepest layers of the lesion projection zone establish new connections to healthy neurons in the input layer? At what level of cortical processing do feedback and feedforward signals modulate our conscious percepts? Are putative overlapping representations in ventral areas [38] perhaps encoded in distinct layers of the visual cortex?

Conclusion

In this paper, we discussed (a) the role of pRF mapping in cortically characterizing visual areas and the extrinsic and intrinsic factors that influence pRF estimates, (b) methodological advances in retinotopic and connectopic mapping, and (c) stimulus-driven and cortical circuitry models that can link visual cortex organization, dynamics, and plasticity. Although we fully acknowledge the important contribution of pRF mapping towards understanding the structure and functioning of the visual cortex, we strongly argue against a "blind" reliance on this technique when studying neuroplasticity.
The degree to which a change in signal amplitude or in pRF measurements, by themselves, reflects cortical reorganization remains to be determined: even in the presence of a presumably stable cortical organization in healthy participants, different pRF estimates may be elicited due to a change in the task at hand, cognitive factors, and the type of stimulus used. For this reason, we have stressed that prior to deciding that pRF changes are the result of reorganization, one has to exclude that they are due to different inputs, (implicit) task conditions, or cognitive demands. To improve the reliability of retinotopic mapping, more complex models and computational approaches have been developed, with a noticeable trend of moving from stimulus-driven to data-driven techniques. These efforts have resulted in a multitude of new methods. Their specific use depends upon the goal of the study and the neuronal population of interest. Nevertheless, although these newer techniques provide clear improvements, they potentially retain the issues associated with stimulus-driven approaches. Therefore, we argue in favor of also considering alternative techniques to study brain plasticity, in particular ones that directly assess the neural circuitry rather than stimulus-driven responses to estimate the extent of neuronal reorganization. As an exemplary incentive, we proposed a model that combines connectivity with spatial sampling. In theory, such a model will inform us not only about the spatial sampling but also about the interactions between the linked neuronal populations. Finally, we encourage the development and application of models to capture the plasticity of layer-based circuitry at the mesoscale.

Disclosure

The funding organizations had no role in the design, conduct, analysis, or publication of this research.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
AUTO-CDD: automatic cleaning of dirty data using machine learning techniques

Cleaning dirty data has been of critical significance for many years, especially in the medical sector, which is the reason behind the widening research in this area. To initiate the research, a comparison between the currently used functions for handling missing values and Auto-CDD is presented. The developed system will, first, overcome unwanted outcomes in the data-analytical process and, second, improve overall data processing. Our motivation is to create an intelligent tool that automatically predicts missing data. Starting with feature selection using Random Forest Gini index values, trained models were developed using three machine learning paradigms and evaluated on two datasets from UCI (i.e., Diabetes and Student Performance). The evaluated accuracies proved that the Random Forest classifier and logistic regression give consistent accuracy of around 90%. Finally, we conclude that this process helps to obtain clean data for further analytical processing.

The Institute of Medicine reported [7] that at least 44,000 to 98,000 patients lose their lives every year due to medical data errors. In the case of IoT applications, most data are collected electronically and may have serious data quality problems. Classic data quality problems mainly come from software defects, customization errors, or system misconfiguration. The authors in [8] discussed cleaning data obtained from sensors. They compared their method with the ARIMA method and concluded that better results were obtained at a lower noise ratio than at a higher one. The main advantage of their method is that it can work with huge data in a streaming scenario; however, on batch data it will not perform as expected. In [9], the cleaning problem is overcome using the DC-RM model, which better supports the pre-processing and data cleaning, data reduction, and projection phases; if the data set contains missing values, the format of the missing values is prepared and the values are imputed. The cleaning phase requires removing unwanted and undesired data and eliminating the rows that contain null values [10]. It also requires eliminating data redundancy, which is usually present within and across datasets. Such redundancy can cause database defects and increase the unwanted cost of transmitting data: it uselessly occupies storage space, reduces data reliability, leads to higher data inconsistency, and can corrupt data. Hence, different techniques have been proposed for reducing data redundancy, for example data filtration, data redundancy detection, and data compression. These techniques may be applicable to various data sets; however, they may also bring negative effects, since compressing data and then decompressing it may add computational load. Hence, it is important to balance the process and the cost. It has also been indicated that cleansing data after the data collection process is compulsory, so that, as previous work shows, different datasets can be handled [11].

Research Gap. Usually, multiple manual scrubbing processes are executed to overcome and solve poor-data issues. This often involves more processing time and human resources. It slows down a company's operational performance, leaves less time for analysing and optimising programs, and increases costs, leading to reduced revenue and profit margins.
This issue would be solved if the cleaning phase were automatic. The tools available on the market are third-party applications. However, if the DA process is implemented using a programming language, it is important to make this process as fast and accurate as possible. Here, a predictive model is useful for imputing accurate missing data.

Problem Statement. In Data Analytics (DA) processing, data cleaning is the most important and essential step. Inappropriate data may lead to poor analysis and thus yield unacceptable conclusions [12]. Some authors [13-16] focused on the problem of duplicate identification and elimination. Their research addressed data cleaning only partially and hence received little attention in the research community. Different information systems require repairing data using different rules. It is first required to overcome the dirty-data dimensions in structured data for a better DA process. Data cleaning is the process of overcoming dirty-data dimensions such as incompleteness (missing values), duplication, inconsistency, and inaccuracy. Under these requirements, researchers have developed tools to detect and repair data quality issues by specifying different rules between data, and normally different dimension issues require different techniques, e.g., imputing missing values in multi-view and panoramic dispatching [17]. There is scope for research in achieving better data cleaning. It can be achieved by introducing an automatic data cleaning process with the help of Machine Learning (ML). A sampling technique is also integrated into the process, considering the size of the data. Because of the ML ability, the Auto-CDD system can learn from the data and predict the missing class in order to perform automatic missing-value imputation. It is also required to select the suitable features for the suitable ML models automatically, depending on the form of the data set obtained from various domains. These abilities of the data cleaning process can enhance the performance of DA by replacing the current manual data cleaning with an intelligent one. The report [18] analyses data issues encountered by companies of differing sizes and operational goals in business-to-business (B2B) industries (i.e., Small and Medium Business (SMB), enterprise businesses, and media companies). The overall proportion of data issues is almost the same for the three categories: 38%, 29% and 41% for SMB, enterprise and media companies respectively.

In this research, the main objective is to overcome the issue of incomplete data, i.e., data sets containing missing values. Such data are considered concealed when the number of values in a set is known but the values themselves are unidentified, and condensed when there are values in a set that must be predicted. To be more exact, the following research questions were addressed: a) How to train a model to predict a missing value? b) How to repair the dirty data? c) What is the best machine learning algorithm for building the model?

The rest of the paper is organized as follows: Section 2 presents the comparison between existing functions in Python and the developed function (Auto-CDD). Section 3 explains the developed system design in detail. Section 4 demonstrates and evaluates the performance of the Auto-CDD system to make sure the predicted values are accurate. Lastly, Section 5 concludes the paper and discusses future prospects.

Comparison

As stated earlier, to develop the data-cleaning script in the Python language, a comparison is shown in Table 1 between the existing functions in the Python library and Auto-CDD. In the table, the column "Function" contains the task title of the method presented in the "Call function example" column. Next, the column "Description" contains the definition of the function as written on the official Pandas website. Finally, pros and cons are listed to understand the good and bad sides of the available functions.
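For orientation, the fragment below shows the kind of built-in Pandas calls that Table 1 compares Auto-CDD against. The toy frame and column names are invented for the example, and each call illustrates a limitation that motivates predictive imputation.

import numpy as np
import pandas as pd

# Toy frame with missing entries (column names invented for the example).
df = pd.DataFrame({"age": [25.0, np.nan, 40.0, 33.0],
                   "bmi": [21.0, 24.5, np.nan, 27.1]})

dropped = df.dropna()               # discards every incomplete row: data loss
filled_mean = df.fillna(df.mean())  # column-mean imputation: ignores row context
filled_prev = df.ffill()            # copies the previous row's value: order-dependent

None of these calls uses the relationships between columns; Auto-CDD instead predicts the missing entry from the other attributes of the same row.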
System Design

The central goal of this study is to build a system for deriving a quality data set by detecting, analyzing, identifying and predicting the missing values. This task can be implemented using different machine learning paradigms [4]. The system is able to perform independently, without the help of any pre-developed software, as it is developed using the Python language. The system life cycle is divided into two stages, i.e., training/testing and prediction. Details of the phases are described in this section.

Training Phase

The first stage is the training phase. As shown in Figure 1, the selected classification or regression machine learning model is trained using the selected data sets. Initially, data are retrieved from a .csv file and the column that needs to be cleaned is detected. The next step is feature selection, to obtain the important features to train with. After the important features have been selected, a machine learning model is produced and saved. Finally, an evaluation is held to make sure the stored model produces accurate results.

Retrieving Data

The cleaning process mostly operates on a stored dataset; since the system is responsible for cleaning dirty data (such as missing data), it is important to retrieve the data to process. As mentioned earlier, the system is developed in Python, hence 'PANDAS' was imported, which is an excellent tool for data munging. It is a library of high-level data structures and manipulation tools, which helps to make analyzing data faster and easier. The data are retrieved from comma-separated values (.csv) files. For the task reported in this paper, three data sets with missing values were selected, as this helps to validate that the system works for cleaning data. The data sets were selected according to the requirements of the system input. Details of the data sets used in the developed system are presented in Table 2.

Feature Selection Based on Random Forest

In this stage the Random Forest feature selection method is used. The steps of the Random Forest algorithm are as follows. Step 1: Extract feature sets from the dataset, including personalized and non-personalized features. Step 2: Take M subset samples at random, without replacement, from the original feature sets. Step 3: Build a decision tree for each subset sample and calculate the Gini index of all features. Step 4: Rank the Gini indices in descending order. Step 5: Set a threshold value; features with a high contribution are then selected as the representative features. The columns selected to train the machine learning model by feature importance are plotted in a cluster bar chart, as shown in Figures 2 and 3 (Figure 2 corresponds to Data set 1, student performance); a sketch of this selection step follows below.
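The sketch below scores features with a Random Forest and keeps those above a mean-importance threshold, mirroring Steps 1-5. It is a minimal sketch, not the authors' exact pipeline: the file name student_performance.csv, the target column G3, and the one-hot encoding are assumptions made for the example, and the feature columns are assumed complete.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("student_performance.csv")       # hypothetical file name
target = "G3"                                     # hypothetical column to impute later
known = df.dropna(subset=[target])                # train only on rows where it is present

X = pd.get_dummies(known.drop(columns=[target]))  # naive encoding of categoricals
y = known[target]

# criterion="gini" (the default) makes feature_importances_ a Gini-index ranking.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)

# Step 5: threshold the ranked importances to keep the representative features.
selected = importances[importances > importances.mean()].index.tolist()
print(importances.head(10))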
Training a Classifier Model

A set of features for each missing value's attribute is retrieved, and then the old model is retrained to get better accuracy for predicting data anomalies using the trained machine learning model. For training the model, three common machine learning techniques are used: Random Forest, linear SVM, and logistic regression.

a. Random forest model

According to the system's requirements, a supervised learning algorithm can be selected. The Random Forest algorithm provides a prediction from more than one decision tree, these trees being independent of each other [24]. It has been implemented in different areas and proved to give great prediction accuracy, for example in network fault prediction [25]. Suppose there are T classes of samples in set C; then its Gini index is defined in (1):

Gini(T) = 1 − Σ_i p_i²,   (1)

where n_c is the number of classes in set T (the target variable) and p_i is the ratio of class i. If the dataset C is split into two classes, T1 and T2, with amounts of data N1 and N2 respectively, then the Gini index of the split is defined in (2):

Gini_split(T) = (N1/N) Gini(T1) + (N2/N) Gini(T2).   (2)

b. Support vector machine (SVM) model

Another supervised learning algorithm selected is the SVM, which is known to be a strong algorithm for classification and regression used in different domains, such as healthcare [26], intrusion detection systems [27], lymphoblast classification [28] and driving simulators [29]. It also helps to detect outliers using a built-in function. To implement the linear SVM, the 'LinearSVC' option was used, in order to be able to perform multi-class classification. Equation (3) is used for predicting a new input x in the SVM by means of the dot product of the input x with every support vector x_i:

f(x) = b_0 + Σ_i α_i (x · x_i),   (3)

where x is the new input and the values b_0 and α_i for each input are obtained from the training data through the SVM algorithm. In the linear SVM the dot product acts as the kernel; its value defines a similarity, or gap measure, between the new data and the support vectors. It can be re-written in the form of (4):

f(x) = b_0 + Σ_i α_i K(x, x_i).   (4)

c. Logistic regression

One of the most common ML algorithms is logistic regression (LR). LR is not a regression algorithm; it is a probabilistic classification model. ML classification techniques work as a learning method in which an instance is mapped to one of the many available labels. The machine then learns and trains itself on the different patterns of data in such a way that it can correctly represent the mapped original dimension and suggest the label/output without involving a human expert. The sigmoid function is plotted using (5):

σ(z) = 1 / (1 + e^(−z)),   (5)

which makes sure that the produced outcome is always between 0 and 1, since in the equivalent form (6) the denominator is greater than the numerator by 1:

σ(z) = e^z / (1 + e^z).   (6)

Prediction Phase

The prediction phase, shown in Figure 4, can be integrated into any pre-processing system that detects and identifies missing values. Our system first retrieves the data containing the missing value. Afterwards, it extracts the features and then predicts the missing data using the stored trained machine learning model, providing the predicted missing value. Finally, the NaN values are replaced with the predicted values.
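The fragment below sketches how such a trained model could fill in a column's missing entries. It is a minimal sketch under stated assumptions, not the authors' implementation: the helper impute_column, the model settings, and the use of training-set accuracy for model selection are illustrative choices (a held-out split, as in the evaluation section, would be used in practice), and the feature columns are assumed complete.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

def impute_column(df, col, features):
    """Train on rows where `col` is present, predict it where it is missing."""
    known = df[col].notna()
    models = {
        "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "logistic_regression": LogisticRegression(max_iter=1000),
        "linear_svm": LinearSVC(),
    }
    best_name, best_model, best_acc = None, None, -np.inf
    for name, m in models.items():
        m.fit(df.loc[known, features], df.loc[known, col])
        acc = m.score(df.loc[known, features], df.loc[known, col])  # optimistic estimate
        if acc > best_acc:
            best_name, best_model, best_acc = name, m, acc
    # Replace the NaN entries of `col` with the best model's predictions.
    df.loc[~known, col] = best_model.predict(df.loc[~known, features])
    return df, best_name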
Performance Evaluation

The purpose of the performance evaluation is to investigate how accurate and effective the developed system, which detects missing values, is, based on several metrics. Different types of data may give unequal levels of prediction accuracy in a classification model, so different models are used and passed the selected features from the three data sets. Then cross-validation is implemented as further proof of the effectiveness of the developed classifiers. More specifically, a selected dataset is divided into test and training sets (the Diabetes dataset obtained from UCI).

Classification Accuracy

The method used for evaluation is based on the TP (true positive), TN (true negative), FP (false positive) and FN (false negative) counts, where TP is the total number of correctly predicted values that were expected; TN is the total number of correctly predicted values that were not expected; FP is the total number of incorrectly predicted values that were expected; and FN is the total number of incorrectly predicted values that were not expected. Finally, accuracy is calculated using (7):

Accuracy = (TP + TN) / (TP + TN + FP + FN).   (7)

The accuracy of the machine learning models depends on the data set selected for training, as different types of data sets will predict differently, and different learning models are used to get the best model for the data set. Data sets were selected and the prediction accuracies of the different machine learning models are presented as graphs in Figures 5-6. This accuracy is the percentage of correctly predicted missing values for each attribute; for example, the graphs include predictions for the 'rosiglitazone' column obtained from a CSV file. Three well-known supervised learning algorithms are used, as mentioned earlier, and in the evaluation of the three trained models, the Random Forest algorithm and logistic regression gave stable accuracy throughout the input data, whereas LinearSVM showed unstable and comparatively lower accuracy than the other selected algorithms.

Case 1: Cleaning Data set 1 (Diabetes data): the trained Random Forest algorithm gave more than 90% accuracy, as shown in Figure 5(a). The trained LinearSVM model proved to be unstable, with lower accuracy in predicting missing values, as shown in Figure 5(b), and the trained logistic regression algorithm achieved more than 85% accuracy, as shown in Figure 5(c).

Case 2: Cleaning Data set 2 (Student Performance data): on this data set, logistic regression performed with an accuracy greater than 90%, as shown in Figure 6(c), and the Random Forest algorithm was a close competitor at around 90% accuracy, as shown in Figure 6(a), whereas the linear support vector machine again performed poorly, at around 80% accuracy, as shown in Figure 6(b).

For cleaning purposes and predicting missing data for each attribute, it is thus shown that the trained Random Forest and logistic regression models act as better predictive models, whereas a trained LinearSVM proves unreliable for this type of prediction, because it gives lower and unstable accuracy as new data are input into the model. This accuracy is further verified by using the cross-validation technique.

Cross-Validation

The cross-validation technique is important to implement in order to confirm that the trained model is reliable and free of issues such as overfitting. Here, the data set is divided into five folds, as shown in Figure 7 (where k=5). This type of validation is known as k-fold cross-validation and is used to validate the trained classifiers.

Figure 7. Data splitting in 5-fold cross-validation.

As the data set is divided into 5 folds, 1/5 of the complete data is used for testing and the remaining data is used for training. This training and testing is repeated 5 times, and the test accuracies are combined to obtain the cross-validation score.
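A compact way to reproduce this check is sketched below, assuming the feature matrix X and target y from the earlier fragments are available; the 80/20 split and model settings are choices made for the example.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
test_acc = rf.score(X_test, y_test)        # accuracy = (TP+TN)/(TP+TN+FP+FN), Eq. (7)

cv_scores = cross_val_score(rf, X, y, cv=5)  # 5-fold cross-validation, as in Figure 7
print(f"test accuracy: {test_acc:.3f}, "
      f"CV: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

Comparable test and cross-validation accuracies, as in Table 3, indicate that the model is not over-fitted.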
The retrieved outcomes are entered into a table (Table 3) together with the classification accuracy obtained in the previous stage for one column containing missing value(s). The outcomes prove that the model accuracy and the cross-validation accuracy are close to each other: the trained model is not over-fitted and can be considered reliable.

Conclusion

Almost all datasets available in repositories may contain attributes with missing data, and it is very important to handle this type of data to avoid performance issues. As different data sets have different data formats, this is quite a challenging task to deal with, and it is important to deal with it intelligently by using robust models. In this paper, a comparison with pros and cons is presented that will help developers select the best method for cleaning missing values; however, it is not essential to use a single method for repairing data. Next, a system is designed and presented that uses well-known machine learning algorithms for predicting missing data automatically. Three classification algorithms (i.e., SVM, Random Forest, and logistic regression) are used to test the process. The evaluation methods proved that two of the trained models are reliable on the selected data sets. The k-fold cross-validation method confirms that the trained models are not over-fitted and can perform well on new datasets. For future work, a combination of more than one method needs to be implemented, with additional rules for data repair. It is also important to indicate and repair inappropriate or wrong data. Integrity constraints (such as functional dependencies) can be combined with machine learning algorithms to classify the type of error to capture.
Effects of the bounding solid walls are examined numerically for slow flow over regular, square arrays of circular cylinders between two parallel plates. A local magnitude of the rate of entropy generation is used effectively to determine the flow region affected by the presence of the solid boundary. Computed axial pressure gradients are compared to the corresponding solution based on the Darcy-Brinkman equation for porous media, in which the effective viscosity appears as an additional property to be determined from the flow characteristics. Results indicate that, between the two limits of the Darcian porous medium and the viscous flow, the magnitude of μ̂ (the ratio of the effective viscosity to the fluid viscosity) needs to be close to unity in order to satisfy the non-slip boundary conditions at the bounding walls. Although the study deals with a specific geometric pattern of the porous structure, it suggests a restriction on the validity of the Darcy-Brinkman equation for modelling high-porosity porous media. The non-slip condition at the bounding solid walls may be accounted for by introducing a thin porous layer with μ̂ = 1 near the solid walls.

Introduction

The Darcy-Brinkman equation is a governing equation for flow through a porous medium with an extra Laplacian (viscous) term (the Brinkman term) added to the classical Darcy equation. The equation has been used widely to analyze high-porosity porous media. The dynamic viscosity, μ_e, associated with the Brinkman term is referred to as the effective viscosity. Studies in the past yielded varying results for the magnitude of the viscosity ratio, μ̂ (= μ_e/μ, with μ = fluid viscosity), from slightly less than unity to as high as ~10 for high-porosity porous media [3,5,8-10]. The validity of the Darcy-Brinkman equation has also been a subject of investigation, particularly in relation to the boundary conditions at the solid as well as the fluid interface [11,12]. In the present study our subject of interest is flow over regular square arrays of circular cylinders bounded at the top and the bottom by solid plates. The flow in the absence of the bounding walls is studied in detail by Sangani and Acrivos [15]. Their analysis, which solves the Navier-Stokes equations rather than the Darcy equation, yields a relation between the permeability of the regular array structure and the porosity (the volume fraction occupied by the flow), confirming that the Darcy equation is valid for flow through regular structures over the whole spectrum of the porosity. Therefore, quantitative relations between the wall effects and the Darcy-Brinkman equation may be examined in a more focused manner through the present investigation. One of the objectives of the present analysis is to examine the flow structure near the bounding walls. It is generally accepted that the effect of a solid boundary is confined within a thin boundary layer [12]. Our second objective is to test the feasibility of addressing this thin region by using the Darcy-Brinkman equation. The Darcy-Brinkman equation has in recent years been employed in biomedical hydrodynamic studies [7], including its use in modeling a thin fibrous surface layer coating blood vessels (the endothelial surface layer), as it is a highly permeable, high-porosity porous medium [13,14,17]. A better understanding of the characteristics of the Darcy-Brinkman equation, therefore, is an important part of more practical problems, thus forming a motivation of the present report.

Analyses

As shown in Figure 1, we consider a steady, incompressible, fully-developed, very slow (Re (Reynolds number) → 0) flow across regular square arrays of circular cylinders, bounded by parallel plates. The governing equation based on the Darcy-Brinkman equation for porous media and the boundary conditions are

μ_e d²u_s/dy² − (μ/K) u_s = dp/dx,   (1)
u_s = 0 at y = ±H,   (2)
du_s/dy = 0 at y = 0,   (3)

where p = pressure, u_s = superficial velocity, and K = permeability of the porous medium. In terms of the non-dimensional variables ȳ = y/H and ū = u_s/u_m (u_m = mean velocity = q/2H, q = volume flow rate per channel width), the formulation and its solution become

ū(ȳ) = [1 − cosh(αȳ)/cosh α] / [1 − tanh(α)/α],  −(H²/(μ u_m)) dp/dx = Da²/(1 − tanh(α)/α),  α = Da/√μ̂,   (4)

where μ̂ = μ_e/μ (viscosity ratio) and Da = H/√K. The viscous flow limit,

−(H²/(μ u_m)) dp/dx = 3,   (5)

corresponds to the case μ̂ = 1 (with Da → 0) in the equation above. The Darcian limit of flow through a porous medium is recovered as

−(H²/(μ u_m)) dp/dx = Da²  (Da → ∞).   (6)
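As a numerical cross-check of Eqs. (4)-(6), the sketch below evaluates the velocity profile and the corresponding non-dimensional pressure gradient. It is a minimal sketch of the solution stated above; the function names and the sample Da values are choices made for illustration.

import numpy as np

def brinkman_profile(ybar, Da, mu_hat):
    """Non-dimensional velocity u_s/u_m across the channel for the
    Darcy-Brinkman equation, with non-slip conditions at ybar = +/-1."""
    a = Da / np.sqrt(mu_hat)
    shape = 1.0 - np.cosh(a * ybar) / np.cosh(a)
    mean = 1.0 - np.tanh(a) / a        # average of `shape` over [-1, 1]
    return shape / mean

def pressure_gradient(Da, mu_hat):
    """Non-dimensional axial pressure gradient -(H^2/(mu u_m)) dp/dx.
    Tends to 3 (plane Poiseuille, Eq. (5)) as Da -> 0 with mu_hat = 1,
    and to Da^2 (Darcian limit, Eq. (6)) as Da -> infinity."""
    a = Da / np.sqrt(mu_hat)
    return Da ** 2 / (1.0 - np.tanh(a) / a)

ybar = np.linspace(-1.0, 1.0, 201)
for Da in (0.1, 3.0, 30.0):
    u = brinkman_profile(ybar, Da, mu_hat=1.0)
    print(Da, pressure_gradient(Da, mu_hat=1.0))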
For the case of our interest, a porous medium consisting of regular square arrays of cylinders, the permeability K may be expressed as

K = (2l)² f(C),   (7)

where C (solid fraction) = 1 − φ (φ = porosity) and 2l = side length of a single square unit (see Figure 1). The function f(C) is given in an explicit form for high-porosity arrays and in a graphical form for low-porosity arrays in [15].

Figure 2. (a) Computational domain; (b) typical computational grid for a unit cell.

Flow under conditions that lie between the two limits of Eqs. (5, 6) is investigated numerically. Referring to Figure 2, the governing equations and boundary conditions are given by Eq. (8), with a non-slip condition (8b) imposed on the cylinder surfaces. Two commercially-available computational programs for hydrodynamic analyses are used in our numerical experiments: FLUENT (by Fluent Inc.) and FEMLAB (by the COMSOL Group). The former is used as the main program, while the latter is employed to confirm results from the former. Eq. (8) is solved computationally as the porosity, φ, and the length ratio, H/l, are varied systematically. The porosity represents the fraction of the flow field in the cross-sectional area of a (2l × 2l) unit cell, while the ratio H/l (an integer) indicates the number of cell layers over the channel depth. Figure 3 depicts the range of porosities investigated for the case H/l = 10, including the high-porosity channel flow of φ = 99.99%. The computational procedure is outlined below:

1. Solve Eq. (8) and find the x-direction pressure gradient, −dp/dx, for a specified value of the porosity at the length ratio H/l = 1 (that is, the case of a single unit cell over the channel). (Actual computations are performed over multiple longitudinal cell columns (5-10) to ensure that the periodic conditions at the cross-sectional cell boundaries are satisfied accurately.)
2. Increase the value of H/l by one, and repeat the computation.
3. In the case of H/l = 1, the solid wall affects the entire flow field. As the magnitude of H/l increases, cells near the center of the channel (y = 0 in Figure 1) become less sensitive to the presence of the solid walls. Computation for a fixed value of porosity is terminated when the size of the wall-affected region becomes independent of H/l.

Sangani and Acrivos [15] report solutions of the velocity field in a single unit cell (i.e., flow through square arrays of cylinders without solid bounding plates), obtained by applying the least-squares collocation method [4] to a series solution that satisfies a part of the required boundary conditions exactly. We used this solution to confirm the validity of our computational results as well as to determine the required numerical conditions (the number and size of the computational mesh cells as well as the convergence criteria). The validity of the computational results is confirmed by recovering the permeability, K (Eq. (7)), reported in [15] over the range 0.215 (= φ_minimum, cylinders in contact with each other) < φ < 1. When a unit cell becomes completely unaffected by the presence of the bounding walls, the velocity field should be identical to that of the solution in [15] everywhere in the cell. However, it is not easy to establish this identity for the two-dimensional velocity field we are analyzing. Instead, the identity of the rate of entropy generation over a unit cell is used to quantify the degree of the wall effects on the flow field in the respective unit cell.
Results and Discussion

Figure 4 shows the axial velocity (u) profile over the channel half-depth at a cross-sectional boundary between two lateral cell columns (with the number of cylinders per lateral column = four), as the porosity is varied. The centers of the circular cylinders are located at x = l, 3l, 5l and 7l. The parabolic profile is computationally recovered for 100% porosity (Figure 4(a)), i.e., viscous flow between two parallel plates. Even at a very low solid fraction (Figure 4(b)), the presence of the cylinders is seen to affect the velocity profile substantially near the middle region of the channel, while the velocity profile retains its parabolic characteristics near the plate. It should also be noted that the effects of the cylinder adjacent to the wall (with its center at x = l) are more significant in the low-porosity cases.

Table 1 lists computational results of the rate of entropy generation of each cell for the case of ten cells across the entire channel depth. Even at a very high porosity of φ = 99.99%, the plate effects are limited to the region within three cells of the plate wall, with the affected region becoming even smaller as the porosity is decreased.

Figure 6 summarizes the computational results of the (non-dimensional) axial pressure gradient vs Da², Eq. (4), covering the porosity range up to φ = 99.99%. Our numerical range is limited to H/l ≤ 20. In these computations the viscous effect of the flow around the cylinders remains important, with the minimum value of Da² limited to ~1.5. (The computational lower limit is due to the difficulty of generating a computational grid small enough in size and large enough in number for accurate resolution of the velocity field near the cylinder.) The lowest value of Da² corresponds to the case of H/l = 1 (a single cylinder over the channel width) for each set of numerical results with a fixed value of φ. Also, the results may not form a continuous curve, as an incremental change in the number of cylinders in the lateral direction leads to a step change in Da². Figure 6 indicates that, between the Darcian and the viscous flow limits, all numerical results may be recovered by setting μ̂ = 1, and that the effect of the bounding walls on the flow structure, which is confined to a narrow region near the walls, may be approximated by a porous medium with μ̂ = 1.

Figure 7 is a sketch of a three-layer model, proposed to accommodate the wall effects in the analyses of a Brinkman porous medium with μ̂ ≠ 1. It consists of two layers near the bounding walls with μ̂_B (viscosity ratio of the top and the bottom layer) = 1, and a middle (interior) layer with μ̂_I (viscosity ratio of the middle layer) ≠ 1. A solution for parallel flow through the three layers may be sought from the Darcy-Brinkman equation for the top layer (H − L ≤ y ≤ H) as well as for the middle layer (0 ≤ y ≤ H − L), under the conditions of symmetry at y = 0, the non-slip condition at y = H, and velocity and shear-stress continuity at y = H − L. By setting L/H = 0 and 1 in Eq. (10), Eq. (4) is recovered for a porous layer with μ̂ = μ̂_I and with μ̂ = 1, respectively. The left-hand side of Eq. (10) is the ratio of the non-dimensional pressure gradient of the three-layer model to that of a single layer (= the Darcian limit of L/H = 0). For a common pressure gradient, therefore, the mean velocity, u_m, of the three-layer model is higher than that of the single layer: the greater the effective viscosity of the middle layer relative to the fluid viscosity, the more flow is channeled through the top (bottom) layer. The length scale of the top-layer depth, L, is of the order of 6l (cf. Figure 2), implying that the depth ratio, L/H, depends on the characteristic length scale of the porous structure (such as l in Figure 2) as well as on the macroscopic scale of the channel depth, H.
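Eq. (10) itself is not reproduced here, but the three-layer configuration can be checked numerically. The sketch below is a minimal finite-difference solution of the Darcy-Brinkman equation over the half-channel, with a wall layer of relative thickness L/H in which μ̂ = 1 and an interior with μ̂ = μ̂_I, under the same non-dimensionalization as Eq. (4); the helper name, the grid size, the arithmetic face-averaging of the viscosity, and the forcing scale are choices made for the example, not the authors' code.

import numpy as np

def three_layer_mean_velocity(Da, mu_hat_I, L_over_H, n=801):
    """Finite-difference solution of (mu_hat u')' - Da^2 u = -Da^2 on the
    half-channel 0 <= ybar <= 1, with mu_hat = 1 in the wall layer
    (ybar > 1 - L/H) and mu_hat = mu_hat_I in the interior; u'(0) = 0
    (symmetry) and u(1) = 0 (non-slip).  The forcing is scaled so that the
    Darcian limit gives u = 1, so the return value is the mean velocity
    relative to the single-layer Darcian value at a common pressure gradient."""
    y = np.linspace(0.0, 1.0, n)
    h = y[1] - y[0]
    mu = np.where(y > 1.0 - L_over_H, 1.0, mu_hat_I)

    A = np.zeros((n, n))
    b = np.full(n, -Da ** 2)
    for i in range(1, n - 1):
        mw = 0.5 * (mu[i] + mu[i - 1])          # face viscosity, west
        me = 0.5 * (mu[i] + mu[i + 1])          # face viscosity, east
        A[i, i - 1] = mw / h ** 2
        A[i, i + 1] = me / h ** 2
        A[i, i] = -(mw + me) / h ** 2 - Da ** 2
    A[0, 0], A[0, 1], b[0] = -1.0, 1.0, 0.0     # symmetry: u'(0) = 0
    A[-1, -1], b[-1] = 1.0, 0.0                 # non-slip at the plate

    u = np.linalg.solve(A, b)
    return float(h * (0.5 * u[0] + u[1:-1].sum() + 0.5 * u[-1]))  # trapezoid mean

# Example: mu_hat_I = 10 interior with a thin mu_hat = 1 wall layer.
for LH in (0.0, 1e-3, 1e-2):
    print(LH, three_layer_mean_velocity(Da=np.sqrt(1e3), mu_hat_I=10.0, L_over_H=LH))

The conservative discretization handles the viscosity jump at y = H − L, so velocity and shear-stress continuity are enforced implicitly.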
Figure 8 shows Eq. (10) as Da is varied, with L/H as a third parameter, for the case of μ̂_I = 10. For a case in which the wall-layer thickness, L, is ~1/1000 of the channel half-depth, H, an increase of ~10% in the mean velocity across the channel is predicted at Da² = 10³, due to the higher flow velocity in the wall layer. A substantial reduction of the wall effect is observed as the magnitude of L/H is reduced.

Classical Darcy's law is valid under the assumption of very slow (Re → 0) flow through a layer of a porous medium in which the local (microscopic) length scale (l in the present analysis) is sufficiently small compared to the overall (macroscopic) length scale (H). A corresponding thermal problem has been analyzed [2,6,16] for the stability of a fluid-saturated porous medium between two horizontal plates heated from below. The results, obtained under the assumption of μ_e = μ (μ̂ = 1), indicate that the Darcian limit is reached, in terms of the critical Rayleigh number for the onset of convection, once Da is sufficiently large. Our computational results, along with Figure 6, indicate that the Darcy-Brinkman equation with a viscosity ratio substantially different from unity fails to satisfy the non-slip condition at the bounding walls, particularly under high-porosity conditions (φ → 1).

Figure 1. (a) Flow between two parallel plates filled with regular square arrays of circular cylinders; (b) regular square arrays of circular cylinders.
Figure 3. Examples of the flow field over a channel half-depth.
Table 1. Variation of S_G,cell over the channel half-width. (Cell I is in contact with the bounding plate at y = H in Figure 2(a).)
Figure 7. Schematic diagram of the three-layer model.

Nomenclature (entries as recovered)
2l   side of a unit cell (Figure 1(b))
L    depth of the top and the bottom layers in the three-layer model (Figure 7)
p    pressure
q    volume flow rate per channel depth
S_G,cell   rate of entropy generation over a unit cell per channel depth
S_G'''     rate of entropy generation per channel width
T    temperature [K]
u    x-direction velocity of viscous flow
μ̂_B  viscosity ratio of the top and the bottom layers (= 1)
μ̂_I  viscosity ratio of the middle layer
NIKA: a mm camera for Sunyaev-Zel'dovich science in clusters of galaxies

Clusters of galaxies, the largest bound objects in the Universe, constitute a cosmological probe of choice, which is sensitive to both dark matter and dark energy. Within this framework, the Sunyaev-Zel'dovich (SZ) effect has opened a new window for the detection of clusters of galaxies and for the characterization of their physical properties such as mass, pressure and temperature. NIKA, a KID-based dual-band camera installed at the IRAM 30-m telescope, was particularly well adapted in terms of frequency, angular resolution, field of view and sensitivity for the mapping of the thermal and kinetic SZ effect in high-redshift clusters. In this paper, we present the NIKA cluster sample and a review of the main results obtained via the measurement of the SZ effect on those clusters: reconstruction of the cluster radial pressure profile, mass, temperature and velocity.

Introduction

Clusters of galaxies are the largest gravitationally bound objects in the Universe and trace the matter distribution across cosmological times [1]. Clusters are mainly made of dark matter (about 85% of their total mass), but also of hot ionized gas (about 12%) and of the stars and interstellar medium within galaxies (a few percent). The latter two represent the baryonic component of clusters and can be used to detect and study them. At visible and IR wavelengths we can observe cluster galaxies, while the baryonic hot ionized gas that forms the Intra-Cluster Medium (ICM) can be detected via its X-ray emission [2] and the thermal Sunyaev-Zel'dovich (tSZ) effect [3]. The latter corresponds to the inverse Compton interaction of the ICM electrons with the CMB photons travelling through the cluster, and leads to a well-defined spectral distortion of the CMB emission.

Clusters are a powerful probe for cosmology because they form across the expansion of the Universe (see for example [4]). Their abundance as a function of mass and redshift is very sensitive to cosmological parameters [5] such as σ8 (the rms of the matter perturbations at 8 Mpc scales), Ωm (the dark matter density), and ΩΛ (the dark energy density). Constraints on these parameters derived from galaxy cluster samples are generally limited by the accuracy of the mass estimates of galaxy clusters [6,7], which mainly come from the baryonic observables. Scaling relations relating the mass of the cluster to the baryonic observables are generally used (see [8] for a review). These scaling relations assume that clusters are relaxed and that gravity is the only physics at play. Furthermore, they are generally calibrated using low-redshift clusters.
However, baryonic physics (for example shocks during merging events, turbulence in the gas, and cooling flows near active galactic nuclei) may introduce deviations with respect to the self-similar scenario and lead to significant bias in the cluster mass estimates. Such deviations are expected to be more likely at high redshift, as merging processes are expected to be more common following the hierarchical scenario of structure formation. Within the self-similar scenario, cluster properties are linked only to their mass and redshift. In particular, as illustrated in Figure 1, for an equivalent mass, clusters in the redshift range 0.5 < z < 1.0 are observed to be smaller in angular size than the low-redshift ones, z < 0.3. Therefore, a better understanding of high-redshift cluster properties can only be achieved via high-resolution tSZ observations [10]. The NIKA camera [9], installed at the IRAM 30-m telescope in Pico Veleta, was a pioneer in this respect, as will be shown in this paper. NIKA [9,11] was a dual-band millimeter intensity and polarization camera operated at 150 and 260 GHz and installed permanently at the IRAM 30-m telescope from 2013 to 2015. The NIKA camera (see Figure 2) was made of two arrays of Kinetic Inductance Detectors (KIDs) cooled down to 100 mK via a 3He-4He dilution cryostat and instrumented via dedicated readout electronics [13,14]. NIKA was the first KID-based camera to produce scientific-quality results [15] and demonstrated state-of-the-art performance during operations [12,16]. Table 1 summarizes the main characteristics of the NIKA camera and its performance during operations.

The NIKA cluster sample

The NIKA camera was particularly well adapted for observations of the SZ effect in clusters of galaxies at high redshift because of: 1) the dual-band capabilities, with frequencies sampling the zero (260 GHz) and negative part (150 GHz) of the thermal SZ spectrum [3]; 2) the high resolution offered by the 30-m telescope and the large FOV, which permitted a detailed mapping of clusters in the redshift range from 0.5 to 1; 3) excellent performance in sensitivity, allowing fast mapping speed; and 4) accurate calibration and photometry. Because of this, during NIKA operations it was possible to map a sample of 6 clusters of galaxies, as shown in Figure 3. The cluster sample was chosen in order to best explore the capabilities of large KID-based cameras for cluster science using the SZ effect. The first cluster observed was RXJ1347.5-1145 [15], which is a very massive, medium-redshift (z = 0.45) cluster and constitutes a perfect first target. These observations were the first ever scientific-quality observations with a KID camera. To further test the capabilities of NIKA, there were observations of CLJ1226.9+3332, a massive and high-redshift cluster, z = 0.89 [17,18]. One important issue with the observation of high-redshift clusters via the tSZ effect is the contamination by dusty and radio point-like sources, as was shown by the NIKA maps of the cluster MACS J1423.9+2404 presented in [19]. The high-resolution and large-FOV capabilities of NIKA also allowed the detailed study of MACS J0717.5+3745 [20-22], which is a complex-morphology cluster presenting various components as well as extreme physical conditions (violent merging events, large velocities, etc.). Finally, it was possible to check that the follow-up of high-redshift clusters detected by low-resolution CMB experiments like Planck (e.g. PSZ1 G045.85+57.71 and PSZ1 G046.13+30.75) is possible with NIKA-like cameras. This work demonstrated that cluster pressure profile and mass estimates can be significantly improved, as in the case of PSZ1 G045.85+57.71 [23].
PSZ1 G045.85+57.71 and PSZ1 G046.13+30.75) by low-resolution CMB experiments like Planck is possible with NIKA-like cameras. This work demonstrated that cluster pressure profile and mass estimates can be significantly improved, as in the case of PSZ1 G045.85+57.71 [23].

NIKA results on SZ science

The NIKA camera has permitted a wide range of SZ studies on clusters of galaxies. Here we have selected some representative examples.

Figure 4. Pressure profile reconstruction for the cluster PSZ1 G045.85+57.71 [23]. Left: thermal SZ map of the cluster at 150 GHz. Right: non-parametric reconstruction of the cluster pressure profile as a function of radius, as obtained from the NIKA and Planck data.

Cosmological analyses with clusters of galaxies (e.g. [5]) require accurate measurements of the cluster mass. This can be achieved using the SZ effect alone, via scaling relations between the cluster mass and the integrated Compton parameter [8], or from a combination of SZ and X-ray data by computing the cluster hydrostatic mass. In both cases the detailed reconstruction of the cluster pressure profile is a key element. NIKA has shown that this can be achieved with high-resolution KID-based cameras both for parametric [17,19] and non-parametric models [18,23]. The latter case is illustrated in Figure 4. As observed in the right panel of the figure, the combination of the high-resolution and high-sensitivity NIKA data with the Planck data permitted the non-parametric reconstruction of the pressure profile of the cluster PSZ1 G045.85+57.71 from the inner part of the cluster to the outskirts.

Cluster velocity

The kinetic SZ effect [24], arising from the CMB Doppler shift produced by the bulk motion of the ICM electrons, can be used to measure the velocity of clusters of galaxies along the line of sight. In contrast to the thermal SZ effect, the kinetic SZ effect does not produce a spectral distortion of the CMB photons; it has the same spectrum as the CMB itself. Thus, in the case of the NIKA observations, we expect to observe the same signal in CMB temperature units at 150 and 260 GHz. Furthermore, the kinetic SZ effect is expected to be small with respect to the thermal one for typical cluster velocities, which are in the range of a few hundred to a few thousand km/s [20,[25][26][27]. In this respect, the cluster MACS J0717.5+3745 is a target of choice, as we expect the different components in the cluster to present large relative velocity differences [20,25]. This is illustrated in Figure 5. In the left panel of the figure we show a composite map of MACS J0717.5+3745 obtained from observations at various wavelengths: optical image (green), X-ray (red), and thermal (blue) and kinetic (yellow) SZ as measured by NIKA [20]. The latter maps are obtained from the combination of the NIKA 150 and 260 GHz maps after cleaning for astrophysical contaminants and accounting for temperature-induced relativistic corrections [20]. The different subclusters present in MACS J0717.5+3745 are shown as dotted-dashed circles. From the kinetic SZ map it is possible to extract a velocity map, which is shown in the right panel of Figure 5. In this map, the first model-independent one, we observe that substructures C and D have very large line-of-sight velocities of opposite sign.

Cluster temperature

The cluster temperature can generally be estimated from X-ray spectroscopic observations.
However, these measurements are affected by two major systematics: 1) the X-ray surface brightness is proportional to the square of the electron density, so the spectroscopic temperatures are driven by the colder and denser regions along the line of sight [28], and 2) the recovered temperature estimates are very sensitive to the calibration scale of current X-ray satellite observatories, leading to absolute uncertainties of about 15% (e.g. [29]). Furthermore, spectroscopic observations are very expensive in observing time. Alternatively, the SZ effect can also be used to measure the cluster temperature, given a measurement of the electron density from, for example, X-ray photometric observations (via the ideal-gas relation Te = Pe/(kB ne)). This type of analysis was performed for the first time by [21] for the cluster MACS J0717.5+3745, using a 2D reconstruction of the electron pressure from the NIKA SZ data, accounting for relativistic corrections as in [20], and of the 2D electron density derived from the XMM-Newton X-ray data. This is illustrated in Figure 6, which presents the maps of the electron pressure, density and reconstructed temperature. We observe a high signal-to-noise reconstruction of the cluster temperature, which shows a large increase at the position of the shock between two of the substructures in the cluster. We find that, after correcting for the kinetic SZ effect, the combined SZ and X-ray based temperature map is compatible to within 10% in amplitude with those obtained from X-ray spectroscopy using XMM-Newton and Chandra observations. This is a promising result and opens a new window for temperature determination in clusters of galaxies.

Conclusions

The NIKA camera was demonstrated to be an excellent instrument for the study of clusters of galaxies via the Sunyaev-Zel'dovich effect. Among other results, NIKA has permitted the detailed characterization of a sample of six galaxy clusters, chosen to demonstrate the capabilities of KID-based instruments. NIKA provided the first ever SZ scientific observations with KIDs and made it possible to conduct a wide scientific program. This program included, for example, the reconstruction of cluster pressure profiles, temperatures and masses via the thermal SZ effect in combination with X-ray observations, as well as the first direct mapping of a cluster velocity field via the kinetic SZ effect.
The contributions of hepatic de novo lipogenesis to the difference in body fat mass of genetically lean and fat duck breeds

ABSTRACT The underlying reasons for genetic differences in body fat mass remain unclear. The objective of this study was to investigate the contributions of hepatic de novo lipogenesis (DNL) to the genetic differences in fat mass in ducks. Ducks with distinct genetic backgrounds were selected as the genetically lean and fat animal models. The weights of fat tissues and organs, as well as the gene expression levels, enzyme activities and enzyme concentrations related to hepatic lipogenesis, were measured and compared between the two genetically different duck breeds. Although a clear phenotypic difference in extra-liver fat mass between the two breeds was observed, the relative weights of the fat tissues to body weight in the two breeds were similar. There were no clear divergences in DNL-related gene expression (Acc, Dgat2 and Fas), enzyme activities (ELOVL and FAS) or enzyme concentrations (ACC) between the two breeds, suggesting that hepatic DNL may contribute less to the individual fat mass difference in ducks; rather, body fat mass may depend more on liver size. Our findings may provide a fundamental explanation for individual fat mass differences.

Introduction

Adipose tissue represents an endocrine metabolic organ that plays a critical role in the efficient storage and mobilization of lipids to fulfil bioenergetic demands (Galic et al. 2010). A specific amount of fat mass is necessary for sustaining normal physiological functions in the body. In farm animals raised for meat, fat in muscle tissue (i.e., intramuscular fat) is considered an important economic trait because it influences meat quality (Wood et al. 2008). A thorough understanding of the exact mechanisms underlying the fat deposition process is important for developing strategies for producing high-quality meat. Hence, why some individuals have more fat than others is a fundamental question that needs to be addressed first.

It is clear that genetic backgrounds are responsible for individual differences in fat mass. In livestock and human medical research, genome-wide association studies have identified numerous single nucleotide polymorphisms (SNPs), many of which have been shown to be highly correlated with lipid metabolism (Hinney et al. 2007; Wu et al. 2013; Yang et al. 2013). The locations of some of these SNPs have been identified near or within genes responsible for lipid metabolism. Gene expression and regulatory networks are involved in regulating fat mass through numerous physiological processes. Among them, de novo lipogenesis (DNL) is an endogenous pathway of lipid synthesis that plays an important role in directly affecting fat mass (Hellerstein 1999). In the body, some fatty acids are derived from the diet and some from the DNL process. Dietary carbohydrates can enhance the DNL pathway, contributing to fat deposition and obesity (Diraison et al. 2003; Strable and Ntambi 2010). Therefore, the contribution of DNL is perhaps one of the main sources of individual fat mass differences.

The main DNL sites vary greatly among species. In mammals, DNL mainly occurs in the adipose tissue (Shrago et al. 1971), although the general view based on observations in rodents is that lipogenesis is equally active in the liver and adipose tissue (Leung and Bauman 1975).
Hepatocytes prepared from lean rats starved for 48 h largely lacked the capacity for lipogenesis, whereas hepatocytes isolated from starved obese rats exhibited detectable rates of lipogenesis. Enzymes normally associated with lipogenesis were elevated in liver tissues from obese rats; the liver may thus be prominently involved in the development of excessive blood lipids and enlarged fat cells in the Zucker obese rat (McCune et al. 1981). In contrast, DNL in avian species predominantly, if not exclusively, occurs in the liver (Griffin et al. 1992; Bedu et al. 2002; Diraison et al. 2003), and the primary function of adipose tissue appears to be lipid storage rather than lipid synthesis (Griffin et al. 1992). Earlier studies established that the in vitro incorporation of glucose into lipids was uniformly high in liver slices from ducks aged 2, 4 and 10 weeks but very low in the adipose tissues of these animals (Evans 1972). Most of the endogenous body lipids are of hepatic origin, and the growth and subsequent fattening of adipose tissue depend more on the availability of plasma triglycerides (TG; O'Hea and Leveille 1969; Alvarenga et al. 2011). Our initial data in ducks also demonstrated that DNL mainly occurred in the liver instead of in adipose tissue (Ding et al. 2012).

Considering that DNL in birds mainly occurs in hepatic tissue and plays a significant role in avian fat mass, we hypothesized that hepatic DNL may be the key contributor to individual differences in fat mass. To test this hypothesis, ducks with distinct genetic backgrounds, the Pekin duck (fast growth with high fat levels) and the Heiwu duck (a newly developed, artificially selected breed with slow growth and low fat levels), were selected for the present study to serve as genetically fat and lean animal models, respectively, thus providing a valuable comparative model for investigating the role of hepatic DNL in fat mass differences. These findings may provide new insight into the mechanisms underlying fat mass.

Birds and sampling

Pekin ducks (Anas platyrhynchos, designated PK) and Heiwu ducks (newly bred by artificial selection, designated HW) were raised under natural temperature and light conditions at the experimental waterfowl breeding farm of Sichuan Agricultural University. The embryos were incubated under the same conditions of temperature and humidity, and the birds had free access to a starting diet containing 22.36% crude protein and 12.66 MJ/kg of metabolizable energy. From week 4, the ducks were fed a diet containing 18.32% protein and 12.35 MJ/kg of metabolizable energy. During the embryonic stages, 20 embryos of each breed were sampled at each time point (embryonic days 15, 20 and 25). During the post-hatching stages, six birds (three males and three females) were randomly selected at each sampling time point. The PK ducks were sacrificed for sampling every week, and the HW ducks every 2 weeks. Before sampling, the ducks were fasted for 12 h; then approximately 2 mL of blood was collected from the wing vein and mixed with EDTA (0.8 g/L) in a vacuum tube, and the plasma was separated by centrifugation at 3000 g for 10 min at 4°C. The plasma was kept at −20°C until subsequent experiments. After the ducks were sacrificed, the organs and fat tissues, including the liver, heart, leg fat, abdominal fat and subcutaneous fat, were isolated for weighing.
Additionally, to produce an adequate comparison of abdominal fat and subcutaneous fat, 30 ducks (15 male and 15 female) of each breed were sacrificed at weeks 8 and 16, respectively. The tissues were immediately collected and quickly frozen at −80°C until subsequent analyses. All procedures in this study were conducted in compliance with the requirements of the Animal Ethics Committee of Sichuan Agricultural University.

Plasma lipid parameter determination

The lipid parameters of the blood plasma, including total cholesterol (TC), TG, high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C), were compared between the genetically lean and fat ducks. For each plasma parameter, six ducks (three male and three female) of each breed were analysed individually every 2 weeks from week 0 to week 16. Briefly, the plasma TG concentration was determined using a TG Assay Kit (GPO-PAP-LST, Sichuan Maker Biology Technology, China), and TC was determined using a Cholesterol Kit (CHO Kit, Sichuan Maker Biology Technology, China). HDL-C was detected with an HDL ELISA kit (Shanghai Bangyi, China), and LDL-C with an LDL ELISA kit (Mindray Biology, China). All assays were performed following the manufacturers' instructions.

RNA isolation and real-time PCR

Total RNA was isolated from samples of liver, leg fat, abdominal fat and subcutaneous fat from the two duck breeds (PK and HW) at weeks 8 and 16. For each time point and breed, RNA was isolated from six individuals (three male and three female) to ensure six individual data points. The RNA was isolated using the Trizol reagent according to the manufacturer's protocol (Invitrogen, Carlsbad, CA) and then quantified based on spectrophotometric absorbance at 260 nm. First-strand cDNA was synthesized from total RNA using the PrimeScript RT reagent kit (TaKaRa Biotechnology, Dalian, China). The primers for the amplification of the duck Acc (acetyl-CoA carboxylase 2), Dgat2 (diacylglycerol-O-acyltransferase 2), Fas (fatty acid synthase), cEBP (CCAAT/enhancer-binding protein) and PPARγ (peroxisome proliferator-activated receptor gamma) genes were designed using Primer Premier 5.0 (Premier Biosoft International, USA). The duck β-actin and 18S rRNA sequences from GenBank were used as internal references. All GenBank accession numbers for each gene and the expected product sizes are provided in Table 1. The real-time PCR was performed using a reaction mixture containing 1 μL of the newly generated cDNA template, 12.5 μL of SYBR Premix Ex Taq, 10.5 μL of sterile water and 0.5 μL of the primers for the target genes. The reaction was performed on an IQ5 real-time PCR thermal cycler (Bio-Rad, Munich, Germany). The thermal cycling parameters were as follows: 1 cycle at 95°C for 30 s, then 40 cycles of 95°C for 10 s and 60°C for 40 s, with a subsequent 80-cycle melt curve (starting at 55°C and increasing by 0.5°C every 10 s) to verify primer specificity. All reactions were run in triplicate. Relative expression of the target genes was analysed with the 2^−ΔΔCt method (Livak and Schmittgen 2001), using the geometric mean of the Ct values of the two internal control genes for normalization.
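The 2^−ΔΔCt calculation described above is simple enough to sketch. The following Python fragment (function name and all numbers are illustrative, not data from the study) shows how relative expression would be obtained with two reference genes combined through the geometric mean of their Ct values, as in the protocol:

import numpy as np
from scipy.stats import gmean

def rel_expression(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    """2^-ddCt relative quantification (Livak and Schmittgen 2001).
    ct_target:     Ct of the target gene in the test sample
    ct_refs:       Ct values of the internal controls (beta-actin, 18S rRNA)
    ct_target_cal: Ct of the target gene in the calibrator sample
    ct_refs_cal:   Ct values of the internal controls in the calibrator
    """
    d_ct = ct_target - gmean(ct_refs)           # normalize to the references
    d_ct_cal = ct_target_cal - gmean(ct_refs_cal)
    return 2.0 ** (-(d_ct - d_ct_cal))          # fold change vs the calibrator

# Hypothetical Ct values, for illustration only:
fold = rel_expression(ct_target=24.1, ct_refs=[16.2, 9.8],
                      ct_target_cal=25.0, ct_refs_cal=[16.0, 9.9])
print(f"relative expression (hypothetical): {fold:.2f}")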
ELISA

For each duck, 0.5 g of liver tissue was homogenized. For each time point and breed, six individuals were prepared for analysis. The concentration of ACC and the enzyme activities of FAS and ELOVL were determined using the corresponding ELISA kits, purchased from biotechnology companies (R&D, USA). The procedures were performed according to the manufacturer's instructions. Briefly, standard curves were first generated by measuring the optical densities of the standard samples provided in the kits; the optical densities of the test samples were then determined, and the enzyme amounts were derived from the standard curves.

Data analysis

All data were subjected to PROC GLM in SAS (Version 9.2, SAS Inc., Cary, NC) for testing, and the means were assessed for significance using Duncan's test. Correlations were assessed using Pearson's correlation coefficient, with significance evaluated by t-test. All statistical analyses were performed using SAS (version 9.3), with the significance level set at 0.05.

Results and analysis

Comparison of developmental characteristics of liver and adipose tissues between the two duck breeds

HW is a newly selected duck line characterized by slow growth and low fat deposition, whereas the PK duck is a domestic duck characterized by fast growth and high fat content. These two distinct genetic backgrounds were used in this study to illustrate the contributions of hepatic DNL to body fat mass. Figure 1 illustrates the phenotypic differences in the weights of the liver, leg fat, abdominal fat and subcutaneous fat between the two duck breeds. Overall, higher weights of the above-mentioned tissues were observed in the PK ducks, providing a reliable model for the study. The liver weights of the lean ducks were compared with those of the fat ducks at embryonic and post-hatching stages (Figure 1(A)). An apparent difference in the absolute weight of the liver began to emerge as early as embryonic day 20. Correspondingly, the heart, which exhibits little fat deposition, was used as a contrast to the liver in the present study; its absolute weight showed a trend similar to that of the liver, with a clear difference beginning at week 1. The difference in the weights of the leg fat tissues between the HW and PK ducks emerged as early as 1 week post hatching. The weights of the abdominal fat and subcutaneous fat tissues were compared at weeks 8 and 16, and both showed significantly higher fat content in the PK ducks than in the HW ducks.

Figure 2 illustrates the weights of the organs and fat tissues relative to overall body weight in the two duck breeds. Firstly, in the heart, a non-fat-depositing organ, no apparent difference was observed between the lean and fat ducks for most of the examined stages, except from week 8 to week 16, when the HW ducks had a higher ratio (p < .05). Similarly, the relative weight of the liver showed no differences for most of the analysed embryonic and post-hatching stages; significant differences appeared only at E25, w2, w4, w12 and w16 (p < .05). Significant differences were consistently observed only in the abdominal fat and subcutaneous fat tissues (Figure 3).

Variations in blood parameters between the two duck breeds

Blood glucose levels were monitored from the post-hatching stages until week 16. Overall, no apparent differences between the two duck breeds were observed during the entire study period.
Both breeds exhibited similar trends, with higher levels in the early post-hatching stages than in the later stages approaching the week 16 sampling point. Blood TG and TC concentrations also showed similar trends in the two breeds at each time point analysed, exhibiting a decreasing trend during the early post-hatching stages. The HDL concentration in the blood plasma initially increased from the early post-hatching stages until week 4 and then decreased from week 8. The LDL concentrations decreased in both duck breeds over the examined time points.

DNL-related gene expression and enzyme amounts in the two duck breeds

The DNL-related genes, including Acc, Dgat2 and Fas, were analysed in the liver tissues at weeks 8 and 16 (Figure 4). At week 8, the relative mRNA expression levels of the three genes showed no differences between the PK and HW ducks; at week 16, the expression levels were increased in the HW ducks and apparent differences from the PK ducks began to emerge. However, owing to the large individual variance, the differences were still not significant. The mRNA expression results were further examined at the protein level using ELISA kits (Figure 5). The ACC, ELOVL and FAS results likewise showed no significant differences between the genetically fat and lean ducks at weeks 8 and 16. In the peripheral fat tissues, the mRNA expression levels of cEBP and PPARγ were measured and compared at weeks 8 and 16. The expression of cEBP was similar in all three examined fat tissues in the PK and HW ducks. PPARγ was also expressed in all three peripheral adipose tissues studied, at relatively higher levels in the HW ducks at each time point; however, due to the large individual variance, a clear difference in the leg fat tissues was observed only at week 16.

Correlation analysis

The contributions of DNL to fat deposition were evaluated based on the correlation coefficients between the phenotypic fat items and the molecular DNL indicators (Table 2). Heart weight was an important contributing factor shaping body weight during the entire study period (R = 0.37-0.61, p < .05). Liver weight contributed greatly to duck body weight only in the PK ducks at week 8 (R = 0.82, p < .01) and in the HW ducks at week 16 (R = 0.64, p < .01). Fat weight contributed greatly to body weight except in the HW ducks at week 8 (R = 0.58-0.87, p < .01). These data indicated that the HW duck is a late-maturing breed compared with the PK duck. For body fat mass, interestingly, the heart contributed more than the liver: a significant correlation between heart weight and fat weight was observed at all stages studied, whereas a significant correlation between liver weight and fat weight was observed only at week 8 in the PK ducks (R = 0.48, p < .01) and at week 16 in the HW ducks (R = 0.58, p < .01). The correlation between the hepatic DNL molecular items and the phenotypic fat deposit amounts was also analysed; the results did not reveal any consistent contribution of hepatic DNL to body fat mass, although some DNL molecular items appeared to correlate negatively with fat mass.

Discussion

The underlying reason for individual differences in body fat mass remains unclear. The results of the present study supported different fat deposition capabilities in the PK and HW ducks.
The phenotypic divergence in the weights of the liver and fat tissues indicated that different genetic backgrounds existed between the two duck breeds, providing a suitable model for the subsequent analysis of the contribution of hepatic DNL to individual differences in duck fat mass. In ducks, our initial data demonstrated that the main DNL site is the liver in post-hatching ducks and that the adipose tissues are of little importance for DNL (Ding et al. 2012). It was therefore hypothesized that hepatic DNL, as an endogenous pathway for lipid synthesis, may play an important role in individual fat mass differences. In the present study, we focused on the expression pattern of lipogenic genes (Acc, Fas and Dgat2) and the enzyme levels (ACC, ELOVL and FAS) in the hepatic tissues of the PK and HW ducks. These genes and enzymes have been shown to control key lipogenesis processes and are well established as good indicators of DNL (Daval et al. 2000; Yen et al. 2005; Zhao et al. 2007; Rosebrough et al. 2008; Herault et al. 2010; Strable and Ntambi 2010). Our results demonstrated that the expression of these genes and the enzyme activities (or concentrations) showed no obvious differences between the two duck breeds, indicating that hepatic DNL may contribute little to the fat mass difference. In chickens, the expression of lipid synthesis and secretion-related genes was analysed in the livers of lean and fat chickens, and only a few genes (stearoyl-CoA desaturase, sterol response element binding factor 1 and hepatocyte nuclear factor 4) were found to be differentially expressed between the chicken lines (Bourneuf et al. 2006). In ducks, Zheng et al. (2014) examined differentially expressed proteins in the hepatic tissues of lean and fat Pekin ducks; in their data, no DNL-related proteins were identified, similar to our results indicating a limited contribution of hepatic DNL to the genetic fat mass difference.

The fat mass difference can also be attributed to the accumulation of triacylglycerols in the adipose tissue and to hypertrophy as well as hyperplasia of adipocytes, in addition to the contributions of hepatic DNL. Moreover, it has been reported that adipocyte proliferation mainly occurs in the early post-hatching stages in ducks. We further examined the mRNA expression levels of cEBP and PPARγ in the peripheral fat tissues, whose functions involve regulating adipocyte proliferation (Hu et al. 1995). However, no clear differences in the expression levels of these genes were observed in any of the fat tissues of the two duck breeds, indicating that the adipocyte proliferation status was similar. It was therefore concluded that adipocyte proliferation may not be the main contributing factor to individual fat mass differences. In our present study, although there were apparent phenotypic divergences in body fat mass between the two duck breeds, both grew at their normal rates and exhibited similar developmental status, as reflected by the similar percentages of liver and fat tissues relative to body weight. The similar developmental status of the two duck breeds may explain why the gene expression and enzyme amounts examined in our study showed no clear differences between the breeds.
Our results are supported by a transcriptome study, which demonstrated that no DNL-related genes were differentially expressed in genetically lean and fat chickens (Carre et al. 2002). Given that the levels of gene expression and enzyme amount per unit of hepatic tissue were approximately the same in the two duck breeds, liver size may account for the difference in body fat mass. Interestingly, in our results the peak liver weight in the PK ducks was approximately 2.5-fold that in the HW ducks, similar to the fold change of the extrahepatic adipose tissue between the two breeds; for example, the peak leg fat weight in the PK ducks was approximately 30 g, about three times that in the HW ducks. These findings were also supported by the correlation analysis, in which the hepatic DNL molecular items showed no consistent relation to the phenotypic fat deposit amounts. Therefore, it is suggested that a larger liver may generate a greater total amount of hepatic fatty acids.

Hepatic lipogenesis is known to be sensitive to nutritional conditions, such as high-carbohydrate and high-fat diets. In humans, DNL can be stimulated by high-carbohydrate and high-fat diets (Diraison et al. 2003; Strable and Ntambi 2010). In ducks, Hermier et al. investigated the effects of overfeeding on hepatic lipid channelling in two duck species, the common duck (Anas platyrhynchos) and the Muscovy duck (Cairina moschata). The Muscovy duck exhibited a higher degree of hepatic steatosis and a lower increase in adiposity and in the concentrations of plasma TG and VLDL in response to overfeeding, indicating a genotypic influence on VLDL and fat mass (Hermier et al. 2003). In their data, several DNL-related genes were differentially expressed between the two duck species. However, the common duck and the Muscovy duck belong to different genera and therefore have greater differences in their genetic backgrounds than do the PK and HW ducks in the current study. Herault et al. compared the feeding and species effects on lipid metabolism and steatosis in overfed Pekin and Muscovy ducks; they demonstrated a specific positive effect of feeding on the expression of genes involved mainly in fatty acid and TG synthesis and glycolysis, and a negative effect on genes involved in β-oxidation. A strong species effect was also observed in stearoyl-CoA desaturase 1 and, to a lesser extent, in diacylglycerol-O-acyltransferase 2 expression, leading to large differences in expression levels between the overfed Pekin and Muscovy ducks (Herault et al. 2010). Their data did not definitively show a species effect on the expression of the genes encoding the main enzymes involved in DNL. Taken together with our data, it is reasonable to believe that hepatic DNL may contribute less to the body fat mass difference in some duck breeds.

In summary, we investigated the possible effects of hepatic DNL on fat mass differences in genetically lean and fat ducks. Although a clear phenotypic divergence in extra-liver fat mass was observed between the two duck breeds, the relative weight of the fat tissues to body weight was similar.
Our results showed no clear differences in DNL-related gene expression or enzyme amounts between the PK and HW ducks, suggesting that hepatic DNL may contribute less to the fat mass difference in genetically different duck breeds than does liver size. Our findings may provide a fundamental explanation for the genetic differences in fat mass.

Notes to Table 2: Gene expression levels and enzyme activities (or concentrations) were examined in the livers of both the PK ducks and the HW ducks at weeks 8 and 16. Fat weight is the sum of the subcutaneous fat, abdominal fat and leg fat weights. For body weight, heart weight, liver weight and fat weight, n = 30; for gene expression and enzyme activities, n = 6. a: enzyme activities or concentrations. *Significance level of p < .05; **significance level of p < .01.

Disclosure statement

No potential conflict of interest was reported by the authors.
Grain bulk density measurement based on wireless network

To know the accurate quantity of stored grain, grain density sensors must be used to measure the grain's bulk density. However, multiple sensors must be inserted into the storage facility to collect data quickly during the inventory checking of stored grain. In this study, the ability of a coexisting ZigBee and Wi-Fi network to transmit data collected by density sensors was investigated. A system consisting of six sensor nodes, six router nodes, one gateway and one Android Pad was assembled to measure the grain's bulk density and calculate its quantity. The CC2530 chip with ZigBee technology served as the core of information processing and wireless detection in the sensor and router nodes. In the gateway, ZigBee operated on a different signal channel from Wi-Fi to avoid interference and was connected to the Wi-Fi module through a UART serial communication interface. The Android Pad received the measured data through the gateway and processed the data to calculate the grain quantity. The system enabled multi-point and real-time parameter detection inside the grain storage. Results show that the system has good expansibility, networking flexibility and convenience.

Introduction

Inventory checking of grain storages expends a great deal of manpower and material resources in China, but the accuracy of the measurement results is still unsatisfactory, and the actual density of the stored grain is difficult to determine. Sensors offer a potential solution to this underlying problem. The grain bulk density sensor was designed according to the relationship between the permittivity and bulk density of grain [1]: the electromagnetic wave transmission method is used to estimate the grain permittivity, from which the bulk density is calculated.

A wireless network is any type of communication network that uses wireless data connections for connecting network nodes, thus obviating the costly process of introducing cables into a building [2]. ZigBee is a low-cost, low-power, wireless mesh network standard targeted at the wide deployment of long-battery-life devices in wireless control and monitoring applications [3]. ZigBee devices can transmit data over long distances by passing data through a mesh network of intermediate devices to reach more distant ones, whereas Wi-Fi is a wireless local area network technology based on the IEEE 802.11 standards [4]. Devices that can use Wi-Fi technology include personal computers, smartphones, tablet computers and digital cameras.

Decision-makers need accurate information on the amount and quality of stored grain to prevent storage losses, which can be a major hindrance to food security. Grain bulk density sensors can be used to measure grain bulk density, which is a good indicator of stored grain conditions. However, multiple sensors must be inserted into the grain storage to collect measured data quickly during the inventory checking of stored grain. This can be costly and sometimes gives erroneous results. Therefore, in this study, the ability of a coexisting ZigBee and Wi-Fi network to transmit data collected by density sensors was investigated.
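The paper cites the permittivity-bulk-density relationship [1] without giving its form, so the following Python sketch is only a plausible illustration of the sensing chain. The plane-wave phase relation in the first function is standard physics; the quadratic calibration and its coefficients are placeholders that would have to be fitted to reference grain samples:

import math

C0 = 2.998e8  # speed of light in vacuum, m/s

def permittivity_from_phase(delta_phi_rad, freq_hz, path_m):
    """Relative permittivity of the grain from the excess phase shift of an
    electromagnetic wave crossing a grain layer of thickness path_m:
    delta_phi = 2*pi*f*d*(sqrt(eps_r) - 1)/c  (plane-wave approximation)."""
    sqrt_eps = 1.0 + C0 * delta_phi_rad / (2.0 * math.pi * freq_hz * path_m)
    return sqrt_eps ** 2

def bulk_density_from_permittivity(eps_r, a=0.0, b=120.0, c=350.0):
    """Bulk density (kg/m^3) from an assumed quadratic calibration
    rho = a*eps_r^2 + b*eps_r + c; a, b, c are hypothetical coefficients."""
    return a * eps_r ** 2 + b * eps_r + c

# Example with made-up sensor readings:
eps = permittivity_from_phase(delta_phi_rad=0.9, freq_hz=100e6, path_m=0.5)
print(f"eps_r = {eps:.2f}, rho = {bulk_density_from_permittivity(eps):.0f} kg/m^3")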
2 System design

2.1 System architecture

A wireless network system consisting of six sensor nodes, six router nodes, one gateway and one Android Pad was developed to measure grain bulk density and calculate quantity, as shown in Figure 1. A ZigBee network was used for communication among the sensor nodes, router nodes and gateway. The Android Pad connected to the gateway through Wi-Fi to send commands and receive data.

Figure 1. Grain bulk density measurement system architecture.

The sensor nodes collected data from the environment and sent them to the routers. As well as running an application function, a router node could act as an intermediate router, passing on data from other devices. The gateway had to be configured in advance to be able to receive data from the sensor and router nodes. To communicate and transmit data wirelessly, each sensor or router node required a ZigBee module. In addition, such nodes required battery power to operate the entire module.

System components

The sensor node consisted of a CC2530 ZigBee module, an AD8302 phase detector, an AD8532 operational amplifier and a TPS60211 power manager. The CC2530 is a true system-on-chip (SoC) solution for ZigBee applications, combining the excellent performance of a leading RF transceiver with an industry-standard enhanced 8051 MCU [5].

The router node consisted of a CC2530 ZigBee module, an AD9851 Direct Digital Synthesizer (DDS) and a MAX2650 amplifier. The shortwave signal was generated by the AD9851 and controlled by the CC2530 ZigBee module through the SPI interface.

The wireless gateway consisted of a CC2530 ZigBee module and a USR-WIFI232 Wi-Fi module. ZigBee worked on a different signal channel from Wi-Fi to avoid interference and was connected to the Wi-Fi module through the UART serial communication interface in the gateway.

The hardware specifications of the system used in this study are shown in Table 1 (Table 1. System hardware specifications).

Wireless network configuration

The gateway had a user interface for system configuration called Manager System, similar to the configuration applications of routers in general. By default, Manager System could be accessed using a web browser via Wi-Fi (at 10.10.100.254). The initial view of Manager System appeared shortly after logging in with correct credentials. The prerequisite UART settings included the Baudrate, Data Bits and Stop Bit (Fig. 2). Furthermore, network configuration such as the Network Name (SSID) and Channel was performed in the interfaces tab of the Manager System, as shown in Fig. 3.

Experiment result

In the study, four sensor nodes and four router nodes connected to a wireless gateway were used. The gateway was then connected to an Android Pad via a Wi-Fi network. The resultant network topology is shown in Figure 4 (Figure 4. Network topology in this study).

The short address of the gateway was 0x00, and each sensor node or router node had a different short address. After all devices were connected to the gateway, the "Start" button was pressed to begin receiving measured data. The Android Pad received the measured data from each sensor node through the gateway and processed the data to calculate the quantity. The result of the measurement is given in Figure 5.
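The measurement flow just described (ZigBee frames arriving at the gateway over UART, forwarded to the Android Pad over Wi-Fi) can be pictured with a small sketch. The frame layout, port names, addresses and scaling below are assumptions for illustration, not details taken from the paper:

import socket
import serial  # pyserial

UART_PORT, UART_BAUD = "/dev/ttyS0", 115200   # CC2530 <-> Wi-Fi module link
PAD_HOST, PAD_PORT = "10.10.100.100", 8899    # hypothetical Pad endpoint

def forward_frames():
    """Read sensor frames from the ZigBee side and forward them over TCP."""
    with serial.Serial(UART_PORT, UART_BAUD, timeout=1) as uart, \
         socket.create_connection((PAD_HOST, PAD_PORT)) as tcp:
        while True:
            # Assumed 3-byte frame: node short address + 16-bit raw density value
            frame = uart.read(3)
            if len(frame) == 3:
                node, raw = frame[0], int.from_bytes(frame[1:], "big")
                density = raw / 10.0  # assumed scaling to kg/m^3
                tcp.sendall(f"{node:#04x},{density:.1f}\n".encode())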
Conclusion

A grain bulk density measurement system based on a wireless network was developed in this study. It demonstrates that a Wi-Fi network and a ZigBee network can be used together reliably. By joining the networks, an Android Pad can be employed to send commands, receive data and manage measured values. The system performed multi-point and real-time detection inside the grain storage. These results show that the system has good expansibility, networking flexibility and convenience. It is recommended, however, to explore the potential of 4G/3G/GPRS for remotely monitoring the grain bulk density in the future.

Figure 5. Result of the measurement. The Android Pad had a local MySQL database to store the measured values. A local database can be vulnerable to damage and limited by storage capacity, so it is advisable to back up the data to external storage.
Matter Effect of Light Sterile Neutrino: An Exact Analytical Approach

The light sterile neutrino, if it exists, will give an additional contribution to the matter effect when active neutrinos propagate through terrestrial matter. In the simplest 3+1 scheme, three more rotation angles and two more CP-violating phases in the lepton mixing matrix make the interaction formally complicated. In this work, the exact analytical expressions for the active neutrino oscillation probabilities in terrestrial matter, including the sterile neutrino contribution, are derived. It is pointed out that this set of formulas contains information both in matter and in vacuum, and can easily be tuned by choosing the related parameters. Based on the generic exact formulas, we present the oscillation probabilities of typical medium- and long-baseline experiments. Taking the NOνA experiment as an example, we show that in a particular parameter space the sterile neutrino gives an important contribution to the terrestrial matter effect, and the Dirac phases play a vital role.

Introduction

In the Standard Model (SM), the neutrino, as a component of the SU(2) left-handed doublet, is electrically neutral, massless and takes part only in the weak interaction. It has now been well established that at least two active neutrinos are massive, with tiny masses. The origin of neutrino mass is still an open question; many efforts have been devoted to it, including the seesaw mechanism [1] and radiative mass-generation mechanisms (for example, [2], [3]; for a recent review see [4]). Generally speaking, new particles outside the SM particle spectrum appear in association with neutrino mass models. Although it does not participate in the weak interaction, the hypothetical sterile neutrino is required in some neutrino mass models beyond the SM. For example, in the Type I seesaw mechanism, the heavy right-handed neutrino singlet contributing to the tiny mass of the left-handed neutrino is absent from the SU(2) interaction and hence is a sterile neutrino. However, the mass of the sterile neutrino, which may vary from eV to TeV, has not yet been determined.

Recently, experimental searches for the sterile neutrino have been active. For a heavy sterile neutrino, the running of the LHC provides a particular opportunity. The IceCube Neutrino Observatory, located in Antarctica, offers a unique vision; recently, a high-energy neutrino event clearly beyond the expectations of the SM was observed by IceCube [6]. Meanwhile, neutrino oscillation experiments are usually considered a useful platform to extract information on the light sterile neutrino. Indeed, oscillation experiments have hinted at the existence of a sterile neutrino. In 2001 the LSND experiment searched for ν̄μ → ν̄e oscillations, suggesting that neutrino oscillations occur in the range 0.2 < Δm² < 10 eV² [7]. Later, the MiniBooNE experiment indicated a two-neutrino oscillation, ν̄μ → ν̄e, occurring in the range 0.01 < Δm² < 1.0 eV² [8]. Recently, combining the ν̄e disappearance mode from the Daya Bay Collaboration with the νμ oscillation data from MINOS, the two collaborations gave a joint analysis, incorporating the early Bugey-3 data, and claimed that Δm²41 < 0.8 eV² can be excluded at 95% C.L. [9]. The IceCube neutrino telescope, measuring the atmospheric muon neutrino spectrum, extended the exclusion limits in 2016 to sin²2θ24 ≤ 0.02 at Δm² ∼ 0.3 eV² at 90% confidence level [10]. From a recent effort of the NEOS collaboration, the mixing parameter limit was obtained as sin²2θ14 ≤ 0.1 for 0.2 < Δm²41 < 2.3 eV² [11].
Taking into account recent progress, a global fit of short-baseline neutrino oscillations has been updated [28], giving Δm²41 ≈ 1.7 eV² (best fit), 1.3 eV² (at 2σ) and 2.4 eV² (at 3σ), and 0.00047 ≤ sin²2θeμ ≤ 0.0020 at 3σ. The sterile neutrino, possibly a portal to new physics, is still far from being well understood today.

When neutrinos propagate in matter, the matter effect should be taken into account, and its significance differs between experiments. It is known that in short- and medium-baseline experiments the matter effect usually does not give a dominant contribution, while in long-baseline experiments the oscillation can be strongly affected by it. However, in a precision experiment like JUNO, even though the baseline is not so long, the matter effect may reduce the sensitivity of the mass-ordering measurement; thus a careful study of the terrestrial matter effect on the medium-baseline experiment has been performed [5]. A similar analysis should be considered if the sterile neutrino exists, regardless of the length of the experiment's baseline. Some efforts have been contributed. Klop and Palazzo studied sterile-neutrino-induced CP violation with T2K data [14], where they developed an approximate method that helps to simplify the calculation. Choubey et al. extended the discussion to the DUNE, T2HK and T2HKK experiments [15], [16]. In [17], Ghosh et al. studied the mass hierarchy sensitivity in the presence of a sterile neutrino in NOνA. A general discussion of the light sterile neutrino effect in long-baseline experiments has been given by Dutta et al. [18]. Later, joint short- and long-baseline constraints on the light sterile neutrino were obtained by Capozzi et al. [19]. Recently, more works related to T2HK, DUNE and NOνA have appeared in [20], [21] and [22], and an updated global analysis has been performed by Dentler et al. [23]. In addition to the approximate method proposed by Klop and Palazzo, other alternative approaches have also been proposed [24]. Meanwhile, we should keep in mind that in the above works the analysis is based on either approximate analytical methods or numerical approaches. For particular modes such approaches are convenient. On the other hand, since the sterile neutrino mass (even the light sterile neutrino mass) is unknown, the approximations adopted above risk losing information even though they speed up the calculation. Thus a complete exact analytical solution, however formally complicated, is valuable. Such efforts were made previously in [25] and [26], with two different approaches. In this work, we develop the method of [25], improve its results, and provide a complete exact analytical solution.

This paper is organized as follows. In section 2 we give a brief introduction to neutrino oscillation with the matter effect taken into account. In section 3 we derive the mass-squared differences including the matter effect and show the related rotation matrix elements and propagation probabilities explicitly. Applications of the analytical solution are presented in section 4. In section 5 we summarize and conclude. More details involved in section 3 are given in the appendix.

Theoretical Framework

The picture of neutrino oscillation is now well understood. The flavor eigenstates and mass eigenstates of neutrinos are not identical; they are related by a mixing. Due to this mixing, described by the rotation matrix U, the identity of a neutrino can change during its journey from source to destination, which is called neutrino oscillation.
The oscillation probability, that is, the probability of capturing a neutrino as ν_β from an initial ν_α beam, is

P(\nu_\alpha \to \nu_\beta) = \Big| \sum_i U_{\alpha i}^* U_{\beta i} \, e^{-i m_i^2 L / 2E} \Big|^2 = \sum_{i,j} U_{\alpha i}^* U_{\beta i} U_{\alpha j} U_{\beta j}^* \, e^{-i \Delta m_{ij}^2 L / 2E},   (2.1)

where L is the propagation distance and E is the energy carried by the neutrino. Both the appearance mode and the disappearance mode are contained in Eq. (2.1). The oscillation probability is determined by the universal parameters U_αi and m_i, as well as by the experiment-dependent parameters E and L.

In the Standard Model there are only three flavors of active neutrinos; thus the mixing matrix, named the PMNS matrix, is parameterized by three rotation angles and one CP-violating phase. Within this theoretical framework, the three angles (θ12, θ23, θ13) are measured by solar, atmospheric and reactor neutrino experiments, respectively. The remaining undetermined parameters, the CP phase δ as well as the sign of Δm²13, could become reachable in the following ten years. On the other hand, the possibility of one more light sterile neutrino still exists. The sterile neutrino (denoted ν_s), unlike the active neutrinos (denoted ν_e, ν_μ, ν_τ in the flavor basis), is known for its absence from the SM weak interactions. However, its effect appears indirectly through mixing with the active neutrinos. Such a mixing is described by the lepton mixing matrix U,

\nu_\alpha = \sum_i U_{\alpha i} \, \nu_i, \qquad \alpha = e, \mu, \tau, s, \quad i = 1, \ldots, 4,

which characterizes the rotation between the mass eigenstates and the flavor eigenstates in vacuum. With more degrees of freedom in U, the oscillation probability contains richer information. Six angles and three phases are needed to parameterize the mixing matrix U. Similarly to the standard parameterization of the PMNS matrix, putting the two extra phases in the 1-4 and 3-4 planes, we can write the four-dimensional mixing matrix as

U = R(\theta_{34}, \delta_{34}) \, R(\theta_{24}) \, R(\theta_{14}, \delta_{14}) \, R(\theta_{23}) \, R(\theta_{13}, \delta_{13}) \, R(\theta_{12}),

in which R(θ_ij(, δ_ij)) represents an Euler rotation in the i-j plane without (with) a CP phase. More details of the four-dimensional U are shown explicitly in appendix A.

When passing through matter, active neutrinos interact with matter through the weak interaction. More exactly, ν_e interacts via both the charged current and the neutral current, while ν_μ and ν_τ receive only the neutral-current interaction by exchanging Z bosons. Though the sterile neutrino itself does not take part in the weak interaction, after removing the global neutral-current term, which does not affect the oscillation probability, ν_s acquires an induced nonzero term in the effective Hamiltonian while the corresponding ones for ν_μ,τ vanish, giving

\tilde H = \frac{1}{2E} \left[ U \, \mathrm{diag}(m_1^2, m_2^2, m_3^2, m_4^2) \, U^\dagger + \mathrm{diag}(A, 0, 0, A') \right],   (2.4)

where U is the lepton mixing matrix in vacuum and

A = 2\sqrt{2} \, G_F n_e E, \qquad A' = \sqrt{2} \, G_F n_n E,

with G_F the Fermi constant and n_e, n_n the electron and neutron number densities of the medium. Without loss of generality, the Hamiltonian can always be written in the more compact form

\tilde H = \frac{1}{2E} \, \tilde U \, \mathrm{diag}(\tilde m_1^2, \tilde m_2^2, \tilde m_3^2, \tilde m_4^2) \, \tilde U^\dagger,

where the effective masses m̃_i and the newly defined effective lepton mixing matrix Ũ incorporate the information of the matter effect. Hence the oscillation probability including the matter effect has the same structure as the one in vacuum, that is,

\tilde P(\nu_\alpha \to \nu_\beta) = \sum_{i,j} \tilde U_{\alpha i}^* \tilde U_{\beta i} \tilde U_{\alpha j} \tilde U_{\beta j}^* \, e^{-i \Delta \tilde m_{ij}^2 L / 2E}.   (2.6)

Hereafter we adopt P to stand for P̃, its meaning being clear. Obviously, if one works out Ũ_αi and m̃²_i explicitly, the probability is fully presented. However, such a calculation would be challenging. To avoid the difficulty, by working out the effective mass differences and some necessary combinations of the entries of Ũ, we can obtain complete exact expressions for P as well. In the following section, we derive the necessary parameters.

Effective Parameters

The mass differences Δm²_ij and the lepton mixing matrix U_αi in vacuum are universal; the corresponding quantities in matter are corrected by the matter parameters. We first give Δm̃²_ij, based on which Ũ_αi is also obtained.
Effective mass-square difference

In this part we aim to solve for Δm̃²_ij. It is known that a constant can be removed from the diagonal entries simultaneously, as it contributes only a global phase which does not affect the oscillation probability. Then, by subtracting a global m₁² in Eq. (2.4), we have

2E \tilde H - m_1^2 \, \mathbb{1} = U \, \mathrm{diag}(0, \Delta m_{21}^2, \Delta m_{31}^2, \Delta m_{41}^2) \, U^\dagger + \mathrm{diag}(A, 0, 0, A').   (3.1)

The induced mass differences Δ̃m²_i1 ≡ m̃²_i − m₁² can be combined into the effective mass differences which we seek,

\Delta \tilde m_{ij}^2 = \tilde\Delta m_{i1}^2 - \tilde\Delta m_{j1}^2,   (3.2)

i.e. the effective mass difference between two arbitrary effective masses can be reduced to the differences from m₁². Thus the aim is now simplified to finding the Δ̃m²_i1, that is, to the diagonalization of the right-hand side of Eq. (3.1). In principle, the key point of the diagonalization is to solve a quartic equation, which is fortunately solvable. The particular quartic equation involved, as well as its solutions, is shown in appendix B. With the necessary newly defined parameters, we obtain the exact analytical expressions for Δ̃m²_j1 (j = 1, 2, 3, 4), which depend on Δm²_i1 (i = 1, 2, 3, 4). Hence, by use of Eq. (3.2), the effective mass differences, including the matter-effect correction, can now be expressed explicitly in terms of the intermediate parameters b, p, q and S defined in appendix B. Note that here we have assumed the normal mass hierarchy (m₁² < m₂² < m₃² < m₄²); without loss of generality, the other mass orderings can be treated similarly.

Effective lepton mixing matrix

In addition to the effective mass differences, the oscillation probabilities of neutrino propagation rely on the lepton mixing matrix as well. Not every entry of the matrix is needed; only some particular combinations are concerned. We have shown how to obtain these quantities in appendix C, in which the general expressions are given in Eqs. (C.7)-(C.10). In this section, we restrict our interest to typical reactor and accelerator neutrino experiments. The relevant entries are discussed below.

• The reactor neutrino experiments, ν̄e → ν̄e: For the disappearance mode of the anti-electron neutrino, the only concerned entry is |Ũ_ei|², expressed through auxiliary functions F^ij_e. At first glance, |Ũ_ei|² relies on Δm̃²_ij, Δm²_ij and U_αi. Note that Δm̃²_ij = Δ̃m²_i1 − Δ̃m²_j1, and that Δ̃m²_i1 is a solution of the quartic equation, which further relies on Δm²_i1 and U_αi as well as on the matter parameters A and A′. So the free parameters for the matter-effect correction to |Ũ_ei|² are Δm²_ij, U_αi, A and A′.

• The accelerator neutrino experiment, disappearance mode ν_μ → ν_μ: Both the disappearance mode and the appearance mode are used in accelerator neutrino experiments. For the disappearance mode, the required |Ũ_μi|² has the same structure as |Ũ_ei|²: except for a difference in F^ij_α, all other terms are the same up to the corresponding change e → μ. One may find a consistent result in [25].

• The accelerator neutrino experiment, appearance mode ν_μ → ν_e: In this case, a distinct difference from the disappearance mode is that the product of two entries, Ũ_ei Ũ*_μi, is required. One can immediately obtain the corresponding relation from the general expressions in appendix C. Note that the corresponding result provided in [25] is not consistent with ours, while our calculation is confirmed by numerical evaluation.
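Since the closed-form expressions are lengthy, a numerical diagonalization is a convenient cross-check (the authors confirm their formulas numerically as well). The following Python sketch builds the matrix on the right-hand side of Eq. (3.1) and extracts the effective quantities; the rotation ordering follows the parameterization quoted above, and the matter-term normalizations in the comments are our assumptions:

import numpy as np

def rot(i, j, theta, delta=0.0, dim=4):
    """Euler rotation in the (i, j) plane with an optional CP phase."""
    R = np.eye(dim, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = R[j, j] = c
    R[i, j] = s * np.exp(-1j * delta)
    R[j, i] = -s * np.exp(1j * delta)
    return R

def prob(U, dm2, A, Ap, L_km, E_GeV, alpha=1, beta=0):
    """P(nu_alpha -> nu_beta) in matter for the 3+1 scheme.
    dm2 = (dm21, dm31, dm41) in eV^2; A, Ap are the matter terms in eV^2."""
    M = U @ np.diag([0.0, *dm2]) @ U.conj().T + np.diag([A, 0.0, 0.0, Ap])
    lam, Ut = np.linalg.eigh(M)          # effective dm~^2_i1 and columns of U~
    # phase m~^2 L / 2E: 2 * 1.267 rad per (eV^2 * km / GeV)
    phase = np.exp(-2j * 1.267 * lam * L_km / E_GeV)
    return abs(np.sum(Ut[beta] * Ut[alpha].conj() * phase)) ** 2

# Angles as quoted in the text; here delta34 = 0 and delta13 = delta14 = pi/2:
th = {k: np.arcsin(np.sqrt(v)) for k, v in
      {"12": 0.304, "13": 0.0218, "23": 0.437, "14": 0.019, "24": 0.015}.items()}
U = (rot(2, 3, 0.0) @ rot(1, 3, th["24"]) @ rot(0, 3, th["14"], np.pi / 2)
     @ rot(1, 2, th["23"]) @ rot(0, 2, th["13"], np.pi / 2) @ rot(0, 1, th["12"]))

E = 2.0                         # GeV, NOvA-like beam energy
A = 1.52e-4 * 0.5 * 2.6 * E     # 2*sqrt(2)*G_F*n_e*E for Y_e = 0.5, rho = 2.6 g/cm^3
Ap = 0.5 * A                    # sqrt(2)*G_F*n_n*E, assuming n_n ~ n_e for Y_e = 0.5
print(prob(U, (7.5e-5, 2.457e-3, 0.1), A, Ap, L_km=810, E_GeV=E))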
Exact oscillation probability

Armed with the effective mass differences and the effective mixing-matrix entries, the oscillation probabilities follow immediately:

P(\nu_\alpha \to \nu_\beta) = \sum_{i,j} \tilde U_{\alpha i}^* \tilde U_{\beta i} \tilde U_{\alpha j} \tilde U_{\beta j}^* \, e^{-i \tilde\Delta_{ij}},   (3.13)

where Δ̃_ij ≡ Δm̃²_ij L/2E and L is the baseline of the particular neutrino experiment. The input parameters are (Δm²_i1, U_αi, A, A′), where the description of U_αi further relies on its parametrization; one choice can be found in appendix A. Throughout the whole derivation, no additional assumptions are adopted except the unitarity of U and Ũ, so the exact analytical expressions are applicable for both short-baseline and long-baseline experiments. Meanwhile, we would like to point out that the formulas derived here are the most generic ones in the 3+1 scheme, since all possible situations, including the SM case, are contained in them. In particular, extreme cases may be obtained by tuning the parameters in our formulas; for instance, switching off the matter terms (A = A′ = 0) recovers propagation in vacuum, while switching off the mixing with the fourth state recovers the pure active-neutrino case.

Applications and Discussion

The exact analytical solution keeps the original information of the sterile neutrino without any approximation. Since the sterile neutrino mass is still unknown, approximate formulas, though they can speed up the evaluation, risk losing some information. In this section, based on the exact solutions, we give a numerical analysis for typical neutrino experiments. For each experiment, two types of input parameters are relevant: the universal parameters, including the mixing matrix and the mass differences, and the non-universal ones, which depend on the experiment location, the neutrino source and the matter-effect parameters. For illustration, we take the input parameters as follows. There are 6 rotation angles and 3 Dirac phases in the mixing matrix, while the oscillation-irrelevant Majorana phases can be ignored here. We take sin²θ13 = 0.0218, sin²θ12 = 0.304 and sin²θ23 = 0.437 from the SM global fit [27]; for the other three angles we choose sin²θ14 = 0.019, sin²θ24 = 0.015 and sin²θ34 = 0 [28]. Throughout the simulation, we fix one of the three Dirac phases as δ34 = 0 and leave the other two as free parameters for the purpose of illustration. As for the mass-squared differences, two of the three are consistent with the SM global fit, Δm²21 = 7.5 × 10⁻⁵ eV² and Δm²31 = 2.457 × 10⁻³ eV², while the remaining one is fixed as Δm²41 = 0.1 eV². To describe the matter effect, we adopt the relevant parameters from a realistic oscillation experiment [29], setting the matter density to ρ ≈ 2.6 g/cm³ and the electron fraction to Y_e ≈ 0.5.

Medium baseline experiment

Around a nuclear power plant (NPP), plenty of anti-electron neutrinos are produced via β decay in the nuclear reactions. Detectors can be placed at suitable sites near the plant to explore reactor neutrino events. Usually the baselines of such experiments are in the short or medium range. For exploratory experiments, the matter effect is not among the main considerations, but the situation can change in precision measurements, where the experimental sensitivity may be affected. The ongoing Jiangmen Underground Neutrino Observatory (JUNO) experiment [30], with its baseline L = 52.5 km, is one of these experiments. Regarding the matter effect, our concern is whether the oscillation probability changes between matter and vacuum, both in the purely three-flavor active-neutrino case and in the framework of active plus sterile neutrinos.
In Fig. 1 we plot the relative difference of the probability between matter and vacuum as a function of energy. In order to discriminate the effect of the Dirac phases, we choose typical values of the two phases δ13 and δ14 and make various combinations. In this analysis only the normal hierarchy (NH) of the neutrino masses is presented; the inverted hierarchy (IH) case shows similar behavior, though it is not shown explicitly. 1) Around the most promising range, E ∼ 3 MeV, the relative difference can reach 4%, which could possibly be distinguished by the JUNO detector. 2) The sterile neutrino contribution does not affect the probability curve dramatically; that is to say, for short/medium-baseline experiments the sterile neutrino effect is quite limited. 3) The effect of the Dirac phases appears bleak: no distinction can be seen among the different phase combinations. Hence we may conclude that short- and medium-baseline experiments are not sensitive to the matter effect of the sterile neutrino, nor to the CP-violating Dirac phases.

Long baseline experiment

Neutrino beams produced at accelerators usually carry higher energy and can be detected at a long distance from the source. In this part, we take the NOνA experiment [31], with its baseline L = 810 km, as an example to illustrate the properties of the long-baseline case. We show the oscillation probability of the appearance mode ν_μ → ν_e in Fig. 2, where the brown curves stand for oscillation in the 3+1 scheme and the blue curves correspond to the SM case, while solid (dashed) lines mean that the matter effect has (has not) been included. In the plot, we have chosen typical CP-phase combinations of (δ13, δ14) in the NH case, and consider the variation in the energy range 1-3 GeV. One can address the following points: 1) The matter effect is not negligible; on the contrary, it is important in both the 3ν and 4ν cases. Around 1 GeV, the relative difference of the probabilities can be as large as 50% in either scenario, and it is still ∼20% around the maxima of the oscillation probabilities. 2) Whether propagating in vacuum or in matter, the sterile neutrino gives a non-negligible contribution to the oscillation probability: in each graph, the dashed lines deviate visibly from their solid counterparts. 3) The CP-violating Dirac phases also play a non-negligible role. Comparing Fig. 2a with Fig. 2c, one may see that the oscillation probability is affected: in the scenario (δ13, δ14) = (0, π/2) the blue lines lie almost in the middle of the corresponding brown lines, but in the scenario (δ13, δ14) = (π/2, π/2) the blue curves deviate distinctly from the average of the brown ones. Therefore we may conclude that in long-baseline experiments, in the presence of a sterile neutrino, the matter effect cannot be ignored, and the CP-violating Dirac phases in the mixing matrix may play an important role in the sterile neutrino's matter effect. A more comprehensive analysis displaying the entanglement of the phases is necessary, and we will present it elsewhere.

Conclusion

In this work, we have derived exact formulas for the oscillation probabilities in matter for medium- and long-baseline experiments in the presence of an additional light sterile neutrino. In particular, the key quantities contributing to the oscillation probability, Δm̃²_ij and Ũ_αi Ũ*_βi, are shown explicitly. Based on the exact formulas, we have performed a detailed study of the matter-effect correction in medium- and long-baseline experiments.
We found that in a medium-baseline experiment like JUNO, the matter-effect contribution is negligible even in the presence of a light sterile neutrino. But in long-baseline experiments, taking NOνA as an example, the matter-effect contribution plays a very important role, especially as the baseline grows.

A The parameterization of the mixing matrix In the (3 + 1) scenario, the full neutrino mixing is characterized by a 4 × 4 matrix. To parameterize it, we need 6 rotation angles and 3 additional Dirac phases [32]. The Majorana phases are omitted here because they do not enter the oscillation process. The mixing matrix can be constructed from 6 two-dimensional rotations, where each R_ij is a four-dimensional rotation matrix whose elements in the (i, j) subblock read

$$\begin{pmatrix} c_{ij} & s_{ij}e^{-i\delta_{ij}} \\ -s_{ij}e^{i\delta_{ij}} & c_{ij} \end{pmatrix},$$

with c_ij = cos θ_ij, s_ij = sin θ_ij (a phase δ_ij is attached to three of the rotations). The detailed matrix elements follow by multiplying out the six rotations. Apparently, if we set the angles related to the fourth neutrino to zero, this 4 × 4 lepton mixing matrix reduces to the 3 × 3 PMNS matrix. In order to solve Eq. (3.1), we resort to solving a quartic equation, whose roots are denoted λ_i (i = 1, 2, 3, 4) and whose coefficients are defined in terms of the Levi-Civita symbols ε_ijk and ε_ijkl. More auxiliary quantities are introduced to make the result more concise. With the above notation, we find the solutions λ_i, ordered as λ_1 < λ_2 < λ_3 < λ_4. Notice that we do not assume a normal or inverted mass hierarchy; thus Eq. (B.4) and the inequality above are hierarchy independent. If the neutrinos are in the inverted mass hierarchy (m_3 < m_1 < m_2 < m_4), we can simply set m̃²_3 = m²_1 + λ_1, m̃²_1 = m²_1 + λ_2, m̃²_2 = m²_1 + λ_3, m̃²_4 = m²_1 + λ_4. If they are in the normal mass hierarchy (m_1 < m_2 < m_3 < m_4), the result is Eq. (3.4).

C Calculation of the effective matrix Ũ_αi Ũ*_βi From Eq. (2.6), we see that the oscillation probability depends on certain combinations of the entries of Ũ, namely Ũ_αi Ũ*_βi, where i is not summed. In [25] some of these calculations have been done; the α = β case can be confirmed, but the result for α ≠ β appears to be wrong, so a reconsideration is required. In this section, we complete this task by an explicit calculation. Since both Ũ and U are unitary, one can subtract a diagonal matrix from Eq.
Evaluating Multiobjective Evolutionary Algorithms Using MCDM Methods 1Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Nanjing University of Information Science & Technology, Nanjing 210044, China 2Research Center for Prospering Jiangsu Province with Talents, Nanjing University of Information Science & Technology, Nanjing 210044, China 3China Institute for Manufacture Developing, Nanjing University of Information Science & Technology, Nanjing 210044, China 4School of Management Science and Engineering, Nanjing University of Information Science & Technology, Nanjing 210044, China

Introduction Without loss of generality, the mathematical formulation of multiobjective problems (MOPs) can be expressed as follows:

$$\min F(x) = (f_1(x), f_2(x), \ldots, f_m(x)), \quad x \in \Omega, \qquad (1)$$

where x = (x_1, x_2, ..., x_n) is the decision vector in the decision space Ω and F(x) is the vector of objective functions. Generally speaking, the objectives contradict each other: no single solution optimizes all of them, and optimizing one objective often leads to deterioration in at least one other. In single-objective optimization, algorithm performance can be evaluated by the difference between the obtained value f(x) and the optimal function value. However, this method cannot be adopted for MOPs, so many criteria have been proposed to evaluate the performance of multiobjective evolutionary algorithms (MOEAs). In fact, the experimental results of almost every algorithm indicate that the proposed algorithm is competitive compared with the state-of-the-art algorithms. Nondominated objective space and box plots are adopted in SPEA2 [4]. NSGAII employs convergence and diversity metrics to compare with SPEA and PAES [8]. Set convergence and the inverted generational distance (IGD) are used to evaluate the performance of MOEAD [9].
The epsilon indicator is used in IBEA [11]. Convergence measurement, spread, hypervolume, and computational time are selected as performance metrics in epsilon-MOEA [12]. To validate the proposed MOPSO, four quantitative performance indexes (success counting, IGD, set coverage, and two-set difference hypervolume) and one qualitative index (plotting the Pareto fronts) are adopted [13]. Three quality indicators, the additive unary epsilon indicator, spread, and hypervolume, are considered in SMPSO [14]. Spacing and binary metrics are used in GD3 [15]. Three metrics, generational distance (GD), spread, and hypervolume, are used to assess ABYSS [16]. GD, diversity, computational time, and box plots are considered as measurements in MOSOS [17]. GD and diversity metrics are adopted in MOEDA [18]. Three metrics, GD, IGD, and hypervolume, are used in GAMODE [19]. Among these metrics, some focus on the convergence of MOEAs, while others pay attention to their diversity. Convergence measures the ability to attain the global Pareto front, and diversity measures the distribution of solutions along the Pareto front. It is observed that each proposed algorithm typically introduces only a few metrics to estimate performance based on benchmark results, and the conclusion is invariably that the algorithm is the best or highly competitive. However, it is unreliable to measure MOEA performance by only one or two metrics: each metric demonstrates only a specific aspect of performance while neglecting other information. For instance, the metric GD provides information about the convergence of an MOEA, but it cannot evaluate its diversity. Therefore, such evaluations are not comprehensive and cannot estimate the whole performance of an MOEA. As the evaluation of MOEAs involves many metrics, it can be regarded as a multiple-criteria decision-making (MCDM) problem, and MCDM techniques can be used to cope with it. In order to overcome this problem and make fair comparisons, a framework using MCDM methods is proposed. In the framework, comprehensive performance metrics are established, in which both convergence and diversity are considered, and two MCDM methods are employed to evaluate six MOEAs. This gives fairer and more reliable comparisons than a single metric.

The rest of this paper is organized as follows: Section 2 proposes the framework, in which six algorithms, five performance metrics, and two MCDM methods are briefly introduced. Experiments are presented in Section 3 and conclusions are drawn in Section 4.

Evaluation Framework A framework to evaluate multiobjective algorithms is proposed in Figure 1. Six MOEAs, five performance metrics, and two MCDM methods are employed in the framework.

Six MOEAs (1) NSGAII [8]. NSGAII was proposed to address the high computational complexity, lack of elitism, and need to specify the sharing parameter of NSGA. In NSGAII, a selection operator is designed by creating a mating pool that combines the parent and offspring populations. Nondominated sorting and crowding-distance ranking are also implemented in the algorithm. (2) PAES [2]. The Pareto archived evolution strategy (PAES) is a simple evolutionary algorithm. The algorithm is considered a (1 + 1) evolution strategy, employing local search from a population of one but using a reference archive of previously found solutions in order to identify the approximate dominance ranking of the current and candidate solution vectors.
(3) SPEA2 [4]. The strength Pareto evolutionary algorithm (SPEA) was proposed in 1999 by Zitzler. Based on SPEA, an improved version, namely SPEA2, was proposed, which incorporates a fine-grained fitness assignment, a density estimation technique, and an enhanced archive truncation method. (4) MOEAD [9]. The multiobjective evolutionary algorithm based on decomposition (MOEAD) was proposed by Li and Zhang. It decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously. Each subproblem is optimized using only information from neighboring subproblems, which makes the algorithm effective and efficient. It won the outstanding paper award of the IEEE Transactions on Evolutionary Computation. (5) MOPSO [13]. The multiobjective particle swarm optimizer (MOPSO) is based on Pareto dominance and the use of a crowding factor to filter the list of available leaders. Different mutation operators act on different subdivisions of the swarm. The epsilon-dominance concept is also incorporated in the algorithm. (6) SMPSO [14]. The speed-constrained multiobjective PSO (SMPSO) was proposed in 2009. It produces effective new particle positions in cases where the velocity would otherwise become extremely high. A turbulence factor is added, and an external archive is designed to store the nondominated solutions found during the search.

Performance Metrics. Nowadays, there are many metrics to measure the performance of MOEAs. Among them, the following five are widely employed; they reveal the convergence and diversity of MOEAs very well. However, many studies employ only a few of them to evaluate algorithms and argue that their proposed algorithms are the best. In fact, it is unfair to draw such conclusions without comprehensive metrics and evaluations. Therefore, these five metrics are selected to make comprehensive comparisons.

(1) GD. The generational distance is

$$\mathrm{GD} = \frac{\sqrt{\sum_{i=1}^{n} d_i^2}}{n},$$

where d_i = min ‖F(x_i) − PF_true‖ is the distance between the nondominated solution F(x_i) and the nearest Pareto-front solution in objective space. It measures the closeness of the solutions to the real Pareto front. If GD equals zero, all the nondominated solutions generated are located on the real Pareto front; hence, a lower GD value indicates better performance [20]. (2) IGD. PF_true is a set of uniformly distributed points in the objective space, P is the nondominated solution set obtained by an algorithm, and the distance from PF_true to P is defined as

$$\mathrm{IGD} = \frac{\sum_{v \in PF_{\mathrm{true}}} d(v, P)}{|PF_{\mathrm{true}}|},$$

where d(v, P) is the minimum Euclidean distance between v and the points in P. Algorithms with smaller IGD values are desirable [21,22]. (3) Hypervolume. This metric calculates the volume (in the objective space) covered by the members of the nondominated solution set obtained by an MOEA, where all objectives are to be minimized [16]. The hypervolume can be calculated as

$$\mathrm{HV} = \mathrm{volume}\Big(\bigcup_{i=1}^{n} v_i\Big),$$

where v_i is the hypercube spanned by solution i and a chosen reference point. The larger the HV value is, the better the algorithm. (4) Spacing. The spacing metric measures how uniformly the nondominated set is distributed. It can be formulated as

$$S = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(\bar d - d_i)^2},$$

where d_i is the same as in the GD metric, d̄ is the average value of the d_i, and n is the number of individuals in the nondominated set. The smaller the spacing, the better the algorithm performs [23,24]. (5) Maximum Pareto Front Error. It measures the worst case and can be formulated as

$$\mathrm{MPFE} = \max_i d_i,$$

where d_i is the same as employed in GD; MPFE is the largest distance among these d_i. The lower the MPFE value is, the better the algorithm [25].
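The five metrics are simple to compute once an obtained front and a sampled true front are available. The sketch below is a minimal NumPy implementation under the definitions just given (with distances measured to the true front for GD, spacing, and MPFE, as in the text); the two-dimensional hypervolume routine and the ZDT1-like test front are illustrative choices of ours.

```python
import numpy as np

def _d(front, pf_true):
    # distance from each obtained point to the nearest true-front point
    return np.array([np.linalg.norm(pf_true - p, axis=1).min() for p in front])

def gd(front, pf_true):
    d = _d(front, pf_true)
    return np.sqrt((d ** 2).sum()) / len(d)

def igd(front, pf_true):
    # average distance from each reference point to the obtained front
    return np.array([np.linalg.norm(front - v, axis=1).min() for v in pf_true]).mean()

def spacing(front, pf_true):
    d = _d(front, pf_true)
    return np.sqrt(((d - d.mean()) ** 2).sum() / (len(d) - 1))

def mpfe(front, pf_true):
    return _d(front, pf_true).max()

def hv_2d(front, ref):
    # 2-D hypervolume w.r.t. reference point `ref`, both objectives minimized
    area, f2_prev = 0.0, ref[1]
    for f1, f2 in front[np.argsort(front[:, 0])]:
        if f2 < f2_prev:                    # skip dominated points
            area += (ref[0] - f1) * (f2_prev - f2)
            f2_prev = f2
    return area

x = np.linspace(0.0, 1.0, 101)
pf = np.column_stack([x, 1 - np.sqrt(x)])   # ZDT1-like true front
front = pf[::10] + 0.01                     # an "obtained" front, slightly off
print(gd(front, pf), igd(front, pf), spacing(front, pf),
      mpfe(front, pf), hv_2d(front, np.array([1.1, 1.1])))
```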
In order to elaborate the five metrics, Figure 2(a) shows the distances used in GD, spacing, and MPFE, Figure 2(b) presents the distance used in the IGD metric, and Figure 2(c) depicts the HV metric.

TOPSIS. TOPSIS is one of the MCDM methods for evaluating alternatives. In TOPSIS, the best alternative should have two characteristics: it is the farthest from the negative-ideal solution and the nearest to the positive-ideal solution. The negative-ideal solution maximizes the cost criteria and minimizes the benefit criteria; it has all the worst values attainable for the criteria. The positive-ideal solution minimizes the cost criteria and maximizes the benefit criteria; it consists of all the best values attainable for the criteria [26,27]. TOPSIS consists of the following steps. Step 1 (obtain the decision matrix). If the number of alternatives is J and the number of criteria is n, a decision matrix with J rows and n columns is obtained as in Table 1. Step 2 (normalize the decision matrix) and Step 3 (weight it): the columns are vector-normalized and multiplied by the criteria weights. Step 4 (find the negative-ideal and positive-ideal solutions): for each criterion, the positive-ideal solution A⁺ takes the best attainable value and the negative-ideal solution A⁻ the worst, where the worst is the maximum for cost criteria and the minimum for benefit criteria. Step 5 (calculate the n-dimensional Euclidean distances): the separation of each algorithm from the ideal solution is D⁺_j, and the separation from the negative-ideal solution is D⁻_j. Step 6 (calculate the relative closeness to the ideal solution): the relative closeness of the jth algorithm is defined as CC_j = D⁻_j/(D⁺_j + D⁻_j). Step 7 (rank the algorithms): CC_j lies between 0 and 1, and the larger CC_j is, the better the algorithm.

VIKOR Method. VIKOR was proposed by Opricovic and Tzeng [28–31]. The method is developed to rank and select from a set of alternatives; a multicriteria ranking index is introduced based on the idea of closeness to the ideal solutions. VIKOR requires the following steps. Step 1 (determine the best and worst values of all criteria), where f_ij is the value of the ith criterion for alternative j, n is the number of criteria, and J is the number of alternatives. Step 2. S_j and R_j (j = 1, 2, ..., J), which measure group utility and individual regret, are formed as the weighted sum and the weighted maximum of the normalized regrets, where w_i is the weight of the ith criterion; S and R are employed to measure the ranking. Step 3. Compute the values Q_j (j = 1, 2, ..., J), where the alternative attaining S* has maximum group utility, the alternative attaining R* has minimum individual regret of the opponent, and v, the weight of the strategy of the majority of criteria, is often set to 0.5. Step 4. Rank the alternatives in decreasing order, producing three ranking lists by S, R, and Q, respectively. Step 5. The alternative a′ ranked best by Q is considered the best if the following two conditions are met: C1 (acceptable advantage): Q(a″) − Q(a′) ≥ 1/(J − 1), where a″ is the alternative in second position in the ranking list by Q and J is the number of alternatives; C2 (acceptable stability): alternative a′ must also be the best ranked by S or R.

Experiments The experiments are designed to evaluate the above six algorithms. In order to make fair comparisons, thirteen benchmark test functions that are widely used for MOPs are employed in the experiments. They can be divided into two groups, the ZDT and WFG suites; in all of them the objectives are to be minimized. Detailed information is given in Table 2 [32,33]. The mathematical forms of the WFG suite can be found in [32]; the ZDT functions used here are, in their standard form,

ZDT1: f_1(x) = x_1, f_2(x) = g(1 − √(f_1/g)), with g(x) = 1 + 9(Σ_{i=2}^{n} x_i)/(n − 1);
ZDT2: as ZDT1 but with f_2(x) = g(1 − (f_1/g)²);
ZDT3: as ZDT1 but with f_2(x) = g(1 − √(f_1/g) − (f_1/g) sin(10π f_1));
ZDT6: f_1(x) = 1 − exp(−4x_1) sin⁶(6π x_1), f_2(x) = g(1 − (f_1/g)²), with g(x) = 1 + 9[(Σ_{i=2}^{n} x_i)/(n − 1)]^{0.25}.

The parameter settings of the algorithms are the same as in the original papers. The maximum number of function evaluations is set to 25,000. Each algorithm is run thirty times and the average values of the performance metrics are reported.
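Before turning to the results, the TOPSIS steps above can be condensed into a few lines. The sketch below follows Steps 1–7 under equal weights; the 6 × 5 decision matrix is made up for illustration and is not the paper's Table 3.

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives (rows of X) over criteria (columns); returns CC."""
    R = X / np.sqrt((X ** 2).sum(axis=0))          # Step 2: vector normalization
    V = R * w                                      # Step 3: weighting
    a_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))   # Step 4: ideals
    a_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - a_pos, axis=1)      # Step 5: separations
    d_neg = np.linalg.norm(V - a_neg, axis=1)
    return d_neg / (d_pos + d_neg)                 # Step 6: closeness CC

# illustrative 6 x 5 matrix, rows = algorithms, columns = GD, IGD, HV,
# spacing, MPFE; only HV is a benefit criterion
X = np.array([[0.020, 0.060, 0.40, 0.050, 0.10], [0.900, 0.950, 0.38, 0.850, 0.90],
              [0.030, 0.070, 0.41, 0.060, 0.12], [0.100, 0.200, 0.39, 0.150, 0.30],
              [0.040, 0.080, 0.40, 0.070, 0.15], [0.010, 0.050, 0.42, 0.040, 0.08]])
cc = topsis(X, np.full(5, 0.2), np.array([False, False, True, False, False]))
print(np.argsort(-cc))                             # Step 7: best first
```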
Results. In order to elaborate the whole calculation process, the ZDT1 results for the five metrics are presented in Table 3. For SMPSO, the four metrics GD, IGD, MPFE, and spacing are the smallest and the hypervolume is the largest; PAES is the worst because all five metrics are the worst among the six algorithms. The normalized decision matrix of the five performance metrics is presented in Table 4. Suppose that every weight equals 1/5. Then, according to Table 4, the positive-ideal and negative-ideal solutions can be defined as follows:

A⁺ = {0.0163, 0.0553, 0.0239, 0.0410, 0.4150} × (1/5); A⁻ = {0.9939, 0.9864, 0.9981, 0.9115, 0.3805} × (1/5). (20)

Then the distances D⁺ and D⁻ are calculated according to Eqs. (10) and (11) and are shown in Table 5. The global performance of each algorithm is determined by CC, calculated by Eq. (12) and also presented in Table 5. Therefore, the ranking of the six algorithms is as follows: SMPSO > SPEA2 > MOPSO > NSGAII > MOEAD > PAES; SMPSO is the best algorithm and PAES is the worst one for ZDT1.

For the VIKOR method, the Q, S, and R values are calculated and presented in Table 6. According to the features of Q, S, and R, SMPSO is the best one while PAES is the worst one, and SPEA2 is better than MOEAD. However, as the condition Q(a″) − Q(a′) ≥ 1/(6 − 1) = 0.2 cannot be satisfied, the S values are used to determine the ranking among NSGAII, SPEA2, MOEAD, and MOPSO. Therefore, the ranking among the six algorithms is SMPSO > SPEA2 > MOPSO > NSGAII > MOEAD > PAES.

However, the TOPSIS and VIKOR methods give different rankings for WFG1, WFG6, and WFG7. Take WFG1 as an instance. The final values from TOPSIS and VIKOR are presented in Table 9. As there are six algorithms, J is set to six and 1/(J − 1) = 1/(6 − 1) = 0.2, which means that the Q-value difference between two consecutively ranked algorithms should be at least 0.2; otherwise, the rank between the two algorithms is determined by S or R. From Table 9, it can be noticed that this condition is not met between NSGAII and SPEA2, so the S values are used to compare the two algorithms. The S value of NSGAII is smaller than that of SPEA2, so NSGAII is better than SPEA2: NSGAII is first and SPEA2 second. TOPSIS, however, directly uses CC as the ranking criterion; the CC value of SPEA2 is larger than that of NSGAII, so SPEA2 is ranked first and NSGAII second by the TOPSIS method.

Discussion. To make further comparisons, the best- and worst-performing of the above six algorithms are selected, and the nondominated solutions obtained by these two kinds of algorithms are depicted in Figures 3–5. For WFG1, NSGAII and SPEA2 achieve the best rankings. ZDT6 is a biased function, as the first objective value is large compared to the second one; MOEAD obtains superior results on it, so if a problem has this feature, MOEAD should be chosen. From Tables 7 and 8, the no-free-lunch theorem can also be observed: any performance gain of an optimization algorithm over one class of problems is exactly paid for by a loss over another class. No algorithm achieves the best or the worst performance on all test functions.
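The VIKOR side of this comparison, including the acceptable-advantage check Q(a″) − Q(a′) ≥ 1/(J − 1) used above, can be sketched the same way; the decision matrix is again the made-up one from the TOPSIS sketch, and v = 0.5 as in the text.

```python
import numpy as np

def vikor(X, w, benefit, v=0.5):
    """VIKOR S (group utility), R (individual regret), Q; smaller is better."""
    f_best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    f_worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
    D = w * (f_best - X) / (f_best - f_worst)   # weighted normalized regrets
    S, R = D.sum(axis=1), D.max(axis=1)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return S, R, Q

# the same illustrative 6 x 5 matrix as in the TOPSIS sketch
X = np.array([[0.020, 0.060, 0.40, 0.050, 0.10], [0.900, 0.950, 0.38, 0.850, 0.90],
              [0.030, 0.070, 0.41, 0.060, 0.12], [0.100, 0.200, 0.39, 0.150, 0.30],
              [0.040, 0.080, 0.40, 0.070, 0.15], [0.010, 0.050, 0.42, 0.040, 0.08]])
S, R, Q = vikor(X, np.full(5, 0.2), np.array([False, False, True, False, False]))
order = np.argsort(Q)                                 # candidate ranking by Q
c1 = Q[order[1]] - Q[order[0]] >= 1 / (len(Q) - 1)    # C1: acceptable advantage
print(order, c1)                                      # if not c1, fall back to S (or R)
```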
Conclusions There are many MOEAs. When a multiobjective optimization algorithm is proposed, the experimental results often indicate that the algorithm is competitive based on one or two performance metrics. Generally, such comparisons are unfair and the results are unreliable. In order to make fair comparisons and rank MOEAs, a framework is proposed to evaluate them. The framework employs six well-known MOEAs, five performance metrics, and two MCDM methods. The six MOEAs are NSGAII, PAES, SPEA2, MOEAD, MOPSO, and SMPSO. The five performance metrics are GD, IGD, MPFE, spacing, and hypervolume, through which both the convergence and the diversity of the nondominated solutions are fully considered. The two MCDM methods are TOPSIS and VIKOR.

The results indicate that SPEA2 is the best algorithm and PAES the worst. However, SPEA2 does not perform well on all test functions, and PAES likewise does not achieve the worst performance on all of them; the experimental results are consistent with the no-free-lunch theorem. What is more, the results show that the ability of MOEAs to solve MOPs depends on both the algorithms and the features of the MOPs.

Figure 2: The distances and nondominated solutions used in the above metrics. Table 1: The multiple-attribute decision matrix. Table 3: Results of the five metrics for ZDT1. Table 4: Normalized decision matrix of the five performance metrics. Table 6: The results of Q, S, and R from VIKOR. Table 9: The CC, Q, S, and R values from TOPSIS and VIKOR.
Analysis of polling models with a self-ruling server Polling systems are systems consisting of multiple queues served by a single server. In this paper, we analyze polling systems with a server that is self-ruling, i.e., the server can decide to leave a queue, independent of the queue length and the number of served customers, or stay longer at a queue even if there is no customer waiting in the queue. The server decides during a service whether this is the last service of the visit, after which it leaves the queue, or a regular service, possibly followed by other services. The characteristics of the last service may differ from those of the other services. For these polling systems, we derive a relation between the joint probability generating functions of the number of customers at the start and, respectively, at the end of a server visit. We use these key relations to derive the joint probability generating function of the number of customers and the Laplace transform of the workload in the queues at an arbitrary time. Our analysis in this paper is a generalization of several models, including the exponential time-limited model with preemptive-repeat-random service, the exponential time-limited model with non-preemptive service, the gated time-limited model, the Bernoulli time-limited model, the 1-limited discipline, the binomial gated discipline, and the binomial exhaustive discipline. Finally, we apply our results to an example of a new polling discipline, called the 1 + 1 self-ruling server, with Poisson batch arrivals. For this example, we compute numerically the expected sojourn time of an arbitrary customer in the queues.

Introduction Polling systems are systems consisting of multiple queues served by shared servers. In recent years, polling models have been used to model many real-life systems; for instance, traffic light systems, product-assembly systems, and wireless communication systems have been modeled as polling systems. Good surveys on a broad class of polling models and their analysis can be found in, for example, [2,14–16]. In the analysis of polling systems, the standard method models the system at specific time points as a Markov chain and then relates the states at these points; see [7]. The kernel relation within this method expresses the joint queue lengths at the end of a server visit to queue i, denoted by Q^(i), as a function of the joint queue lengths at the start of the visit to Q^(i). This relation can be written in the following general form:

β^(i)(z) = F_i(α^(i))(z), (1)

where β^(i)(z) is the joint probability generating function (p.g.f.) of the queue lengths at the end of a server visit to Q^(i), α^(i)(z) is the joint p.g.f. of the queue lengths at the start of a server visit to Q^(i), and F_i is an operator representing the mapping between the queue lengths at these time points, which depends on the assumed service discipline. The next step in the analysis is to relate the joint p.g.f. of the queue lengths at the start of a server visit to Q^(i) to the joint p.g.f.'s of the queue lengths at the end of the server visits to Q^(j), j = 1, ..., M, where M denotes the number of queues in the system, for example,

α^(i)(z) = G_i(β^(1), ..., β^(M))(z), (2)

where G_i is an operator representing the mapping between the queue lengths at the end of a visit to a queue and at the beginning of a visit to Q^(i), which incorporates the effect of the switchover times and the routing of the server. We refer to [1,6] for the incorporation of Eqs. (1) and (2) into a numerical iterative framework to compute the joint queue length probabilities.
However, in this paper, we do not consider the complete polling system and focus only on the relation in Eq. (1). We assume throughout the paper that the polling system is stable, i.e., we assume that all the processes under consideration have a proper limiting distribution. Here, we remark that stating and proving sufficient and/or necessary conditions for the stability of the polling systems considered in this paper is a study on its own. In this paper, we concentrate on F_i(α^(i))(·), which relates the joint p.g.f. of the queue lengths at the end of a server visit to Q^(i) to the joint p.g.f. of the queue lengths at the start of the visit to this queue for queues with a self-ruling server. A self-ruling server can decide to leave a queue, independent of the queue length and the number of served customers, and can stay longer at a queue even if there is no customer waiting in the queue. During a service, the server decides with probability p^(i) that this is the final service of the visit and that it will leave the queue after this service; otherwise, it is called a regular service, which is possibly followed by other services during the same server visit. After service, customers can join another queue or leave the system; moreover, new customers are added to the queues, independently of other customers. These new customers are called replacements. Regular customers are replaced, stochastically, in the same way; the replacement of the final customer can have a different distribution, for example, due to being interrupted in time-limited systems. This is the reason we assume that the server decides during the service whether it will be the final one. When the queue empties before the server has decided on the final service, extra customers might be added to the queues: the server decides either to leave, with probability p^(i)_I, or to stay and serve more customers during the same visit. We assume that p^(i) + p^(i)_I > 0, which implies that a server visit to Q^(i) always ends. The distribution of the extra customers in the queues depends on the choice of the server to leave or stay after being idle. We find the relation F_i(α^(i))(·) for service disciplines of the branching type when the server never decides that the ongoing service is the final one, i.e., p^(i) = 0. The so-called branching property, see [11,12], plays an important role in the analysis of polling systems. Polling systems with service disciplines satisfying this property, such as the exhaustive, gated, or Bernoulli disciplines, can be analyzed exactly and have results in explicit form. The analysis of disciplines that do not satisfy the branching property, such as the time-limited and K-limited disciplines, is usually restricted to special cases, approximations, or numerical methods. In the description of the self-ruling server, we did not focus on how the customers are generated; in the following, we describe the various models of generating customers in more detail. In the most general case, we assume that the server will serve the customers who are present at the beginning of the visit unless it decides to leave before serving them all. After the service of a customer, new customers are put in the queues, the so-called indirect replacements. We do not specify how these indirect replacements are generated; we only know the joint distribution of the number of indirect replacements at the queues.
These indirect replacements have to wait for a new visit of the server before they are served. As a second model, we assume that if the queue becomes empty before the server has decided to leave, a number of new customers are added to the queue, some of which might be served during the same visit, the so-called direct replacements; these potential customers to be served are treated as if they were there at the beginning of the visit. Again, we do not specify how these new customers are generated after the queue empties; we only assume that we know the joint distribution of the number of new customers at every queue. In the next model, the so-called service-based discipline, we assume that, after the service of a customer, we have not only indirect replacements but also direct replacements, which are customers who are served directly after the service ends, during the same visit. Note that after the final service there are no direct replacements. Also in this model, we do not specify how the new customers are generated; we only assume that the joint distribution of the number of direct and indirect replacements is known. As a final model for the replacements, we focus on the service-based discipline, where we now assume that customers arrive according to a batch Poisson process and that we know the service-time distributions. In this paper, we derive the relation between the joint p.g.f.'s of the number of customers in the queues at the start and the end of a visit to a queue. Based on these relations, we also find the joint Laplace transform of the workload at an arbitrary time in the queues and the expected sojourn time of an arbitrary customer in the queues. Many polling models found in the literature can be reformulated as an SRS polling model, for example, the 1-limited discipline (take p^(i) = 1), the binomial gated discipline (take the SRS discipline with p^(i) = 0 and no direct replacements), and the binomial exhaustive discipline (take the SRS discipline with p^(i) = 0 where the server leaves immediately, without any replacements, after becoming idle). The examples also extend to exponential time-limited models (where p^(i) is the probability that the service of a customer is interrupted) such as the exponential time-limited model with preemptive-repeat-random service, the exponential time-limited model with non-preemptive service, the gated time-limited model, and the Bernoulli time-limited models. The paper is structured as follows: In Sect. 2, we explain the general polling model with a self-ruling server. In Sect. 3, we analyze this general model and describe and analyze some more specific systems. Section 4 relates our work to the so-called exponentially time-limited polling models, which were studied in [1,5,6,8]; their extensions in [9] are treated in Sect. 5. In Sect. 6, we use the key relations derived in the previous sections to find expressions for the joint p.g.f. of the queue lengths and the joint Laplace transform of the workload at an arbitrary time in the queues. Finally, in Sect. 7, we apply our results in a numerical example of the 1 + 1 self-ruling discipline with Poisson batch arrivals; for this example, we compute numerically the expected queue length at various epochs and the expected sojourn time of an arbitrary customer.

Model and notation Let us consider a polling system with a single server and M ≥ 1 queues. The server visits the queues according to a routing schedule which we do not further specify, since our focus is on the relation in Eq. (1).
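To make the SRS mechanics concrete before the formal notation, the sketch below simulates one simple instance of the discipline: cyclic routing, Poisson arrivals, exponential services, a per-service final decision with probability p^(i), and, when the served queue empties, departure with probability p_idle (our stand-in for p^(i)_I) or waiting for the next arrival otherwise. All parameter values and the specific idle rule are illustrative choices, not taken from the paper.

```python
import random

def simulate_srs(M=2, lam=(0.30, 0.20), mu=(1.0, 1.2), p=(0.2, 0.2),
                 p_idle=(0.5, 0.5), switch=0.5, horizon=2e5, seed=42):
    """Event-driven sketch of a cyclic SRS polling system: Poisson arrivals,
    exponential services, deterministic switchovers. Each service is 'final'
    with probability p[i]; when the served queue empties, the server leaves
    with probability p_idle[i], else waits for the next arrival there."""
    rng = random.Random(seed)
    exp = lambda rate: rng.expovariate(rate)
    INF = float('inf')
    nxt = [exp(lam[j]) for j in range(M)]   # next arrival epoch per queue
    q, area = [0] * M, [0.0] * M            # queue lengths, time-integrals
    t, i = 0.0, 0                           # clock, queue being visited
    phase, t_evt = 'switch', switch         # current phase and its end epoch

    def start_or_leave():
        nonlocal phase, t_evt, i
        if q[i] > 0:
            phase, t_evt = 'serve', t + exp(mu[i])
        elif rng.random() < p_idle[i]:      # leave the empty queue
            i = (i + 1) % M
            phase, t_evt = 'switch', t + switch
        else:                               # stay idle until an arrival here
            phase, t_evt = 'wait', INF

    while t < horizon:
        j = min(range(M), key=nxt.__getitem__)
        t_new = min(t_evt, nxt[j])
        for k in range(M):                  # queue lengths constant in between
            area[k] += q[k] * (t_new - t)
        t = t_new
        if nxt[j] < t_evt:                  # arrival at queue j
            q[j] += 1
            nxt[j] = t + exp(lam[j])
            if phase == 'wait' and j == i:
                start_or_leave()
        elif phase == 'serve':              # service completion at queue i
            q[i] -= 1
            if rng.random() < p[i]:         # this service was the final one
                i = (i + 1) % M
                phase, t_evt = 'switch', t + switch
            else:
                start_or_leave()
        else:                               # switchover to queue i finished
            start_or_leave()
    return [a / t for a in area]            # time-average queue lengths

print(simulate_srs())
```

By Little's law, the time-average queue lengths returned here, divided by the arrival rates, give the expected sojourn times, the quantity computed analytically in Sect. 7.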
During a visit to a queue, the server may decide to interrupt or prolong this visit independently of the queue length. More specifically, the server may decide, with probability p^(i), during the service of a customer, that this is the last customer to be served during this visit. When the queue becomes empty before the server decides to leave, the server may serve some extra customers using the same discipline. Furthermore, we assume that the underlying service discipline, that is, the service discipline when we assume that p^(i) = 0, is of branching type (see Sect. 2.1). We call this the self-ruling server (SRS) discipline. We start this section with a short introduction to the branching-type service discipline; in the last part of this section, we discuss the SRS discipline in more detail.

Branching-type service discipline In this subsection, we consider the branching-type discipline. The standard definition of a branching-type service discipline is as follows (cf. [11,12]): if there are N^(i)_si customers present at Q^(i) at the start of a visit, then during the course of the visit, each of these N^(i)_si customers will be replaced in an i.i.d. manner by a random population. Denote the number of indirect replacements for a customer in the queues by R^(i). Denote by N^(i)_s = (N^(i)_s1, ..., N^(i)_sM) the number of customers present in the queues at the start of a visit to Q^(i), and define its p.g.f. α^(i)(z). The number of customers in the queues after a server visit to Q^(i), N^(i)_e, then has a p.g.f. of the simple branching form in which the ith argument of α^(i) is replaced by the replacement p.g.f. H^(i)(z). In Sect. 3, we first concentrate on this general form of branching, where H^(i)(z) is not specified.

The self-ruling server Our focus is on the behavior of queues with the self-ruling server discipline. According to this SRS discipline, the server decides during a service, with probability p^(i), whether this is the final customer to be served during this visit to Q^(i). The customers that are served before the final customer are called regular customers. Let N^(i)_V denote the number of regular customers that are served during the server visit. In a queue with the SRS discipline, at most a geometrically distributed number of customers, with mean 1/p^(i), is served. After (or during) the service of a customer, the lengths of all the queues can increase by a random amount; the extra customers are called the indirect replacement of a customer. The indirect replacements for regular customers are stochastically equivalent; the indirect replacement of a final customer may have a different distribution. The indirect replacements of all the customers are independent. Furthermore, the indirect replacements at Q^(i) are not served during the ongoing server visit and have to wait for a new server visit. Let 1_E denote the indicator function of an event E, and let F denote the event that the customer being served is the final customer. Define the p.g.f. of R^(i) on the event that it is a final customer by H^(i)_−(z) := E[z^{R^(i)} 1_F] and the p.g.f. of R^(i) on the event that it is a regular customer by H^(i)_+(z) := E[z^{R^(i)} 1_{F^c}]. (3) Note that H^(i)_−(1) = p^(i) and H^(i)_+(1) = 1 − p^(i). When the server becomes idle before serving the final customer, the server will serve S^(i)_X extra customers during the same visit, as if they were present at the start of the visit, and R^(i)_X additional customers are added to all the queues. If no extra customers are served, i.e., S^(i)_X = 0, the server leaves Q^(i). The joint p.g.f.'s of (R^(i)_X, S^(i)_X) on the events that the server leaves Q^(i) and that the server starts another service are denoted by H^(i)_X−(z) and H^(i)_X+(z, z̃), respectively. (4) For convenience, we introduce the exhaustive self-ruling server (E-SRS) discipline.
This E-SRS discipline is similar to the SRS discipline, but the server immediately ends a visit as soon as it becomes idle. So, in the E-SRS discipline, H^(i)_X(z, z̃) = 1, which states that there are neither extra customers to be served nor additional customers at the queues after the server becomes idle. In the following section, we analyze polling systems with the self-ruling server discipline. Note that we will not specify exactly how the indirect replacement of a customer is implemented, except that the replacements are independent. A specific form of indirect replacements is considered in Sect. 3.3.

Analysis of polling systems under the SRS discipline In this section, we focus on Eq. (1), which relates the joint p.g.f. of the queue lengths at the end of a visit to the joint p.g.f. of the queue lengths at the start of a visit. We present results for the self-ruling server discipline with the general branching-type discipline in Sect. 3.1. In Sect. 3.2, we specify the replacement process of a customer by considering the so-called service-based branching-type service discipline. In this discipline, a customer does not only have an indirect replacement population R^(i)_X but also a direct replacement by customers that may be served during the ongoing server visit. As an example of such a system, consider a task at a certain queue. During the service of a task, a new task may be generated, either at the same queue or at other queues. Some of the tasks generated at the queue where the server is have to be done during the same visit; others can wait for a next visit. In the previous models, we did not focus on how replacements are generated. In Sect. 3.3, a further specification of this generating process is given, where replacements arrive during the service of a customer according to a batch Poisson process.

General branching-type discipline In this part, we focus on the general form of branching, where the replacement p.g.f.'s H^(i)_±(z) are not further specified.

Lemma 1 For a polling system operating under an exhaustive self-ruling single-server discipline, the relation between the joint p.g.f.'s of the queue lengths at the start and the end of a server visit to Q^(i) reads

β^(i)(z) = Ψ^(i)(z) + H^(i)_−(z) (Ψ^(i)(z) − α^(i)(z)) / (H^(i)_+(z) − z_i), (5)

where Ψ^(i)(z) := α^(i)(z_1, ..., z_{i−1}, H^(i)_+(z), z_{i+1}, ..., z_M).

Proof Consider a server visit to Q^(i). Denote the total indirect replacement at the end of the server visit by R^(i)_T; the T is added here to indicate that it is the total indirect replacement for all customers that are present at the start of the visit, as opposed to the indirect replacement of a single customer in Eq. (3). Given N^(i)_si = n, either all n customers are served as regular customers, or the kth served customer (k ≤ n) is the final one, where R^(i)_− denotes the indirect replacement of the final customer and the remaining n − k customers stay at Q^(i). Summing the corresponding contributions and unconditioning on N^(i)_s, we find Eq. (5).

In contrast with the E-SRS discipline, under the general SRS discipline the server may still serve extra customers at Q^(i) even after the server becomes idle.

Theorem 1 For a polling system operating under a self-ruling server discipline, the relation between the joint p.g.f.'s of the queue lengths at the start and the end of a server visit to Q^(i) reads as in Eq. (6), with auxiliary functions γ^(i)_−(z) and γ^(i)_+(z) that capture the behavior after the server first becomes idle.

Proof Consider a visit of the server to Q^(i) with initially N^(i)_si customers. During this visit, it may occur that (i) the server decides that one of the N^(i)_si customers is the final customer, or (ii) the server serves all N^(i)_si customers as regular customers.
In case (i), it is readily seen that the indirect replacement process, that is, R^(i)_T, the number of additional customers in the queues, is identical for the E-SRS and the SRS discipline. However, in case (ii), the indirect replacement process is different for each discipline. Under the E-SRS discipline, the server immediately leaves when the queue becomes empty; say this occurs at time t_0. Under the general SRS discipline, at time t_0 the server may remain at the queue, and a sequence of idle and busy periods will follow until eventually the server decides on a final customer or decides to serve no extra customers after an idle time (S^(i)_X = 0). This latter contribution (after t_0) to the indirect replacement process is represented in the terms γ^(i)_±. Observe that an idle server will leave a) without serving extra customers, b) during the busy period serving the extra customers, or c) after this subsequent busy period. This process is regenerative in the sense that if the server does not leave before the end of the first busy period following the idle period, then the process starts anew at that specific time instant. We then have a corresponding relation for γ^(i)_±: the first term is the p.g.f. of the indirect replacements on the event that the server leaves, see Eq. (4), and the other terms on the RHS are similar to Eq. (5), with the joint p.g.f. at the start of the busy period given by H^(i)_X+. This leads to Eq. (7).

Service-based branching-type discipline A special subclass of the general branching-type service discipline is the service-based branching-type discipline, which can be described as follows: At the start of a service period at Q^(i), there are N^(i)_si customers at this queue; these customers are called zeroth-generation customers. With probability q^(i)_1, such a customer goes to the server, independently of the other customers, to receive service and becomes a first-generation customer; otherwise, this customer waits for the next visit of the server. We assume that every customer of the first generation, immediately after being served, is replaced in a stochastically identical way and independently of the other customers. The replacing customers are split into two parts, namely direct replacements, which are customers that are served at Q^(i) in the same visit of the server, and indirect replacements, additional customers at all queues, including Q^(i), which are served in subsequent visits. The direct replacements of the first-generation customers are called second-generation customers. After being served, a second-generation customer is replaced, in an i.i.d. fashion, by third-generation customers that are served at Q^(i) during the same visit of the server and additional customers at all the queues, including Q^(i), to be served in subsequent visits. We can continue this construction for further generations. All customers linked to the same first-generation customer are called a family. Denote the number of additional customers that arrive during the service of an nth-generation customer at Q^(i) and are served during the ongoing server visit by S^(i)_n, and denote the additional customers that are not served during this visit by R^(i)_n. Note that the order of service of the customers does not influence the distribution of R^(i)_T, the total indirect replacement during the visit to Q^(i).
Furthermore, remark the difference between the total indirect replacement R^(i) of a family and the indirect replacement R^(i)_n of an nth-generation customer. In the case of this service-based branching-type discipline together with a self-ruling server, we assume that regular customers and final customers may have different joint distributions of the number of direct replacements and the indirect replacement population. We also make the assumption that, whether we consider a final or a regular customer, the total number of new customers at the queues per service, that is, the sum of the indirect and direct replacements, has the same distribution for every generation. By this assumption, the order in which customers are handled does not matter for the indirect replacement population when the server leaves, which can be seen as follows: Suppose that customers are served in one specific order. First assume that the server leaves without serving a final customer; then, in any generation, for all orders, the same number of customers have been fully served, so the indirect replacement population is the same. Next assume that the server leaves after a final customer; in this case, in all orders, the same number of customers will be served. The indirect replacement population added to the population in the queues at the start of the visit consists of all customers that were present at the start, plus the new customers after the regular services and the final service, minus the number of served customers. By assumption, the total number of new customers does not depend on the generation number of the served customers, so also in this case the indirect replacement population does not depend on the order of the customers. In the case where we have a service-based branching-type discipline, we can thus further specify H^(i)_±(z), and we can then use these specified p.g.f.'s to modify Th. 1. As remarked before, a customer present at the start of a visit is replaced by a set of customers at every queue in the system. Let us introduce the joint p.g.f.'s H^(i)_*−(z) and H^(i)_*+(z, z̃) of the indirect and direct replacements of a single service, where we use that, after a final customer, no other customers are served at Q^(i) during the same visit. Note that, by the assumption that per service the total number of new customers per queue does not depend on the generation n, both H^(i)_*−(z) and H^(i)_*+(z, z̃) do not depend on the generation n either. Also by this assumption, the order in which we serve the customers is not important for the indirect replacement population, and we can assume that we serve a first-generation customer and its complete family consecutively; within a family, we handle the customers generation by generation. Let G^(i)_n+(z, z̃) denote the joint p.g.f. of the direct and indirect replacements after serving the nth generation, on the event that the visit still continues in the next generation. By the assumption that a zeroth-generation customer visits the server with probability q^(i)_1, we obtain a recursion for G^(i)_n+(z, z̃) (Eq. (8)), where we use that the direct replacements of the nth generation are served as the (n + 1)th generation of a family. Let G^(i)_n−(z, z̃) denote the p.g.f. of the total population at the queues that will be served in later visits, on the event that the visit to Q^(i) was interrupted during the service of an nth-generation customer; an analogous relation then follows. Since the system is stable, lim_{n→∞} P(S^(i)_n = 0) = 1, and we get the joint p.g.f. of R^(i) for a customer with an offspring that is fully served as the limit in Eq. (9) (cf. Eq. (3)). Note that the limit on the RHS of Eq. (9) does not depend on the second argument z̃.
Because an interrupted family is interrupted in exactly one generation, we can sum the G^(i)_n− over the generations, where we use Eq. (9) and the assumption that both H^(i)_*−(z) and H^(i)_*+(z) do not depend on the generation.

Theorem 2 For a polling system operating under a service-based self-ruling server discipline where H^(i)_*− and H^(i)_*+ do not depend on the generation, the relation between the joint p.g.f.'s of the queue lengths at the start and the end of a server visit to Q^(i) reads as in Th. 1, with H^(i)_±(z) and γ^(i)_±(z) specified in terms of H^(i)_*− and H^(i)_*+.

Example 1 In this example, we introduce a special branching-type discipline, the 1 + 1 service discipline. In this discipline, all first-generation customers are served during a visit of the server. Of all the customers who arrive during the service time of the same first-generation customer, at most one is served during the same visit; all other second- or third-generation customers, if any, have to wait until the next server visit. By considering different scenarios, namely whether the first-generation customer was a final customer or not and, in the latter case, whether there was no arrival or at least one during its service time, we find the corresponding expressions for H^(i)_±(z). It is easily verified that this expression for H^(i)_−(z) can also be derived from Eqs. (10) and (12) by observing that the replacement distribution does not depend on the generation number n. Then, by taking a different order of serving the customers, namely by serving a customer's direct replacements directly after that customer, we find that a customer of the second generation has, stochastically, the same indirect replacement as a customer of the first generation. We then see that H^(i)_1+(z, z̃)|_{z̃=1} < 1, and we can prove, by using Rouché's theorem, that this equation for H^(i)_+(z) has exactly one solution for all z with |z_j| ≤ 1 for j = 1, ..., M.

Poisson arrivals We will go one step further in specifying the replacement p.g.f. H^(i)_+(z, z̃), by assuming that customers arrive to the system according to a batch Poisson process with rate λ, where a batch may split over several queues, and that the service times of individual customers follow a general distribution. In this system, a customer is replaced by the customers that arrive during its service. For a final customer, the service-time distribution may differ from that of the other customers; the Laplace-Stieltjes transform (LST) of the service time of the final customer at Q^(i) is denoted by T^(i)_S−(s) and that of the other customers by T^(i)_S+(s). We assume a Bernoulli discipline where a customer of the nth generation, if any, visits the server to be served with probability q^(i)_n, independently of the other customers. A customer that does not visit the server cannot be the final customer in the ongoing server visit; these customers are their own indirect replacement at Q^(i) and will be served in a future server visit. Note that if q^(i)_n = 0 then, effectively, there is no nth generation. A customer that is served at Q^(i) leaves the system or joins Q^(j); see, for example, [13]. For a final customer, the routing probabilities may differ from those of the other customers. As before, we assume that the service processes, the arrival process, the batch sizes, and the routing of customers are independent. Denote the numbers of simultaneously arriving customers at Q^(j) by A^(j) for j = 1, ..., M, and their joint p.g.f. by A(z) := E[z^A], where A = (A^(1), ..., A^(M)). Denote the number of customers that arrive at Q^(j) during the service of an nth-generation customer at Q^(i) by A^(j)_n, with corresponding joint p.g.f. A^(i)_n(z, z̃) (Eq. (13)); each of the arrivals at Q^(i) is served in generation n + 1 with probability q^(i)_{n+1}, by the assumption of the Bernoulli discipline.
By conditioning on the service times, we can calculate these p.g.f.'s explicitly (cf. Eqs. (9) and (10)). Under the SRS discipline, the server always moves to a next queue when the final customer has been served. However, when the queue becomes empty and the last customer was not the final one, the server remains idle at the queue for some random time. If, during this idle time, no new customers arrive at Q^(i), the server moves to another queue; otherwise, the server immediately starts serving again under the same SRS discipline, and if the server becomes idle for a second time at the same queue, it behaves, stochastically, the same as the first time it became idle. Note that under these assumptions Eq. (4) does not hold: if a batch arrives at Q^(i) and none of its customers is served, which occurs with probability A^(i)_1(0, 1), then the server does not leave but stays idle at Q^(i) for another period. Now, we can also specify H^(i)_X+(z, z̃) and H^(i)_X−(z) (see Eq. (4)), and the joint generating functions for the direct and indirect replacements after a period in which the server is idle, where the LST of the idle time is denoted by W^(i)_I(s).

Combining the above observations with Th. 2 leads to the following theorem.

Theorem 3 For a polling system with batch Poisson arrivals operating under a service-based self-ruling server discipline, the relation between the joint p.g.f.'s of the queue lengths at the start and the end of a server visit to Q^(i) reads as in Th. 2, with H^(i)_±(z) and γ^(i)_±(z) specified as above.

Remark 1 If the idle time has an exponential distribution with rate ξ^(i), that is, W^(i)_I(s) = ξ^(i)/(ξ^(i) + s), the expressions simplify accordingly.

Remark 2 If we use the definition in Eq. (4), that is, the server will either leave after the extra time or (really) start a new service, we would have to ignore the arrivals of the batches at Q^(i) that are not served, since they do not end the extra time; this gives modified expressions for H^(i)_X±.

Remark 3 In many papers on polling systems, it is assumed that customers arrive to the system according to M independent Poisson processes, at Q^(j) with rate λ_j, j = 1, ..., M. The generating function of the batch size at Q^(j) is denoted by B^(j)(z) := E[z^{B^(j)}]. In our framework, we then get λ = Σ_{j=1}^{M} λ_j and A(z) = Σ_{j=1}^{M} (λ_j/λ) B^(j)(z_j).

Exponential time-limited polling systems In this section, we study exponential time-limited polling systems with Poisson arrivals, where the total visit time of the server at a queue is at most an exponentially distributed time; if the timer expires during a service, the ongoing service is interrupted. We consider two types of time-limited disciplines: (a) the pure time-limited case (P-TL), in which the server visits a queue for an exponentially distributed time, and (b) the exhaustive time-limited discipline (E-TL), in which the server visits a queue for at most an exponentially distributed time but also leaves when the queue becomes empty. In the case of the P-TL discipline, customers that arrive at Q^(i) at the end of an idle period are served as if they were present at the start of the visit. We model these time-limited systems in the framework of the self-ruling server. To do so, we first specify p^(i) and T^(i)_S+(·), since these quantities depend neither on the effect of the interruption nor on the underlying branching-type discipline; afterwards, we also consider T^(i)_S−(·). Let ξ^(i) denote the rate of the exponential timer and T^(i)_S(s) the LST of an uninterrupted service time at Q^(i). By the memorylessness of the timer, it is readily seen that the probability that the next customer is the final one is given by

p^(i) = 1 − T^(i)_S(ξ^(i)), (14)

and that the LST of the service time of a regular customer is given by

T^(i)_S+(s) = T^(i)_S(s + ξ^(i)) / T^(i)_S(ξ^(i)). (15)
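A quick simulation makes the race between the exponential timer and a service transparent. The sketch below checks Eqs. (14) and (15) (via the conditional mean implied by (15)) for a gamma-distributed service time; the distribution and all numbers are illustrative choices, and it is the memorylessness of the timer that makes the same per-service race hold at every service of the visit.

```python
import random

def check_timer_race(shape=2.0, rate=3.0, xi=1.5, n=200_000, seed=7):
    """Service S ~ Gamma(shape, rate) races an exponential timer with rate xi.
    Compares the empirical interruption probability and the mean of a
    non-interrupted (regular) service with Eqs. (14) and (15)."""
    rng = random.Random(seed)
    n_int, tot_reg, n_reg = 0, 0.0, 0
    for _ in range(n):
        s = rng.gammavariate(shape, 1.0 / rate)
        if rng.expovariate(xi) < s:
            n_int += 1                      # timer expired: final customer
        else:
            tot_reg += s                    # completed: regular service
            n_reg += 1
    T = lambda u: (rate / (rate + u)) ** shape    # gamma LST T_S(u)
    print('P(final):', n_int / n, 'vs 1 - T_S(xi) =', 1 - T(xi))
    # mean regular service = -d/ds [T_S(s+xi)/T_S(xi)] at s=0 = shape/(rate+xi)
    print('E[regular S]:', tot_reg / n_reg, 'vs', shape / (rate + xi))

check_timer_race()
```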
By combining the formulas above with Th. 3 and Remark 1, we obtain the following result.

Theorem 4 For a polling system with batch Poisson arrivals operating under a branching-type service discipline combined with the exhaustive time-limited regime with the preemptive-repeat-random strategy, the relation between the joint p.g.f.'s of the queue lengths at the start and the end of a server visit to Q^(i) reads as in Th. 3, where the functions γ^(i)_±(z) take one form for the E-TL case and another for the P-TL case.

Remark 4 To apply Th. 4 to specific models, we need to find the functions T^(i)_S−(s) and H^(i)_±(z). As examples, we will specify these functions for two interruption rules and several branching-type strategies. For each combination of strategy and interruption rule, we can then easily find γ^(i)_−(z) and γ^(i)_+(z), with which we specify the relation between α^(i)(z) and β^(i)(z).

Interruption rules We consider two interruption rules, namely the preemptive-repeat-random strategy, i.e., the server immediately leaves after the timer expires, and at the next server visit a new service time is drawn from the original service-time distribution for the interrupted service; and the non-preemptive strategy, where the server finishes the ongoing service. For the preemptive rule, the LST of the service time of the final customer is given by

T^(i)_S−(s) = ξ^(i) (1 − T^(i)_S(s + ξ^(i))) / ((s + ξ^(i)) (1 − T^(i)_S(ξ^(i)))), (16)

and the interrupted customer stays at Q^(i) to be served anew, which determines the p.g.f. of its routing probabilities. For the non-preemptive rule, the LST of the service time of the final customer is given by

T^(i)_S−(s) = (T^(i)_S(s) − T^(i)_S(s + ξ^(i))) / (1 − T^(i)_S(ξ^(i))), (17)

and the p.g.f. of the routing probabilities is the same as for a regular customer.

Remark 5 Conjecture 5.13 in [4], page 112, is a special case of Th. 4 combined with the preemptive-repeat-random strategy and independent Poisson arrival streams at all queues (see Remark 3). This is readily seen from Eq. (16) and the corresponding expressions, for the P-TL case, for γ^(i)_±(z).

Branching-type strategy We can also vary the underlying branching-type service discipline. We focus on four disciplines: two well-known ones, the Bernoulli gated discipline and the Bernoulli exhaustive discipline, and two new ones, the 1 + 1 discipline introduced in Ex. 1 and the so-called 2G-gated discipline, where only first- and second-generation customers are served. Similar observations can be made for other branching-type service disciplines. First, consider the Bernoulli gated discipline, in which only first-generation customers are served; second-generation customers have to wait until the next server visit. In this case, finding the total indirect replacement of a customer is relatively simple: all the customers who arrive during its service are indirect replacements, so the replacement p.g.f. is simply that of the arrivals during a single service. Secondly, consider the Bernoulli exhaustive discipline, where every customer that arrives during a visit to Q^(i) is also served during the same visit with some fixed probability q^(i), the same for all generations. For the exhaustive case, the assumption that q^(i) does not depend on the generation means that A^(i)_n(z, z̃), defined in Eq. (13), is independent of the generation too; we therefore omit the index for the generation. Following the same steps as in Example 2, we find the corresponding fixed-point equation for H^(i)_+(z). The third example we focus on is the 1 + 1 discipline (see Ex. 1).
We find the corresponding expression by distinguishing the cases where a first-generation customer has direct replacements or not. As a last example of a branching-type discipline, we focus on the so-called Bernoulli 2G-gated discipline, in which only first- and second-generation customers are served; in the terminology of this paper, a second-generation customer has only indirect replacements. In this discipline, the joint p.g.f. of the total indirect replacement of a family follows directly.

Extension The SRS discipline discussed in this paper can be extended to cover a variant of the exponential time-limited queues introduced by Eliazar and Yechiali [9]. In that paper, the authors study exponential time-limited queues with the exhaustive service discipline, with preemptive-repeat interruptions and with non-preemptive interruptions. In addition, they consider the case where customers who are present in Q^(i) at the time of the interruption are served during the ongoing server visit, but all customers who arrive after that time have to wait for a new visit; the interrupted service itself is finished as normal. In the setting of the SRS, we would then have not only regular customers and a final customer but also so-called tail customers, whose indirect replacements may be stochastically different from those of the regular and the final customers. To indicate that a function is related to a tail customer, we use the subscript "=". We again give first the results for the general SRS model, then for the case of service-based indirect replacements, and, finally, the results for the exponentially time-limited systems. For the general case, it is easily verified that Eqs. (5) and (6) transform into analogous relations. Note that for the original variant we studied, the indirect replacement of a tail customer is just the customer itself, so H^(i)_=(z) = z_i. For the service-based indirect replacement discipline, it is a bit more complicated, since a final customer might have direct replacements that are served during the same visit, so we need to redefine H^(i)_n−(z) := E[z^{R^(i)_n} 1_F]. With this notation, we can rewrite Eq. (8) accordingly. Furthermore, we have to assume that the p.g.f.'s H^(i)_n=(z) do not depend on the generation n; under this assumption, the results and proofs are similar to those of the theorems in Sect. 3.2. Finally, we consider the exponential time-limited system with the Bernoulli exhaustive discipline, where the probability q^(i) that a customer will visit the server does not depend on the generation of the customer. In this system, a timer interrupts the normal service; after the timer expires, all customers already present at Q^(i) will still be served (each with probability q^(i)), whereas customers that arrive during the visit after the interruption have to wait for the next visit of the server. In this case, the probability that the next, non-tail, customer is interrupted is given by Eq. (14), and the LST of the service time of a regular customer by Eq. (15). The service-time LST of a tail customer is given by T^(i)_S(s), and H^(i)_=(z) follows. The LST of the final customer is given by Eq. (17); however, we have to distinguish between the part of the service before the interruption and the part after it. By choosing the transform arguments appropriately, in particular s_2 = Λ(z), we find the joint p.g.f. of the numbers of indirect replacements of an interrupted customer. With these observations, we can find results similar to those in Sect. 4.

Queue length at the start of a service and customer sojourn time In this section, we focus on a practical application of the results developed in this paper.
of the joint queue length distribution at the start of an arbitrary service for the system with the general branching-type discipline (see Sect. 3.1). We also derive this joint p.g.f. for the continuous-time system with the service-based branching-type discipline (see Sect. 3.3). Once we have obtained this p.g.f., we can use the techniques in [3] to find, for example, the expected sojourn time of a customer at Q (i) .

The queue length at the start of a service

In this section, we consider the system of Sect. 3.1. To find the p.g.f. of the joint queue length distribution at the start of an arbitrary visit, we need to specify the routing of the server and the switchover times of the server between queues. For convenience, we assume a cyclic routing of the server, i.e., the server goes from Q (i) to Q (i+1) , where Q (M+1) denotes Q (1) . The numbers of new customers arriving during switchovers form an independent process, and the joint p.g.f. of the number of new customers during the switchover from Q (i) to Q (i+1) is denoted by U (i,i+1) (z). We can now specialize the key equation (2) to this setting. Together with the relation between α (i) (z) and β (i) (z) found in Sect. 3 for several variants of the SRS polling systems, we can then compute α (i) (z) numerically using an iterative algorithm; see, for example, [1].

Before we focus on the details of the derivation of the p.g.f. of the joint queue length distribution, we first determine the expectations of both N (j) V , the number of served customers during a visit to Q (j) , and N (j) X , the number of times the server becomes idle during that visit. To compute E[N (j) X ], observe that once the server becomes idle during a visit to Q (j) , the probability that it becomes idle again in the same visit equals H (j) + (1), where 1 is a vector of size M with all entries equal to one. Moreover, the probability that the server becomes free at least once during this visit equals α (j) (1 (j) ), where 1 (j) is a vector of size M with entries equal to one except for the jth entry, which equals H (i) j+ (1). Therefore, E[N (j) X ] follows from a geometric argument. Next, since we have a stable system, E[N (j) V ] equals the expected number of customers that arrive at Q (j) between the starts of two consecutive server visits to Q (j) . Customers enter the system either during a service, during an extra time, or during a switchover time. This gives us a set of linear equations for the E[N (j) V ]. Note that this system has a unique solution due to the stability assumption.
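For concreteness, the following small numerical sketch (ours) solves one concrete reading of this set of equations, in which the expected number of arrivals at Q (j) between consecutive visits equals λ j times the expected cycle length, with the cycle built from services, extra (idle) times, and switchovers; every number below is an illustrative placeholder, not a parameter from the paper.

import numpy as np

# Minimal sketch (ours): solve E[N_V^(j)] = lam_j * E[T_C] with
# E[T_C] = sum_i ( E[N_V^(i)] E[S_i] + E[N_X^(i)] E[I_i] + E[U_i] ),
# i.e. a linear system in the E[N_V^(i)]. All values are placeholders.
lam = np.array([0.2, 0.3])    # arrival rates
ES  = np.array([1.0, 0.5])    # mean service times E[S_i]
EI  = np.array([2.0, 0.0])    # mean extra (idle) time per idle period
ENX = np.array([0.4, 0.0])    # expected idle periods per visit, E[N_X^(i)]
EU  = np.array([1.0, 1.0])    # mean switchover time after Q^(i)

const = float(ENX @ EI + EU.sum())          # non-service part of the cycle
A = np.eye(len(lam)) - np.outer(lam, ES)    # (I - lam ES^T) E[N_V] = lam * const
ENV = np.linalg.solve(A, lam * const)
ETC = const + float(ENV @ ES)
print("E[N_V] =", ENV, " E[T_C] =", ETC)

The system is solvable because the total load is below one, mirroring the uniqueness remark above.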
To find the p.g.f. of the joint queue length distribution at the start of an arbitrary service in Th. 5 below, we closely follow the arguments of Eisenberg in [7] and of de Haan in [4]. Define, for i = 1, . . ., M, the events corresponding to the starts and ends of services and of extra times at Q (i) . Next, consider the average of the generating functions of the joint queue length over the first k events that occur; by the assumption that the joint queue length process is ergodic, we can take the limit k → ∞, where, for example, S (i) s (z) denotes the p.g.f. of the joint queue length at an arbitrary service start at Q (i) , jointly with the event that a service starts at Q (i) . Note that S (i) s (1) equals the fraction of events that are a service start at Q (i) , and S (i) s (z)/S (i) s (1) represents the joint conditional p.g.f. of the queue length at an arbitrary service start at Q (i) , given that a service starts at Q (i) . Analogously, we define the conditional p.g.f.'s for the ends of services and for the starts and ends of extra times. Furthermore, since the numbers of starts and ends of services (resp., extra times) at Q (i) during the first k events differ by at most 1, the corresponding limiting fractions coincide.

Theorem 5 For a polling system operating under a self-ruling server discipline, the p.g.f. of the joint queue length distribution at a service start, given that the server starts a service at Q (i) , follows from the quantities introduced above.

Proof Consider either a service or an extra time. The number of customers at the end of such a period is the sum of the customers that are present at the beginning of that same period plus the indirect replacements or the new customers that joined the system during that period. Moreover, S (i) e (z) can be written as the sum of all the p.g.f.'s at the start of the extra times during a visit to Q (i) . Combining these relations gives the result.

Remark 6 To derive this theorem, we could also have looked at the start of the service of an arbitrary customer during a visit. For the nth served customer with n ≤ N (i) s , the number of customers at Q (i) still to be served has decreased by n − 1, but there have also been n − 1 replacement moments. For n > N (i) s , we should also include the replacements during (at least) one extra time followed by extra services. Elaborating along this line gives the same result. We chose the approach along the lines of Eisenberg in [7] since it also applies in the more involved settings of the service-based strategy.

The queue length and workload at an arbitrary time

In continuous time, it is convenient to consider the cycle time, T C , of the server, that is, the time between two consecutive visit starts at, say, Q (1) . To find the expectation E[T C ], we remark that the expected number of customers served at Q (i) during a cycle of the server satisfies a relation analogous to Eq. (18); the expected cycle time of the server is then readily found.

Lemma 2 For a polling system with batch Poisson arrivals operating under a service-based SRS service discipline, the p.g.f. of the joint queue length distribution at a service start, given that the server is at Q (i) , follows analogously.

Proof We proceed in a similar way to the proof of Th. 5, with Eq. (22) replaced by S (i) e (z) = S (i) s (H (i) * ± (z))/z i , because here we have the service-based discipline.

Remark 7 Th. 5 still holds when we assume that the members of a family are served consecutively, where N (i) s then represents the number of first-generation customers served during a visit and the end of a service corresponds to the end of the service of the last customer of the family being served. In other words, the total time to serve a family is then seen as one service time.

Theorem 6 For a polling system with batch Poisson arrivals operating under a service-based SRS service discipline, the p.g.f. of the joint queue length distribution at an arbitrary time in the above continuous-time system can be expressed in terms of S (i) s (z), which is given in Lemma 2.

Proof Along the same lines as the proof of Theorem 1 in [3], we can write, by the stochastic mean value theorem, the p.g.f. at an arbitrary time as a mixture over the period types; let Q (j) S (z), Q (j) E (z), and Q O (z) denote the p.g.f.'s of the joint queue length distribution at an arbitrary time during a service at Q (j) , during an extra time at Q (j) , and during a switchover period between Q (j) and Q (j+1) , respectively. To find these p.g.f.'s, we have to multiply the p.g.f. of the queue length at the start of an interval by the p.g.f. of the number of customers arriving between the start and the arbitrary moment; we then use Eq. (22). The claim follows by rewriting the numerator of the RHS of Eq. (24).

Remark 8 Consider a system where customers visit only one queue.
We can then write S (i) e (z) as given in Eq. (22). Consider the marginal distribution of the queue length at Q (i) at an arbitrary epoch, obtained by evaluating the joint p.g.f. in 1 (i) z i , a vector of size M with entries equal to one and with the ith entry replaced by z i . By multiplying this p.g.f. by the p.g.f. of the number of customers arriving at Q (i) in front of an arbitrary customer for Q (i) in the same batch, we get the p.g.f. of the queue length observed by an arbitrary customer arriving at Q (i) . This is a well-known result for batch-arrival queues: the distribution at the departure of a customer equals the distribution at the arrival of a customer, where we assume that the customers enter one by one, albeit at the same time.

Remark 9 Using Little's law, we obtain the expectation of W T , the sojourn time of an arbitrary customer in the system, as well as the expectation of W (i) T , the sojourn time of an arbitrary customer per visit to Q (i) .

Before we give the LST of the workload in the system, we introduce some notation for the remaining service times T S± (t − s). Using these observations and following both the proof of Th. 6 and [3] leads to the following theorem.

Theorem 7 For a polling system with batch Poisson arrivals operating under a service-based SRS service discipline, the LST of the joint workload distribution at an arbitrary time satisfies an analogous decomposition.

Numerical results for a system with the 1 + 1 SRS discipline

In this section, we apply the results from the previous section to obtain numerical results for an example system with two queues with Poisson batch arrivals; one of the queues operates under the 1+1 SRS discipline (see Example 1) with additional idle time, while the other queue is served according to the gated discipline. In the example system, there is a single server attending two types of tasks, say tasks A and B. All jobs need task B before leaving the system, but some also need task A, where task A has to be finished before task B starts. It takes an exponentially distributed time with mean one for the server to switch from task A to B and vice versa, so that E[T (AB) ] = E[T (BA) ] = 1. The server, therefore, decides to handle first a (random) number of A tasks before switching to B tasks. The served task A jobs join the task B queue. After switching, the server handles all B tasks present at the start of his visit according to the gated service discipline. After handling these B tasks, the server switches immediately back to task A jobs. In line with the description above, we assume a 1+1 SRS discipline for task A, with the probability that a task A job is the last to be processed before the server switches to task B equal to p (A) . Moreover, a job will always go to the server, both for task A and for task B, so q (A) = q (B) = 1. If the server does not decide to switch before the queue for task A is empty, he waits 2 time units for another task A job to arrive. The service times of task A have an Erlang distribution. Customers arrive in batches of size N BS = 2 according to a Poisson process with rate λ = 1/3. A fraction p TB = 1/3 of the arriving customers goes directly to the queue of task B, independently of the other customers. Since each of the two customers in a batch independently starts with task B with probability 1/3, the joint p.g.f. of the numbers of jobs in an arriving batch starting with task A and with task B, respectively, is ((2/3)z 1 + (1/3)z 2 ) 2 . In the following, we determine for which p (A) the expected sojourn time of an arbitrary customer is minimal. Before we find the best value of p (A) , we first determine its possible range.
Let p (A) S be the supremum of all p (A) for which the system is stable, and suppose p (A) < p (A) S . This gives the total load ρ * of the system; obviously, for the system to be stable, ρ * < 1. Since the server switches after at most a geometrically distributed number of task A jobs, the average number of served task A jobs during a cycle is at most 1/p (A) . Combining this with Eq. (25), we find a necessary condition for stability, Eq. (26). To investigate whether this condition is also sufficient, we can argue as follows: intuitively, when p (A) gets closer to p (A) S , the expected number of task A jobs at the beginning of a cycle becomes large, which implies that P(N (A) X > 0) tends to zero. The expected number of task A jobs that can be handled during a cycle then equals 1/p (A) > 1/p (A) S , the expected number of task A jobs that arrive during a cycle for p (A) close to p (A) S . Hence, Eq. (26) also seems to be sufficient. For the parameters in the example system, this implies that p (A) S = 0.5.

In Table 1, we present the expected queue lengths for task A (N A ) and task B (N B ) at the beginning and end of a visit. Table 2 contains the queue lengths both at the beginning of a service and at an arbitrary time. Furthermore, we present the expected sojourn times of an arbitrary customer visiting task A (W A ), visiting task B (W B ), and in the whole system (W T ). Using a numerical search, we find that the value of p (A) minimizing the expected sojourn time is p (A) * = 0.15573.

Conclusion

We consider stable polling systems with a self-ruling server and find, for a class of polling systems with branching-type disciplines, a relation between the joint p.g.f. of the queue lengths at the end of a server visit to Q (i) and the joint p.g.f. of the queue lengths at the start of the visit to this queue. In [1, Sec. 6], the implementation of the method and its computational performance are analyzed. Along the lines of [7], once we have found the p.g.f.'s at the start and end of a visit, we derive the joint p.g.f. of the queue lengths at the start of a customer's service. For the general pure time-limited systems with Poisson arrivals, the departures of the server can be seen as a Poisson process, which implies that the p.g.f. at the end of a visit also represents the p.g.f. of the queue lengths seen by an arbitrary arrival. In this case, we can derive the joint LST of the workload at an arbitrary time in the queues and the expected sojourn time of an arbitrary customer in the queues.

A key assumption in this paper is the customer limit, which is geometrically distributed (p (i) is fixed per queue). As future work, it would be interesting to relax this assumption to cover customer limits with a discrete phase-type distribution. Another research direction is the determination of the stability conditions of the SRS disciplines.
To give an indication of why this is a study in its own right, consider the following counterintuitive example, in which extra arrivals make a system stable. Consider a system with two queues with the following characteristics:

- independent single Poisson arrival streams at the queues, with rates λ 1 (specified later) and λ 2 = 2;
- at Q (1) the service time is 0 and p (1) = 1, that is, the server always decides that the first customer is the final one, and the time the server waits idle for a customer to arrive, if any, has an exponential length with rate ξ = 1;
- the service times at Q (2) have an exponential distribution with rate μ 2 = 4, p (2) = 1, and the server leaves immediately after becoming idle, so during a visit to Q (2) , the expected number of arrivals to Q (1) is λ 1 /4 and to Q (2) it is 1/2.

When λ 1 = 0, Q (1) will be empty and the expected number of extra customers at Q (2) after an idle period at Q (1) is λ 2 /ξ = 2, so Q (2) is not stable, since the server serves only one customer per visit to Q (2) . On the other hand, when λ 1 = 3, both queues are stable. We can see this as follows: Q (2) will always become empty while Q (1) is nonempty, since the expected number of new arrivals to Q (2) between two visit starts at this queue is then λ 2 /μ 2 = 1/2; once Q (2) is empty, Q (1) will become empty as well. Note that, in this case, the expected number of extra customers at Q (2) after an idle period at Q (1) is λ 2 /(λ 1 + ξ) = 1/2. Since not every visit to Q (1) ends with an idle time, the expected number of new customers arriving at Q (2) between two visit starts at Q (2) is less than one, which makes Q (2) stable.

For some special cases of the exponentially time-limited systems, stability proofs can be found in [4, Sec. 3.3 and 3.4]; these are strongly related to the stability proof of Fricker and Jaibi [10] for a class of polling systems with non-preemptive and work-conserving service disciplines. This approach also seems promising for time-limited systems with general underlying branching-type service disciplines.
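To illustrate this example numerically, the following minimal discrete-event simulation sketch (ours; the visit rules encode one concrete reading of the example, and the horizon and seed are arbitrary) estimates the time-averaged total number of customers in the system. For λ 1 = 0 the estimate keeps growing with the horizon, while for λ 1 = 3 it settles at a modest value.

import random

def simulate(lam1, lam2=2.0, mu2=4.0, xi=1.0, horizon=5e4, seed=7):
    # Two-queue polling system from the example: zero service times and
    # p(1) = 1 at Q(1) (with Exp(xi) idle waiting), Exp(mu2) services and
    # p(2) = 1 at Q(2); switchover times are taken to be zero.
    rng = random.Random(seed)
    t, q1, q2 = 0.0, 0, 0
    nxt1 = rng.expovariate(lam1) if lam1 > 0 else float("inf")
    nxt2 = rng.expovariate(lam2)
    area = 0.0                      # time integral of q1 + q2

    def advance(upto):              # move the clock, processing arrivals
        nonlocal t, q1, q2, nxt1, nxt2, area
        while min(nxt1, nxt2) <= upto:
            s = min(nxt1, nxt2)
            area += (q1 + q2) * (s - t); t = s
            if nxt1 <= nxt2:
                q1 += 1; nxt1 = t + rng.expovariate(lam1)
            else:
                q2 += 1; nxt2 = t + rng.expovariate(lam2)
        area += (q1 + q2) * (upto - t); t = upto

    while t < horizon:
        if q1 == 0:                 # idle at Q(1): wait for an arrival,
            advance(min(t + rng.expovariate(xi), nxt1))   # at most Exp(xi)
        if q1 > 0:
            q1 -= 1                 # the single (final) customer, in zero time
        if q2 > 0:                  # one customer is served per visit to Q(2)
            advance(t + rng.expovariate(mu2))
            q2 -= 1
    return area / t

for lam1 in (0.0, 3.0):
    print("lam1 =", lam1, "-> mean number in system ~", round(simulate(lam1), 1))

Under the stated parameters this matches the back-of-the-envelope reasoning above: one customer is served per visit to Q (2) , so stability hinges on how often the server idles at Q (1) .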
13,505.8
2019-11-22T00:00:00.000
[ "Mathematics" ]
Coevolution of Stars and Gas: Using an Analysis of Synthetic Observations to Investigate the Star-Gas Correlation in STARFORGE

We explore the relation between stellar surface density and gas surface density (the star-gas, or S-G, correlation) in a 20,000 M ⊙ simulation from the STAR FORmation in Gaseous Environments (starforge) project. We create synthetic observations based on the Spitzer and Herschel telescopes by modeling contamination by active galactic nuclei, smoothing based on angular resolution, cropping the field of view, and removing close neighbors and low-mass sources. We extract S-G properties such as the dense gas-mass fraction, the Class II:I ratio, and the S-G correlation (Σ YSO /Σ gas ) from the simulation and compare them to observations of giant molecular clouds, young clusters, and star-forming regions, as well as to analytical models. We find that the simulation reproduces trends in the counts of young stellar objects and the median slope of the S-G correlation. This implies that the S-G correlation is not simply the result of observational biases but is in fact a real effect. However, other statistics, such as the Class II:I ratio and the dense gas-mass fraction, do not always match their observed equivalents in nearby clouds. This motivates further observations covering the full simulation age range and more realistic modeling of cloud formation.

INTRODUCTION

The majority of stars form in associations or groups within giant molecular clouds (GMCs; Lada et al. 1991; Krumholz et al. 2019; Cheng et al. 2022), which can vary greatly in size, from ∼10 to thousands of stars (Porras et al. 2004). Feedback from embedded clusters often quickly disperses the natal clump or even the entire GMC (Lada 2005; Krause et al. 2020). Therefore, the relationship between gas and young stellar object (YSO) density provides important clues about the star formation process and cloud evolution. Schmidt (1959) was one of the first to present an analytical model of the relationship between the star formation rate (SFR), and thus stellar mass, and gas density. That work suggested that SFR and gas density follow a power-law relationship.

This correlation was examined over the next several decades by a number of authors (e.g., Sanduleak 1969; Hartwick 1971). However, it was not until improved observational capabilities and analysis techniques in the 1980s and 1990s (e.g., Kennicutt 1989, 1998) that strong evidence was found for its viability. This work motivated an analogous relation, known as the Kennicutt-Schmidt (KS) law, that applies to line-of-sight surface densities of gas and the star formation rate per unit area: Σ SFR ∝ (Σ gas ) N . Henceforth, we refer to this relation as the Star-Gas or S-G correlation. This relationship has since been well characterized as a power law with an index of N ∼ 1.4 as applied to galaxy-scale star formation (see Kennicutt & Evans (2012) for a detailed review).

At smaller scales within individual galaxies, there is also evidence for the presence of an S-G correlation. For example, Bigiel et al. (2008) used HI, CO, 24 µm, and UV data to examine the S-G correlation at 750 pc resolution in 18 nearby spiral and dwarf galaxies. Many regions showed a strong power-law relation, although the power-law index varied from 1.1 to 2.7 based on position. They also observed that the star formation efficiency (SFE) decreased with galactic radius, which they argued implies a connection between environment and the S-G correlation.
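To make the scaling concrete, a quick arithmetic illustration (ours, using the indices quoted above): with N = 1.4, a region with twice the gas surface density of another has 2^1.4 ≈ 2.6 times the star formation rate surface density, while at the steep end of the Bigiel et al. (2008) range, N = 2.7, the same contrast yields 2^2.7 ≈ 6.5. Small differences in Σ gas thus translate into much larger differences in Σ SFR , and increasingly so for steeper indices.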
However, the methods used to measure the SFR on ≳ kpc scales, such as Hα, far-UV, and 24 µm emission, become less effective at smaller spatial scales. The results of Liu et al. (2011), as well as modeling by Calzetti et al. (2012), show that this kind of analysis breaks down with shrinking sample area because star formation is not well sampled statistically. Gutermuth et al. (2011) (G11 hereafter) demonstrated that the SFR calculated from far-IR luminosity (L FIR ; e.g., Heiderman et al. 2010) underestimates the SFR calculated from counts of YSOs in nearby young clusters by up to an order of magnitude. This is because measurements based on far-IR luminosity assume a well-sampled stellar initial mass function (IMF) and reliable sampling of the GMC mass function, so as to fully sample the lifetimes of high-mass stars. However, in order to satisfy these assumptions, measurements must be integrated over physical scales ≳ 1 kpc (Calzetti et al. 2012).

To avoid the smoothing inherent to measurements of star formation relations in other galaxies, some recent studies instead focus on individual star-forming regions in the local Milky Way, where it is possible to identify and count individual forming stars with high completeness. Since YSOs provide a direct measurement of the SFR, a simple estimate of the total mass converted to stars per unit time is given by SFR ≈ n YSO m YSO /t avg , where m YSO is the average mass of a YSO, n YSO is the number of YSOs, and t avg is the characteristic timescale for the YSO evolutionary stage or stages considered.

By utilizing YSO censuses from Spitzer, G11 and Pokhrel et al. (2020) (P20 hereafter) found and measured an intracloud S-G correlation with an index of N ≈ 2 in several nearby GMCs. While initial measurements varied widely (N = 1.5-4; G11), P20 reduced the intrinsic scatter in the measurements by adopting a uniform YSO extraction from the Spitzer Extended Solar Neighborhood Archive (SESNA), utilizing more robust Herschel-based GMC gas column density maps, and specifically using YSOs in the early stages of star formation. This led to N = 1.8-2.3 in 12 nearby clouds with gas masses varying over three orders of magnitude. Also, the scaling factor in the S-G correlation varies between clouds (Lada et al. 2013; G11; P20), but the scatter in the scaling factor is reduced significantly when it is normalized by the gas freefall time (Pokhrel et al. 2021). This implies that the SFE per freefall time has limited variation, which may indicate that local processes (e.g., protostellar outflows and stellar winds) govern and regulate star formation (Guszejnov et al. 2021; Pokhrel et al. 2021; Hu et al. 2022).

In order to gain a better understanding of how local processes impact star formation, it is useful to turn to theoretical models and numerical simulations. However, observed S-G correlations have only recently started to be incorporated as constraints for models of star-forming molecular gas. P20 used simulations by Qian et al. (2015) that used the ORION adaptive mesh refinement code (Truelove et al.
1998; Klein 1999) to create synthetic observations similar to observations taken by Herschel. That work reproduced similar S-G correlations for 12 nearby GMCs using hydrodynamic turbulent simulations and an analytical model of thermal fragmentation. While the simulation produced an S-G correlation very similar to observations, it did not include magnetic fields or kinematic feedback. In this work, we analyze a 20,000 M ⊙ run of the STAR FORmation in Gaseous Environments (starforge) project, the first massive GMC magnetohydrodynamics simulation to resolve individual stars while including multiband radiation, stellar winds, protostellar outflows, and supernovae (Grudić et al. 2021, 2022, etc.).

In order to most effectively compare the starforge simulation to observations, we construct synthetic observations according to the data used in P20, taking into account the known specifications and limitations of the Spitzer and Herschel data. In Section 2, we describe the specifics of the simulation snapshots and our methods for creating synthetic observations. In Section 3, we present results from our investigation into various star-gas properties in the simulation and compare them to observations. Discussion is provided in Section 4, and a summary and conclusions are given in Section 5.

starforge Simulations

The starforge framework is built on the gizmo meshless finite mass magnetohydrodynamics code (Hopkins 2015). The framework includes a variety of modifications that enable the modeling of individual forming stars and their interactions with the cloud environment. In this work we analyze the starforge simulation presented in Grudić et al. (2022). We briefly summarize the simulation properties here and refer the reader to Grudić et al. (2021) for a detailed description of the starforge numerical methods.

The simulation follows the evolution of a 20,000 M ⊙ cloud with an initial radius of 10 pc. The cloud turbulence is initialized so that the cloud is virialized, with α ≡ 5σ 3D 2 R cloud /(3 G M cloud ) = 2, where σ 3D is the gas velocity dispersion. The initial magnetic field is uniform in the ẑ direction and corresponds to a mass-to-flux ratio relative to the critical value for stability of µ ≡ 0.4 (E grav /E mag ) 1/2 = 4.2, where E grav and E mag are the total gravitational and magnetic energies, respectively.

The calculation follows the gas thermodynamics self-consistently, including treatment of line cooling, cosmic-ray heating, dust cooling and heating, photoelectric heating, hydrogen photoionization, and collisional excitation of both hydrogen and helium. The evolution of the dust temperature is coupled to the radiative transfer step. gizmo's radiation transfer module follows five bands, which cover the frequencies corresponding to ionizing radiation, FUV, NUV, optical-NIR, and FIR (Hopkins & Grudić 2019; Hopkins et al. 2020).

Once gas satisfies multiple criteria intended to identify centers of unstable collapse, Lagrangian sink particles are inserted, which occurs at densities of ρ max ∼ 10 −14 g cm −3 . The cell mass resolution is dm = 10 −3 M ⊙ , which allows the calculation to resolve the stellar mass spectrum down to ∼0.1 M ⊙ . The sink particles, henceforth referred to as stars, follow a sub-grid model for protostellar evolution and radiative feedback as described in Offner et al. (2009). The particles are also coupled to models describing protostellar outflow launching, stellar winds, and supernovae (Cunningham et al. 2011; Guszejnov et al. 2021; Grudić et al.
2021). The calculation continues until stellar feedback disperses the natal cloud and star formation concludes, which happens at ∼9 Myr.

The simulation has a final SFE of 8%, which agrees with statistical models of nearby galaxies. Protostellar jets dominate feedback for most of the simulation and are important for regulating the IMF, but they cannot wholly disrupt the cloud. Eventually, radiation and winds from massive stars create bubbles that expand and disrupt the cloud, drastically reducing star formation. By following the GMC evolution, Grudić et al. (2022) measure a relatively unambiguous IMF. It resembles the Chabrier IMF with a high-mass slope of α = −2 ± 0.1. The IMF is much more realistic than in previous simulations without full feedback. Feedback from the radiation and winds of massive stars limits the maximum observed mass to 55 M ⊙ , moderating the high-mass tail of the IMF. The integrated luminosity and ionizing photon rate are also very close to those of an equal-mass cluster with a canonical IMF. A more detailed study of the impact of various feedback processes and cloud initial conditions on the IMF is presented in Guszejnov et al. (2022). Grudić et al. (2022) also note the importance of directly comparing observations and simulations via synthetic observations, as we aim to do in this work.

To construct the stellar surface density, we require a minimum of 11 YSOs. The first snapshot with at least this number of sources is at 1.47 Myr. Altogether our analysis uses 16 snapshots, spaced 0.49 Myr apart, which span 1.47 to 8.80 Myr.

Constructing Synthetic Observations

For our analysis to better mirror that of P20, we create synthetic observations by including various considerations to bring our data closer to what might have been observed by Spitzer and Herschel. We refer to the analysis done with minimal adjustments, i.e., only 2D projection, age-to-class conversion, and a 0.01 M ⊙ mass cutoff (see below), as the "fiducial analysis", while analyses with further considerations are collectively referred to as "synthetic observations." The fiducial (minimally adjusted) case allows us to examine how well the simulation can reproduce various statistics and identify where observational biases may affect the agreement. In order to create these synthetic observations, we extract or compute the (line-of-sight-projected, when applicable) molecular number density of H 2 and the masses, coordinates, ages, and particle indices of the sink particles, which represent YSOs.

YSOs

YSOs fall into distinct groups based on their observed properties. Historically, these have been binned into representative classes (Lada 1987; Shu et al. 1987; Greene et al. 1994; Robitaille et al. 2007; Dunham et al. 2015), e.g., Class I, Class II, and Class III. Note that class does not have a direct mapping to source age, but it is often used as a proxy for evolutionary stage. YSOs in each class differ in the shape of their spectral energy distribution (SED), which depends on the characteristics of the circumstellar material around the YSO. Class Is are usually deeply embedded in cold, dense, and dusty gaseous envelopes, Class IIs have classical protoplanetary disks, and Class IIIs have mostly lost their disks (or the visible disk material has substantially coalesced into larger planetesimals that are generally invisible in the infrared).
For the first step of our analysis, we map each of the starforge stars to an observational class. Ideally, the stellar age would be employed to directly map each source to the appropriate spectral class. However, the average age and lifetime of each class is uncertain, since the individual classes are not completely distinct and the boundaries between them are somewhat arbitrary. Class lifetimes are inferred observationally using the relative number of sources in each class and by assuming a typical disk lifetime (e.g., Dunham et al. 2014). Consequently, a fully self-consistent class assignment would require constructing synthetic observations using radiative transfer to model the SEDs. Instead, we assign each star to a class based on its age (the time elapsed since the sink particle formed in the simulation) and adopt a statistical approach rather than an exact mapping.

We model the transitions from Class I → Class II and Class II → Class III as exponential decays, adapting the models and half-lives of the transitions from Kristensen & Dunham (2018) and Mamajek (2009) to represent the age-to-class conversion. Using these half-lives, we calculate two numbers corresponding to each source, f a = 2^(−t age /t 1/2;a ) and f b = 2^(−t age /t 1/2;b ), which correspond to the statistical weighting given to each source for transitions (a) (Class I → II) and (b) (Class II → III), where t age is the age of the YSO and t 1/2;a and t 1/2;b are the half-lives of the Class I → Class II transition (0.22 Myr) and the Class II → Class III transition (1.7 Myr), respectively. Then, we generate two random numbers (r a and r b ) for each source, using consistent seeds and the persistent source index from starforge, so that each YSO has the same r a and r b for the entire run. If r a < f a , the YSO is assigned to Class I. If not, we check whether r b < f b ; if so, the YSO is assigned to Class II, and if not, it is assigned to Class III.

By fixing r a and r b for each source, we ensure that the sources progress forward through the classes (I to II to III) as they age in the simulation. However, in actual observations, a YSO's trajectory may not be so linear. For example, Dunham et al. (2010) used models of accreting sources to show that YSOs undergoing episodic mass accretion may transition to an earlier Class. The notion that older sources can populate the earlier classes is also supported by the work of Hernández et al. (2007a), who observed what appear to be older, "evolved" disks. Another problematic assumption is that the Class lifetimes are the same in every environment, which is unlikely since protostars in areas of high YSO density tend to have greater luminosity (Kryukova et al. 2014; Cheng et al. 2022). Despite the approximate nature of our model for Class assignment, we find that it reproduces the expected YSO distributions well, whereas assuming an exact one-to-one mapping between age and Class leads to sharp transitions that do not match observations as closely.
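A compact sketch of this assignment scheme (our own illustration; the half-life survival form f = 2^(−t age /t 1/2 ) matches the exponential-decay model above, and the per-source seeding below is just one simple way to keep r a and r b fixed across snapshots):

import numpy as np

# Sketch (ours) of the statistical age-to-Class assignment described above,
# with f = 2^(-t_age / t_half) as the survival probability of each class.
T_HALF_A, T_HALF_B = 0.22, 1.7          # Myr; Class I->II and Class II->III

def assign_classes(ages_myr, source_ids, seed=42):
    # One fixed (r_a, r_b) pair per persistent source index, so sources
    # can only move forward through the classes between snapshots.
    fa = 2.0 ** (-ages_myr / T_HALF_A)
    fb = 2.0 ** (-ages_myr / T_HALF_B)
    ra = np.array([np.random.default_rng((seed, int(i), 0)).random()
                   for i in source_ids])
    rb = np.array([np.random.default_rng((seed, int(i), 1)).random()
                   for i in source_ids])
    return np.where(ra < fa, 1, np.where(rb < fb, 2, 3))

ages = np.array([0.1, 0.5, 2.0, 5.0])   # example ages in Myr
ids = np.array([10, 11, 12, 13])        # persistent starforge indices
print(assign_classes(ages, ids))        # e.g. [1 2 2 3]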
Next, in order to model the source confusion present in Spitzer observations, we inject Active Galactic Nuclei (AGN) contaminants. In Spitzer observations, background AGN can appear as YSOs of Class I and II with roughly equal probability (Gutermuth et al. 2008, 2009). To simulate this effect, we randomly place N Class Is and IIs within the dataset, where N was determined to be ∼9 per square degree (P20). This has the immediate effect of introducing many sources with low spatial density. This is especially significant for the synthetic clouds at closer distances due to the commensurately larger angular size of the cloud (see Figure 1, where it is clear that AGN dominate over YSOs in low gas density regions). We then correct for these contaminants following the method used by G11: we adopt a threshold of log Σ gas > 1.3 M ⊙ pc −2 for points on the S-G plot (see Section 3.4). We adopt the same distribution of AGN contaminants for all snapshots to ensure that the AGN stay the same (i.e., same position and Class).

We also model instrumental detection limits to account for undetectable low-luminosity sources. To replicate this in the synthetic observations, we implement a simple mass cutoff, removing sources below 0.08 M ⊙ (at 200 and 400 pc distance) or 0.2 M ⊙ (at 800 pc).

Last, we model Spitzer's limited angular resolution by removing stars in close proximity. When a source and its nearest neighbor (YSO or AGN) are within the adopted beam size threshold of 5″, we remove the lower-mass source. We assign AGN a mass of 1.1 M ⊙ to avoid losing them to the mass cutoff. We do only one pass to remove sources, but this is sufficient to remove the vast majority of close neighbors.

Gas

We construct 2D projected column density maps at cloud distances of 200, 400, and 800 pc, which are chosen to model the majority of the clouds in the P20 sample. Figure 1 shows one of these maps with a spatial distribution plot of YSOs and AGN contaminants.

The Spitzer and Herschel fields of view focus on regions of high column density (clumps) within the clouds. To simulate this, we crop the gas maps to the bounds set by a 10 21 cm −2 column density contour on a 120″-smoothed gas map constructed specifically for this purpose; this map is not used again in the further analysis. We smooth to keep small overdensities from artificially enlarging the cropping area. This greatly reduces the field of view compared to the full view, as shown in Figure 2, and makes our maps more similar to the Spitzer and Herschel data we compare with. Additionally, it significantly reduces the amount of low-density AGN contamination (see Figure 1 and Section 4 for more details). In order to simulate the angular resolution of Herschel, the gas maps are smoothed with a 36″ Gaussian kernel.

Overview Statistics

To better compare with observations, we first define a few bulk cloud properties. We define the total cloud gas mass, M gas , as the combined mass of gas at column densities of 10 21 cm −2 and above. Similarly, the dense gas mass, M dense , is the total mass of gas at column densities of 10 22 cm −2 and above. The dense gas mass fraction is then the ratio of dense to cloud gas mass. This metric gives an indication of the fraction of the cloud that is most likely to form clusters (Battisti & Heyer 2014; Heyer et al. 2016). We define the disk fraction as the ratio of the number of Class I and II YSOs to the total number of YSOs, regardless of circumstellar material. The disk fraction can be used as a proxy for the population age (Haisch et al. 2001; Hernández et al. 2007b). A similar statistic, the Class II to Class I ratio, is generally believed to be a good relative evolution indicator for YSOs, especially for earlier evolution (G11, P20).
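These definitions translate directly into code; a minimal sketch (ours; the H 2 column-density-to-mass conversion factor is an assumption adopted for the example, not a value quoted in the paper):

import numpy as np

# Sketch (ours) of the bulk statistics defined above. `colden` is a 2D map
# of H2 column density in cm^-2, `pix_area_pc2` the pixel area in pc^2, and
# `classes` an integer array of YSO classes (1, 2, 3). The conversion below
# (~2.8 m_H per H2, He included) is our own assumed factor.
MSUN_PC2_PER_CM2 = 2.2e-20

def bulk_stats(colden, pix_area_pc2, classes):
    surf = colden * MSUN_PC2_PER_CM2               # M_sun pc^-2 per pixel
    m_gas = surf[colden >= 1e21].sum() * pix_area_pc2
    m_dense = surf[colden >= 1e22].sum() * pix_area_pc2
    n1 = int((classes == 1).sum())
    n2 = int((classes == 2).sum())
    n3 = int((classes == 3).sum())
    return {"M_gas": m_gas,
            "dense_fraction": m_dense / m_gas,
            "disk_fraction": (n1 + n2) / (n1 + n2 + n3),
            "classII_to_I": n2 / n1 if n1 else float("inf")}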
Figure 3 shows the evolution of the gas properties with time for the fiducial case. The cloud mass and dense gas mass increase steadily over time and peak at ∼5.4 Myr. The maximum mass reaches about half of the 20,000 M ⊙ of gas that makes up the entire simulated GMC. After this point, the cloud mass decreases rapidly to less than 1/10 of the initial GMC mass. The dense gas mass fraction exhibits a similar trend, peaking at M dense /M cloud ∼ 0.6 at the same time.

Figure 4 shows that the numbers of Class I and II sources evolve in a similar way to the gas. Star formation increases steadily for the first 3.43 Myr, as indicated by the rising number of Class Is. After 3.43 Myr, star formation declines to 63% of the maximum within 2.45 Myr and then drops to only half this value in the next snapshot. The number of Class IIs evolves more gradually, peaking at 5.88 Myr, after which it steadily decreases to about half its maximum value by the end of the simulation. Figure 4 also overlays a model of a single cluster formation event, which supports the use of a cluster-derived model; it is shown with a vertical stretch and time-axis shift to make the model more visible without adjusting the main model parameters.

We also plot a tweaked version of the model with minor adjustments to better fit our assumptions and outputs. Namely, we shift the time axis by 1.47 Myr to match our snapshots, increase the SFR from 100 to 435 (a unitless metric that changes the vertical scale of the model), lengthen the rise and decay times for the Class Is from 0.5 to 1.7 Myr and from 0.5 to 1.5 Myr, respectively, and shorten the lifetimes of Class Is and IIs to be closer to (but not exactly the same as) the half-lives for our adopted Class transitions (0.5 to 0.3 Myr and 2.0 to 1.5 Myr, respectively). With these parameters, the model reproduces the fiducial starforge data remarkably well. This suggests the starforge simulation provides a good representation of cluster formation. As we shall see below, the simulation appears to agree less well with star formation observed in full GMCs, which generally contain multiple distinct star-forming regions and have longer and more complex star formation histories.

Figure 5 shows the evolution of the Class II:I ratio and the disk fraction in this simulation. The disk fraction starts near 1.0 and then decreases nearly linearly to 0.21 in the final snapshot. This is more drawn-out than the traditional disk fraction versus stellar age relation (e.g., Mamajek 2009). The starforge calculation exhibits a broad range of Class II:I ratios, spanning 1.3-19.0. For comparison, P20 recorded the Class II:I ratio and the cloud mass for 12 clouds at distances between 140 and 1400 pc. They found that the Class II:I ratio remained between ∼3.5-9.7 for each cloud observed, which is a much narrower range than we find in the starforge snapshots. However, the P20 values are uncorrected for AGN and edge-on disk contamination, which would likely change the Class II:I ratios, as will be seen below.

Using publicly available Herschel data (André et al. 2010, P20), we calculate the dense gas mass fraction of the clouds and clusters observed by P20 and Gutermuth et al. (2009). We adopt the publicly available YSO lists from SESNA, correct for AGN and edge-on disk contamination, and crop for coverage consistency and to the N(H 2 ) = 10 21 cm −2 limit. In the case of the Gutermuth et al.
(2009) data shown, we adopt all "cluster cores" that overlap with clouds from the P20 sample, and crop to square areas that are twice the diameter implied by the R circ radii listed in that paper, once converted to the most recent heliocentric distances reported in P20. Some of the selected areas of adjacent cluster cores overlap significantly. The assumed and computed data for these plots are listed in Tables 3 & 4.

Figure 6a shows that starforge and the clouds in P20 occupy different regions of the dense gas mass fraction - Class II:I ratio parameter space. The trajectory agrees better with the clusters from Gutermuth et al. (2009), except for the earliest snapshots. We could correct for this by assuming that some amount of ambient star formation occurs in the cloud before the main cluster forms, which would increase the Class II:I ratio, most noticeably in the early and late snapshots that have few Class I and II sources. This supports the implication that starforge more closely models the formation of a large cluster rather than star formation in a GMC. Inspection of Tables 3 & 4 indicates that the total gas mass and dense gas mass in the simulation are also more consistent with the ranges reported for the Gutermuth et al. (2009) clusters. We next apply a correction for AGN to the synthetic observations by removing 4.5 sources per square degree for both Class Is and IIs. We find that the synthetic observation trajectories exhibit strong agreement with each other and with the fiducial case (Figure 6b). This is expected, since we add that same density of AGN contaminants at the beginning of the synthetic analysis.

Evolution of the Star-Gas Fraction

The calculation of the S-G correlation in this work emulates the treatment of P20, allowing us to better compare the outcomes of the two. We calculate the n th nearest neighbor distance (NND) for each Class I YSO, for n = 11, using scipy.spatial.KDTree. KDTree uses the algorithm described by Maneewongvatana & Mount (1999) to create a binary tree over the source positions, which allows for the quick lookup and classification of nearest neighbors. We use n = 11 because it is a good compromise between spatial resolution (typically a 0.1-2 pc smoothing scale in nearby clouds) and low relative uncertainty (33%; Casertano & Hut 1985). This choice is consistent with Casertano & Hut (1985), G11, and P20. Using a circular mask with a radius equal to the NND, we calculate the area A n of each circular mask, the mean column density in each circle Σ C , and the ratio C of covered area to total area within each circle. C accounts for edge effects and is thus almost always unity. From this, we calculate Σ gas , the gas mass surface density. Σ YSO , the surface density of YSOs, is calculated following Casertano & Hut (1985) as Σ YSO = (n − 1) M YSO /(C A n ), where M YSO is the adopted mean mass per YSO and n, A n , and C are defined above. Except in our fiducial analysis, where we try to avoid as much observational bias as possible, we fix M YSO at 0.5 M ⊙ to keep the analysis consistent with P20.
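A short sketch of the NND-based surface density estimate (our own code; we assume the standard Casertano & Hut (1985) estimator with full coverage, C = 1):

import numpy as np
from scipy.spatial import KDTree

def yso_surface_density(xy_pc, n=11, m_yso=0.5):
    # Sketch of the nth-nearest-neighbor surface density estimate,
    # Sigma_YSO = (n - 1) * M_YSO / A_n with A_n = pi * d_n^2 and C = 1.
    # `xy_pc` holds projected source positions in pc.
    tree = KDTree(xy_pc)
    # k = n + 1 because the first returned neighbor is the source itself.
    d, _ = tree.query(xy_pc, k=n + 1)
    d_n = d[:, n]                          # distance to the nth neighbor
    area = np.pi * d_n**2                  # A_n for each circular mask
    return (n - 1) * m_yso / area          # M_sun per pc^2

# Illustration with random positions (for demonstration only):
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(200, 2))
print(yso_surface_density(pts)[:5])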
Figure 7 shows the median Σ YSO /Σ 2 gas versus time, which captures the vertical offset and spread around the power-law fit (G11). While little more than a general positive trend with time (increasing stellar density as a function of gas density) is immediately clear, the Class I and II values remain close to each other for the first ∼6 Myr. After this point, the populations no longer appear correlated. This points to a large-scale decoupling of the YSOs from their surrounding gas at around 6-6.5 Myr, which is supported by visual examination of the snapshots. Figure 8 shows a snapshot before decoupling occurs (3.42 Myr) and a snapshot after decoupling occurs (7.82 Myr). Nearly all YSOs reside near or within dense gas before the decoupling, but afterwards the two populations are significantly less correlated.

The dense gas mass fraction peaks at ∼5.38 Myr (Figure 3). This is when feedback begins to disperse the cloud (see Figure 2), and there is a ∼1 Myr lag before the effects are seen in the other statistics. For example, Figure 5 shows that the number of Class Is drastically declines and the number of Class IIs peaks at ∼6.36 Myr, which causes the Class II:I ratio to rise significantly. And, as mentioned above, this is also the time when the Class Is and IIs in Figure 7 appear to decouple.

Star-Gas Correlation versus Time

Figure 9 shows the slopes and uncertainties of the S-G correlations for the fiducial analysis, along with the three sets of synthetic observations, as a function of time. Most of the slopes lie relatively close to 2.0; however, the well-correlated slopes lie either above or below 2.0, usually localized around 2.4-2.5 or 1.7-1.8. Over half of the fiducial snapshots visually appear to have a tight YSO and gas surface density correlation, with an uncertainty in the slope of ≤ 0.2. This provides significant evidence that the power-law S-G correlation is a real effect resulting from the underlying physics and not from observational bias (see Figure 13 in the Appendix for the fiducial S-G correlation plots).

(Figure 6 caption: observed values are corrected for AGN and edge-on disk contamination and cropped for coverage consistency and N(H2) > 10 21 cm −2 , so they differ to varying degrees from the raw values reported in those works. Black points and line represent the time evolution trajectory for the fiducial analysis of this work, starting from the bottom left. Right: fiducial trajectory overlaid with trajectories from the synthetic analyses at different distances, corrected for AGN contamination. Note that points at high Class II:I ratio are highly uncertain (e.g. Figure 5c).)

(Figure 7 caption: Median value of Σ YSO /Σ 2 gas versus time for both Class I and Class II sources. In addition to showing the increasing stellar density as a function of gas density, these values are closely correlated until ∼6 Myr (dashed vertical line). At this time, feedback clearly begins to disrupt the gas (see Figure 3), thereby inducing decoupling of the gas structure and the YSO distribution.)

However, many of the snapshots are not well correlated, appearing as a clump of points that lie on the
expected line but do not span a significant range of surface densities. This is especially true for snapshots with fewer than ∼100 Class I sources, since this often leads to poorly constrained slopes with error bars as large as 0.6. This difficulty hinders comparison with previous observations, as many of the observed clouds in G11 and P20 have many more sources and more completely populate the S-G space and thus the S-G correlation. However, the addition of synthetic observation effects, especially adding AGN or removing close neighbors, can artificially compensate for this by filling out the low-density region and depleting the high-density region of the plot, respectively. This is discussed in Section 3.4 below. In addition, the S-G correlations for each snapshot of the simulation in the fiducial and 200, 400, and 800 pc synthetic analyses can be found in the Appendix.

Figure 9 illustrates the evolution of the S-G slope as a function of time in the simulation. While the shape, slope, and scatter of the S-G correlation do not change monotonically with time, there are several features that are roughly independent of distance and of the presence of the synthetic considerations. This implies that the synthetic observation effects do not obscure the underlying physics, except in snapshots where low-number (of YSOs) statistics are significant (e.g., the poorly correlated snapshot in Figure 12). Figure 9 shows that the S-G slope declines until ∼4 Myr (most noticeably at closer distances), at which point the number of Class I sources peaks. From ∼4-6 Myr the S-G slope increases as the number of Class II sources continues to rise; the peak in the S-G slope at 6 Myr coincides with the peak in the number of Class II sources. After 6 Myr the S-G slope declines sharply as feedback begins to disperse the cloud in earnest. Many of the snapshots around this time also have poor S-G correlations. Even though much of the cloud gas is dispersed from the central region, the YSOs' dynamics take longer than the gas to respond to the changing gravitational potential. However, star formation still occurs in the remaining pockets of dense gas, maintaining some degree of S-G correlation in the later snapshots (Figure 1).

While it is clear that some star-gas statistics evolve with time, the slope of the S-G correlation appears relatively constant across the history of the cloud. This is consistent with the observations of G11 and P20, who found little variation in the slopes across a wide collection of GMCs with very different ages. The spread in the S-G slopes and fit uncertainties is significantly larger for the simulations than for the observations of G11 and P20, however. This comparison may not be fully equal, as there is a selection effect on which clouds are actually observed and included for analysis. For this reason, the observed clouds may span a narrower range of the cloud lifetime: young clouds with little star formation may not be identified as distinct and/or interesting star-forming regions and thus will be excluded, while older clouds that are in the process of dispersing are excluded since they have little remaining dense material. As a result, the especially poorly populated early and late snapshots are not well represented in real data, as it is difficult to find and observe very young and very old clouds. More observational work will be needed to compare with these snapshots more effectively.

(Figure 9 caption: In many of the earlier snapshots, undersampling causes large uncertainties and produces slopes that are discrepant with observations. We limit the y axis to better compare differences between the runs; this obscures the (unreasonable) points of some of the earlier snapshots. See Figures 13-16 for full slope and uncertainty values. The leftmost point in the bottom right panel is missing since there is no slope for that individual snapshot (see Figure 16).)
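For concreteness, a minimal sketch of the kind of log-log power-law fit used for the S-G slopes (our own stand-in using ordinary least squares; the paper's exact fitting procedure may differ):

import numpy as np

def fit_sg_slope(sigma_gas, sigma_yso, log_cut=None):
    # Power-law fit Sigma_YSO ~ Sigma_gas^N in log-log space, returning the
    # slope and its 1-sigma uncertainty; `log_cut` mimics the kind of
    # low-column-density cut used to suppress AGN contamination.
    x, y = np.log10(sigma_gas), np.log10(sigma_yso)
    if log_cut is not None:
        keep = x > log_cut            # e.g. log_cut = 1.3 (Sigma_gas in M_sun/pc^2)
        x, y = x[keep], y[keep]
    (slope, intercept), cov = np.polyfit(x, y, 1, cov=True)
    return slope, np.sqrt(cov[0, 0])

# Illustration with fake data drawn around a slope-2 relation:
rng = np.random.default_rng(1)
sg = 10 ** rng.uniform(1.0, 3.0, 300)
sy = 10 ** (2.0 * np.log10(sg) - 2.0 + rng.normal(0, 0.2, 300))
print(fit_sg_slope(sg, sy, log_cut=1.3))   # slope near 2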
Demonstration of Synthetic Effects on the Star-Gas Correlation

In this section, we explore how each synthetic effect impacts the apparent S-G correlation. Figures 11 and 12 compare the fiducial S-G correlation with those obtained for five different synthetic effects.

The first effect we add to the synthetic observations is the adoption of a uniform YSO mass. Figure 10 shows the mean and spread of YSO masses in the simulation; as can clearly be seen, a fixed average mass does not represent the true average mass well, which varies by a factor of ∼10 over time. However, since individual real YSO masses cannot be directly measured, observational analyses such as P20 must adopt some approximation. Figures 11b and 12b illustrate the S-G correlation assuming a uniform YSO mass of 0.5 M ⊙ . Comparing panels (a) and (b) indicates that using the true masses of the sources has little effect on the S-G correlation. While the points move slightly in the vertical direction, the slopes change by less than 0.1. Consequently, source mass appears to have a relatively minor impact on the S-G correlation. Given these minor effects, the uniform mass is used in the rest of the demonstration.

The impact of the removal of close neighbors is more significant. When multiple close sources appear as a single source, the effect is to remove many of the highest-density points in the S-G relationship, as shown in Figure 11c. This, in turn, flattens the power-law slope, for example bringing the slopes of most snapshots (all snapshots between 2 and 6.5 Myr) within 2σ of 2.0 (see Figure 9). The earlier and later snapshots tend to be (often significantly) less well sampled, which likely explains their inconsistent slopes (as shown in Figure 12). The impact of this effect increases with distance, as the 5″ minimum separation imposed on the YSO lists translates to larger physical separations. This can be observed qualitatively in Figures 13-16 in the Appendix.

Figure 11d illustrates the effect of detection limits on the S-G correlation. We find that implementing a mass sensitivity limit significantly decreases the number of sources at all densities, which increases the fit uncertainties across all snapshots. However, this does not significantly change the slope in well-correlated snapshots. In contrast, Figure 12 shows that the addition or removal of a single point can significantly change the slope in a snapshot with fewer YSOs. This effect is also more extreme at larger distances. Specifically, at 200 and 400 pc the mass limit is the hydrogen-burning limit, 0.08 M ⊙ , while the limit at 800 pc is 0.2 M ⊙ , significantly reducing the number of sources with which to calculate the S-G correlation. Compare Figure 16 with Figures 14 and 15 in the Appendix for a visualization of how the number of sources decreases with increasing distance.
Next we investigate the impact of AGN contamination on the S-G correlation. The addition of AGN has a significant impact, as shown in Figures 11e and 12e. Since the AGN are uniformly randomly distributed throughout the field of view, they add a relatively constant (Σ YSO ≈ 0.3, 0.1, and 0.03 M ⊙ pc −2 at 200, 400, and 800 pc, respectively) "foot" of points to the bottom of the S-G correlation. This disproportionately affects low-Σ YSO regions and artificially flattens the power law. The flattening increases with distance, so much so that the slope of the S-G correlation for every snapshot at 200 pc and 400 pc would lie below 2 at all times. However, we follow observational convention and implement a column density cutoff when fitting the slope, as described below.

For nearby clouds, the number of YSOs observed at relatively low column densities is small, which causes observations of those regions to be dominated by AGN contaminants. To deal with the similar issue of our synthetic AGN, we adopt the same approach as G11: we remove YSOs in our catalog in regions with log(Σ gas ) < 1.3 M ⊙ pc −2 and refit the remainder. This is demonstrated in Figures 11e,f and 12e,f. Applying this treatment to the synthetic observations with AGN confirms that such a cut is justified to minimize the bias of the fit caused by AGN contamination. After applying this cut, most slopes steepen and approach the expected value of ∼2.0 (Figure 9).

The presence of many well-correlated S-G relationships for the fiducial starforge run implies that the S-G correlation is a physical phenomenon and not solely the result of observational biases. However, the addition of synthetic observation considerations does artificially lower the slope of the S-G correlation for many of the snapshots, generally increasing agreement with observations. This raises the question of whether the very consistent value of 2 determined by P20 is partially caused by observational effects; in that case, the S-G correlation slope would not be as invariant and universal as it appears in P20. However, in the latest snapshots, which agree better with the P20 clouds in Class II:I ratio versus dense gas mass fraction (Figure 6), the S-G correlation slopes are much lower than observed.

Nonetheless, it is striking that the broad range of evolutionary stages spanned by one cloud, modeled with all key physical effects, produces a relatively uniform power-law slope. Once star formation is underway, stellar feedback helps to regulate the relationship between dense gas and YSOs. Clouds with particularly high Class II:I ratios are likely dominated by stellar feedback and in the process of cloud dispersal. Follow-up observations that minimize observational effects are required to fully constrain whether, and to what extent, these biases conspire to produce an S-G power-law slope of ∼2.

Comparison to Previous Work

Chevance et al. (2022) find that GMCs in nine nearby disc galaxies usually disperse within ∼3 Myr after unembedded high-mass stars emerge. While not directly measured in this work, we believe the first high-mass stars likely emerge shortly before feedback begins dispersing the cloud in earnest. We estimate dispersal to become qualitatively significant sometime between ∼5.4-6.4 Myr, as described in Section 3.
And, considering the GMC is nearly completely dispersed by 8.8 Myr, the simulations are consistent with the observed ≲3 Myr time frame, as well as with the ∼10 Myr total cloud lifetime they estimate.

(Figure 11 caption, panels b-f: b) Adopting a uniform YSO mass produces a slight vertical shift in the points, but it does not significantly change the slope or the uncertainty of the S-G correlation; we adopt the uniform mass for the rest of the panels. c) The removal of close neighbors that would have been indistinguishable by Spitzer; this predominantly removes high-density points, lowering the slope. d) The removal of low-mass sources that would have been undetected by Spitzer; this removes points without visible bias towards density, increasing the uncertainty in the slope. e) The addition of AGN; this predominantly adds low-density sources, lowering the slope. f) All previous synthetic effects at once; the slope is much closer to 2. Black dashed lines represent the density cutoff imposed to account for the presence of AGN; the slopes and best-fit lines for e) and f) are based only on points to the right of the black line.)

As mentioned in the Introduction, P20 adapted HD simulations by Qian et al. (2015) to create synthetic observations of their 12 observed clouds. These synthetic observations included 2D projection and neighbor removal. The HD simulations produced slopes between 2.3-2.7, higher than the observed 1.8-2.3. The simulations are also limited in density dynamic range compared to some of the clouds they model (see Figure 6 in P20). However, the simulated slopes are similar to the values of 2.0-3.0 we observe in the fiducial run (before cloud dispersal and excluding the first two snapshots; see Table 2). Caution is required when comparing with these simulations, since P20 modeled 12 different clouds at a single time, while this work models one cloud at many different times. Regardless, the main improvement of starforge over the simulations in Qian et al. (2015) is more realistic physics, especially magnetic fields and kinematic feedback. While magnetic fields do not play a very significant role in setting the slope of the S-G correlation, kinematic feedback allows starforge to evolve the GMC without driven turbulence (which was necessary for the simulations in Qian et al. 2015). While the starforge simulation starts with an initial turbulent setup, the turbulence, evolution, and dispersal of the cloud are regulated entirely by stellar feedback.

Model and Analysis Caveats

While starforge faithfully reproduces the S-G correlation in many snapshots, there are some areas for improvement. For example, Figure 6, which displays the Class II:I ratio versus dense gas mass fraction, shows that the simulation exhibits poor agreement with the P20 clouds. In contrast, the simulation agrees better with the cluster data from Gutermuth et al. (2009). This implies that the starforge simulation analyzed here produces something closer to a smaller, denser structure (i.e., a cluster) than to the stellar complexes formed in the GMCs of the P20 sample, which may be characterized by a longer and richer star formation history.

The drastic evolution in the number of Class Is with time highlights that the simulation SFR is not constant, in contrast with the assumption of a constant SFR made by G11, P20, and others when using class ratios (to infer age, for example). Figure 5 shows that this leads to Class II:I ratios that vary much more than in the P20 observations. Megeath et al.
(2022) argued that a variable SFR similar to that produced by the STARFORGE simulation is necessary to explain the ensemble of Class II:I ratios and disk fractions in nearby clusters. The agreement between this model and the STARFORGE data (Figure 4) provides more evidence that STARFORGE produces something more similar to a monolithic cluster than a full GMC (i.e., with several smaller, distinct clusters).

However, we caution that here we only analyze one simulation that aims to model the typical conditions of a Milky Way cloud. Future work is needed to explore the broad range of conditions modeled in the STARFORGE simulation suite, which includes clouds with varying initial magnetic field, turbulence, interstellar radiation field, surface density, cloud size, and cloud initialization (Guszejnov et al. 2022). In particular, the initial cloud setup, a uniform-density sphere, is a significant oversimplification of the complexity of forming and accreting molecular clouds. Overall, agreement with both data sets would likely be improved by using more realistic initial conditions. For example, a slower star formation start could increase the Class II:I ratio in the early snapshots, improving agreement. Simulations that begin with more realistic cloud initialization, such as a driven-turbulence sphere (Lane et al. 2022) or models that follow cloud formation from galaxy scales (Hu et al. 2023; Ganguly et al. 2023; Hopkins et al. 2023, in prep.), are likely necessary to advance agreement between the STARFORGE framework and observations.

One recent interesting aspect of STARFORGE comes from Grudić et al. (2023), who ran 100 STARFORGE simulations of 2000 M⊙ clouds and found a sharp mass cutoff on the IMF at 28 M⊙. This is in contrast to a simulation with similar parameters but 10 times the mass, which generated a 44 M⊙ star, and a simulation with 10 times the gas surface density, which generated a 107 M⊙ star. They suggest that the STARFORGE IMF has a high-mass cutoff that depends on the environment. This cutoff is generally different from the canonical 100−150 M⊙ cutoff, which leads them to conclude that the IMF cannot be reproduced in small clouds simply by randomly sampling from the full IMF.

Here we also outline some inconsistencies in our data processing. The bounds on the cropped field of view (see Figure 1) are set by the furthest extent of the N(H2) = 10^21 cm^−2 contour. This occasionally causes larger-than-intended fields of view when an area of gas denser than N(H2) = 10^21 cm^−2 is present away from the central cluster. The only major impact this has on the analysis is to increase the number of AGN when calculating the S-G correlation. However, this impact is largely mitigated by the low column density cut discussed previously. We also neglect a number of steps that would be needed to complete a fully "apples-to-apples" comparison with the observations. For example, we do not use radiative post-processing to construct the YSO SEDs (e.g., Offner et al. 2012). Nor do we construct synthetic dust continuum maps in order to compute the column density (e.g., Juvela 2019). These steps would allow us to apply the observational biases, such as the detection limits, more directly. However, we expect any impact on the S-G slope to be minimal, since the YSO positions and relative amounts of dense gas would be unchanged.
CONCLUSIONS

In this study, we examine a 20,000 M⊙ star-forming cloud in the STARFORGE simulation suite in order to investigate the presence and evolution of the S-G correlation. To effectively do so, we create synthetic observations to compare with previous observational work, specifically P20 and G11. These synthetic observations include 2D projection of gas and star particle distributions at multiple distances, an age-to-Class conversion for the simulated stars using an exponential decay model, AGN contamination, a low stellar mass cutoff, the removal of close (unresolved) neighbors, gas map smoothing to mimic limited angular resolution, and a field-of-view crop at a gas column density of N(H2) = 10^21 cm^−2.

Since most of these effects depend on distance, we place the cloud at 200, 400, and 800 pc to mimic the distances of star-forming regions observed by G11 and P20. This changes the angular size of the cloud, the number of AGN, the mass sensitivity limit, and the neighbor threshold. From these synthetic observations, we examine the dense gas fraction, YSO distribution and frequency, and the S-G correlation for the fiducial analysis and for the synthetic analyses at each distance.
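The distance scalings enumerated above are generic even though the normalizations are survey-specific. The sketch below uses placeholder values (the AGN surface density, the pair-resolution angle, and the 400 pc mass limit are assumptions, not the paper's numbers) to show how each synthetic-observation ingredient varies between 200, 400, and 800 pc.

import numpy as np

AGN_PER_DEG2 = 10.0  # hypothetical contaminant surface density (assumption)

def synthetic_observation_params(distance_pc, fov_pc=20.0,
                                 pair_resolution_arcsec=2.0,
                                 mass_limit_400pc=0.1):
    # Angular size shrinks as 1/d; AGN counts scale with angular area (1/d^2);
    # the physical pair-blending separation grows as d; and a flux-limited
    # survey's mass sensitivity degrades as d^2.
    fov_deg = np.degrees(fov_pc / distance_pc)
    n_agn = AGN_PER_DEG2 * fov_deg**2
    blend_sep_pc = distance_pc * np.radians(pair_resolution_arcsec / 3600.0)
    mass_limit = mass_limit_400pc * (distance_pc / 400.0)**2
    return dict(fov_deg=fov_deg, n_agn=n_agn,
                blend_sep_pc=blend_sep_pc, mass_limit_msun=mass_limit)

for d in (200.0, 400.0, 800.0):
    print(d, synthetic_observation_params(d))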
We find that the STARFORGE simulation successfully reproduces the S-G correlation in many snapshots and exhibits a typical S-G slope within 1σ of the observed slope of 2. The presence of the S-G correlation both with and without accounting for observational effects implies that this is a real relationship that is a product of the underlying physical processes. However, observational biases, such as AGN contamination, appear to strengthen the S-G correlation, reduce time variation, and promote a slope closer to 2.

We find that the Class II:I ratios and dense gas fraction characteristic of the STARFORGE simulation exhibit better agreement with those of the clusters in the Gutermuth et al. (2009) sample than the stellar complexes forming in the clouds in P20. No regions in either observational study match the low Class II:I ratios found at early times (< 3 Myr) in the simulation. This implies that the P20 and Gutermuth et al. (2009) clouds/clusters form stars at a low rate for a few million years. Thus, bias in cloud selection, which favors actively star-forming clouds with significant amounts of dense gas, possibly also contributes to the apparent universality of the S-G correlation.

The present study only considers the S-G correlation under one set of typical simulated cloud conditions. Future work is needed to examine the impact of cloud properties and more realistic initial conditions on the S-G correlation.

A. TABLES

In Tables 1 and 2, we present various statistics extracted from the analysis of the fiducial snapshot. We present statistics of the clouds and cluster cores from P20 and G09 used in our analysis in Tables 3 and 4. We have updated the data from P20 and G09 to the latest datasets from SESNA and Herschel, corrected for AGN and edge-on disk contamination, and adopted distances from P20.

B. S-G CORRELATION PLOTS

We present here, in Figures 13, 14, 15, and 16, the full collection of S-G correlation plots for the fiducial and synthetic analyses. The fiducial case represents analysis done with minimal adjustments, while the others contain all synthetic effects described in Section 2.2.

Figure 1. Projected N(H2) column density map of a 200 pc-distance cloud with the N(H2) = 10^21 cm^−2 contour over-plotted in green. Colored circles indicate the locations of YSOs and AGN. a) Full field of view of the simulation at ∼5.4 Myr. b) Zoomed (∼20-pc) field of view cropped to the furthest extent of the green contour at ∼5.4 Myr. AGN contaminants dominate the source counts in the low column density regions.

Figure 2. Projected N(H2) column density map with the N(H2) = 10^21 cm^−2 contour over-plotted in green and the N(H2) = 10^22 cm^−2 contour in magenta at (a) ∼2.4 Myr, (b) ∼5.4 Myr, and (c) ∼8.3 Myr. The green contour outlines the likely Spitzer field of view for an equivalent cloud. Note that the high-density (magenta) region coalesces as star formation increases and eventually breaks apart due to stellar feedback, which is in the process of dispersing the cloud in (c).

Figure 4. Evolution of each Class of STARFORGE-derived YSO counts in this work (black points) overlaid with analytical models adapted from Megeath et al. (2022): (a) number of Class I sources versus time, (b) number of Class II sources versus time, (c) number of Class III sources versus time. The orange lines are shifted and rescaled versions of the Megeath et al. (2022) models using their parameter selections, while the blue lines adopt parameter value adjustments to achieve strong agreement with the STARFORGE data.

Figure 5. (a) Class II:I ratio versus time increases steadily until about 6 Myr, at which point it jumps up and does not follow a consistent trend. (d) Disk fraction, i.e., the ratio of Class I and Class II sources to the total number of sources, decreases steadily, but more slowly than comparable observations based on mean stellar ages in real clouds. Error bars are calculated through standard error propagation.

Figure 6. Dense gas mass fraction versus Class II:I ratio. Left: blue triangles are values from nearby molecular clouds in P20. Orange triangles represent clusters in those clouds from Gutermuth et al. (2009). All observed data have been corrected for AGN and edge-on disk contamination, and cropped for coverage consistency and N(H2) > 10^21 cm^−2, so they differ to varying degrees from the raw values reported in those works. Black points and line represent the time evolution trajectory for the fiducial analysis of this work, starting from the bottom left. Right: fiducial trajectory overlaid with trajectories from the synthetic analyses at different distances, corrected for AGN contamination. Note that points at high Class II:I ratio are highly uncertain (e.g., Figure 5c).

Figure 7. Median value of Σ_YSO/Σ_gas^2 versus time for both Class I and Class II sources. In addition to showing the increasing stellar density as a function of gas density, these values are closely correlated until ∼6 Myr (dashed vertical line). At this time, feedback clearly begins to disrupt the gas (see Figure 3), thereby inducing decoupling of the gas structure and the YSO distribution.
Figure 8. Projected N(H2) column density map with the N(H2) = 10^21 cm^−2 contour over-plotted in green and the N(H2) = 10^22 cm^−2 contour in cyan at (a) ∼3.4 Myr and (b) ∼7.8 Myr. Note in (a) that most YSOs, especially Class I sources, remain close to or within the dense gas cyan contour. In (b), however, many of the YSOs are no longer correlated with the locations of the denser gas at either contour level, indicating YSO-gas decoupling. Existing YSOs (mainly Class IIs and IIIs) remain relatively stationary for the first few million years as the gas dissipates, being bound together by gravity. However, new Class Is continue to form in the denser gas (almost all Class Is are within the cyan contours).

Figure 9. Slope and uncertainty of the S-G correlation for each snapshot. The shaded region corresponds to the range of values observed by P20. In many of the earlier snapshots, undersampling causes large uncertainties and produces slopes that are discrepant with observations. We limit the y axis to better compare differences between the runs. This obscures the (unreasonable) points of some of the earlier snapshots. See Figures 13-16 for full slope and uncertainty values. The leftmost point in the bottom right panel is missing since there is no slope for that individual snapshot (see Figure 16).

Figure 10. Average combined Class I and II mass for each snapshot. Error bars represent the 95th percentile. It is clear that an assumed mass of 0.5 M⊙ does not accurately represent YSO masses at all times. However, this has little effect on the calculation of the S-G correlation (see Section 3).

Implications for the S-G Correlation

Figure 11. Comparison of different synthetic observational effects on a well-correlated snapshot. a) The "fiducial" analysis with no extra considerations. Each panel b) through e) demonstrates one synthetic effect each. b) In the calculation of the S-G correlation, a uniform 0.5 M⊙ mass for each source is used. There is a slight vertical shift in the points on the plot, but it does not significantly change the slope or the uncertainty of the S-G correlation. We adopt uniform mass for the rest of the panels. c) The removal of close neighbors that would have been indistinguishable by Spitzer. This predominantly removes high-density points, lowering the slope. d) The removal of low-mass sources that would have been undetected by Spitzer. This removes points without visible bias towards density, increasing the uncertainty in the slope. e) The addition of AGN. This predominantly adds low-density sources, lowering the slope. f) All previous synthetic effects at once. The slope is much closer to 2. Black dashed lines represent a density cutoff imposed to account for the presence of AGN. The slopes and best-fit lines for e) and f) are only based on points to the right of the black line.

Figure 12. Comparison of different synthetic observational effects on a poorly-correlated snapshot. Features of the figure are the same as in Figure 11. Note that this snapshot is particularly sensitive to the removal of a single high-density point.

Figure 13. S-G correlation plots for each snapshot in the fiducial analysis. Note that the first snapshot is extremely undersampled.

Figures 14-16. S-G correlation plots for the synthetic analyses (see Appendix B).

Table 1. Table of various fiducial snapshot statistics.

a Stellar completeness corrected by 0.163 following P20.
New constraints and discovery potential for Higgs to Higgs cascade decays through vectorlike leptons

One of the cleanest signatures of a heavy Higgs boson in models with vectorlike leptons is $H\to e_4^\pm \ell^\mp \to h\ell^+\ell^-$ which, in the two Higgs doublet model type-II, can even be the dominant decay mode of heavy Higgses. Among the decay modes of the standard model like Higgs boson, $h$, we consider $b \bar b$ and $\gamma \gamma$ as representative channels with sizable and negligible background, respectively. We obtained new model independent limits on the production cross section for this process from recasting existing experimental searches and interpret them within the two Higgs doublet model. In addition, we show that these limits can be improved by about two orders of magnitude with appropriate selection cuts immediately with existing data sets. We also discuss expected sensitivities with integrated luminosity up to 3 ab$^{-1}$ and present a brief overview of other channels.

Introduction

In models with vectorlike fermions, even a very small mixing with one of the Standard Model (SM) families forces the lightest vectorlike eigenstate to decay into W/Z/h and a SM fermion. If there are more Higgs bosons, as in models with an extended Higgs sector, the same mixing allows the heavy Higgses to decay into a vectorlike and a SM fermion. This leads to many new opportunities to search for new Higgs bosons and vectorlike matter simultaneously [1]. Limits from direct searches for vectorlike leptons are significantly weaker than for vectorlike quarks [1-6]. In addition, leptons in final states typically result in clean signatures. Thus searching for combined signatures of vectorlike leptons and new Higgs bosons is especially advantageous. In this work, we focus on the process:

pp → H → e_4^± µ^∓ → h µ^+ µ^−,   (1.1)

where H is the heavy CP even Higgs and e_4 is a new charged lepton (note that, in a small region of the parameter space, e_4^± µ^∓ is also a possible decay mode for the SM Higgs [7]). We obtain new constraints on this process from recasting existing experimental searches and find future experimental sensitivities by optimizing the selection cuts. This process appears, for example, in the two Higgs doublet model type-II with vectorlike leptons mixing with the second SM family introduced in refs. [2,8], and it was identified as one of the cleanest signatures of heavy Higgses in this class of models [1].

Table 1. The 13 TeV LHC production rates for H → hµ^+µ^− for various decay channels of the SM Higgs boson in the two Higgs doublet model type-II for m_H = 200 GeV, tan β = 1, and BR(H → e_4^± µ^∓ → hµ^+µ^−) = 0.5. The value for h → µ^+µ^− assumes that the µµh Yukawa coupling is not modified; in our model, however, it can be suppressed or enhanced, see ref. [8].

It was found that H → e_4^± µ^∓ can be the dominant decay mode of the heavy Higgs in a large range of parameters [1]. Moreover, as we will show, the high luminosity LHC is sensitive to this process even for branching ratios ∼ 10^−5. Depending on the decay mode of the SM-like Higgs boson, h, the process (1.1) leads to several interesting final states with rates summarized in table 1 for a representative set of parameters: m_H = 200 GeV, tan β = 1, BR(H → e_4^± µ^∓ → hµ^+µ^−) = 0.5. Each decay mode of the SM-like Higgs boson, h → bb, WW*, ZZ*, γγ, τ^+τ^−, µ^+µ^−, provides its unique signal [1].
A prominent feature of all these channels is that the dimuon pair produced with the SM Higgs does not peak at the Z boson invariant mass, as is the case for most backgrounds. Moreover, in most channels, it is possible to reconstruct the H and e_4 masses. Although specific searches for the process (1.1) do not exist, the particle content in the final states is the same as for pp → Zh or pp → A → Zh, and thus related Higgs searches constrain our process. We recast experimental searches for A → hZ → bb ℓ^+ℓ^−, where A is a heavy new particle and ℓ = e, µ, performed at ATLAS [9] and CMS [10], and for pp → h X → γγ X [11] and pp → Zγγ → ℓ^+ℓ^−γγ [12] performed at ATLAS. We set model independent limits on the production cross section of (1.1) in the bbµ^+µ^− and γγµ^+µ^− final states as functions of the masses of H and e_4. Then we suggest a simple modification of existing searches, the addition of the "off-Z" cut, which takes advantage of the two muons in the final state not originating from a Z boson; we show how the limits could be improved immediately with current data and indicate experimental sensitivities with future data sets.

After deriving model independent limits we interpret them within the two Higgs doublet model type-II. We first use the scan of the parameter space of this model for m_H < 2m_t presented in ref. [1], where constraints from electroweak precision observables (oblique corrections, muon lifetime, Z-pole observables, W → µν), constraints on pair production of vectorlike leptons obtained from searches for anomalous production of multilepton events [3], H → (WW, γγ), and h → γγ [1,13] have been included. In addition, we extend the scan for m_H > 2m_t and the whole range of tan β. We show how current experimental studies constrain the allowed parameter space and what we can achieve by means of optimized search strategies.

This paper is organized as follows. In section 2 we briefly summarize our analysis method, such as implementing the event simulations and setting the limits on our parameter space. The new constraints recast from the existing searches are shown in section 3, and the expected experimental sensitivities in the future with our suggested cuts are discussed in section 4. We study the impact of the new constraints and future prospects of existing and suggested searches on the two Higgs doublet model type-II with vectorlike leptons in section 5. We further analyze the parameters of a heavy Higgs above the tt threshold in section 6. Finally we give conclusions in section 7.

Analysis method

In this section we discuss the tools used for the event simulation and the statistical approach we adopt to set the limits. The new physics model is implemented in FeynRules [14], events are generated with MadGraph5 [15] and showered with Pythia6 [16]. The resulting StdHEP event files are converted into CERN ROOT format using Delphes [17]. Jets are identified using the anti-k_t algorithm of FastJet [18,19] with angular separation ∆R = 0.4. We present 95% C.L. upper limits calculated using a modified frequentist construction (CLs) [20,21]. In recasting the searches presented in refs. [9,11,12], we follow the method described in refs. [3,22], where a Poisson likelihood is assumed. In order to calculate the number of events, N_s^95, that corresponds to the 95% C.L. upper limit, we consider a set of event numbers {n_i} drawn from a Poisson distribution with expectation value b equal to the number of background events. For each n_i the signal-plus-background hypothesis is tested using the CLs method. The expected upper limit N_s^95 is the median of the {n_i} that pass the test.
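The procedure just described can be reproduced with a few lines of code. The sketch below is a minimal Poisson-counting CLs implementation (no systematic uncertainties, simple grid scan), not the exact machinery of refs. [20-22].

import numpy as np
from scipy.stats import poisson

def cls_upper_limit(n_obs, b, alpha=0.05, s_max=200.0):
    # CLs = P(n <= n_obs | s+b) / P(n <= n_obs | b); a signal yield s is
    # excluded at 95% C.L. when CLs < alpha.
    s_grid = np.linspace(0.0, s_max, 20001)
    cls = poisson.cdf(n_obs, s_grid + b) / poisson.cdf(n_obs, b)
    return s_grid[cls >= alpha].max()  # largest non-excluded signal yield

def expected_n95(b, n_toys=20000, seed=1):
    # Median expected limit: draw pseudo-data n_i ~ Poisson(b) and take the
    # median of the per-toy limits (the N_s^95 used in the text).
    rng = np.random.default_rng(seed)
    n_i = rng.poisson(b, n_toys)
    limits = {n: cls_upper_limit(n, b) for n in np.unique(n_i)}
    return float(np.median([limits[n] for n in n_i]))

print(cls_upper_limit(0, 0.0))  # null observation over null background: ~3 events
print(expected_n95(5.0))        # e.g. b = 5, as in the gamma gamma mu mu recast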
The 95% C.L. upper limits on the total pp → H → hµ^+µ^− cross section normalized to the production cross section of a SM-like heavy Higgs (H_SM) are given by

[(σ_H/σ_H_SM) × BR(H → hµ^+µ^−)]_95 = N_s^95 / [σ_H_SM × BR(h → f) × A_NP × ξ × L],   f = bb or γγ,   (2.1)

where A_NPbb and A_NPγγ are the MC level acceptances (calculated using the selection cuts of the analyses that we recast) of the bbµ^+µ^− and γγµ^+µ^− channels, ξ_bb and ξ_γγ are the detector level efficiencies, and L is the integrated luminosity. Note that the negligible background to the γγµ^+µ^− mode implies N_s^95(γγ) = 3 (with Poisson statistics, a null observation over a null background is compatible with up to three signal events at 95% C.L. [3,22]). For this reason the bbµ^+µ^− mode can provide a stronger constraint as long as

N_s^95(bb) / N_s^95(γγ) < [BR(h → bb) A_NPbb ξ_bb] / [BR(h → γγ) A_NPγγ ξ_γγ],

where the ratio of experimental efficiencies (ξ_bb/ξ_γγ) is about one, the ratio of Monte Carlo level acceptances (A_NPbb/A_NPγγ) varies between one and three, N_s^95(γγ) is almost constant, and N_s^95(bb) increases with the integrated luminosity, as can be seen in table 2.

New constraints from the 8 TeV LHC data

In this section we extract upper bounds on the heavy Higgs cascade decays we consider from existing searches with 20.3 fb^−1 of integrated luminosity at 8 TeV. The process H → hµ^+µ^− → bbµ^+µ^− is constrained by searches for A → hZ → bb ℓ^+ℓ^−, where A is a heavy new particle and ℓ = e, µ. These searches have been performed at ATLAS [9] and CMS [10] (we focus on the former because they provide the explicit number of observed and expected events, allowing us to investigate the impact of the different cuts). The process H → hµ^+µ^− → γγµ^+µ^− is constrained by the h → γγ ATLAS search [11], where the results with an inclusive lepton cut are presented (pp → h X → γγ X), and also by pp → Zγγ → ℓ^+ℓ^−γγ [12]. The results that we obtain and describe in detail in the next three subsections are presented in figures 1-3. The constraints on (σ_H/σ_H_SM) × BR(H → hµ^+µ^−) are mostly constant as functions of the H and e_4 masses and vary in the range [0.1, 0.3]. We rapidly lose sensitivity for e_4 close in mass to either the SM or the heavy Higgs (the transverse momentum of one of the muons becomes too soft), or for a lighter heavy Higgs (the maximum value of the dilepton invariant mass is m_H − m_h, see eq. (4.1) and the related discussion, and the requirement of an on-shell Z cuts all signal events for small m_H).

Recast of the bbµ^+µ^− search

From the results presented in ref. [9] we extract the observed upper limit N_s^95(bb). We extract the detector level efficiency ξ_bb by comparing the expected number of Higgsstrahlung (pp → hZ) events given in ref. [9] to the fiducial number of events that we calculate. In this way our ξ_bb includes the effect of the profile likelihood fit of MC background events to the data in the control region. Using the Higgsstrahlung cross section presented in refs. [24,25] and the acceptances we calculate, we find ξ_bb ≈ 32%. The fiducial region adopted in ref. [9] is defined as follows. The two muons are required to have pseudorapidity |η| < 2.5 and transverse momenta larger than 25 and 7 GeV. Their invariant mass is required to lie in the range 83 GeV < m_ℓℓ < 99 GeV; note that this requirement cuts out a large part of our signal because we do not have an on-shell Z. A missing transverse energy cut E_T^miss < 60 GeV is imposed to reject the tt background.
In order to reduce the Z+jets background, a cut is imposed on the transverse momentum of the dilepton system that depends on the invariant mass of the two leptons and two b-jets. The two b-tagged jets are required to have |η| < 2.5 and p_T > 45, 30 GeV to suppress the Z+jets background. The invariant mass of the bb system is required to lie in the range 105 GeV < m_bb < 145 GeV. Finally, in order to improve the resolution of m_VH, the Higgs boson candidate jet momenta are scaled by m_h/m_bb, where m_h = 125 GeV. Using the observed and expected background events given in table 1 of ref. [9] we obtain N_s^95(bb) ≈ 88. In figure 1 we present the upper limits on pp → H → e_4^± µ^∓ → hµ^+µ^− for various choices of m_H and m_e4. The limits become very weak for m_H ≲ 215 GeV because of the hard lepton selection cuts. Note that in the type-II two Higgs doublet model the ratio of Higgs production cross sections, which we show on the vertical axes, depends on tan β. For tan β < 7 this ratio is given by cot²β to a good approximation. At larger values of tan β the bottom Yukawa coupling increases, implying a non-negligible impact on the bb and gluon fusion production cross sections. We express our result in terms of (σ_H/σ_H_SM) × BR(H → hµ^+µ^−) because the limits on this quantity are model independent.

Recast of the γγµX search

The fiducial region adopted in ref. [11] to study the γγµX final state is defined as follows. The diphoton event is selected when the invariant mass is in the range 105 GeV ≤ m_γγ < 160 GeV and p_T^γ > 0.35 (0.25) × m_γγ for the leading (next-to-leading) photon. At least one isolated lepton with p_T^µ > 15 GeV is requested. The majority of our signal events pass this inclusive lepton selection cut (N_ℓ ≥ 1), leading to a large acceptance. From table 3 of ref. [11] the upper limit on the fiducial cross section is about 0.80 fb at 95% C.L. The implied limit on the total production rate, of order 80 fb, is presented in figure 2 for various values of m_H and m_e4. We can see that the strength of this constraint is similar to that of the bbµ^+µ^− search.

Recast of the γγµ^+µ^− search

In ref. [12] ATLAS presented a study of the γγµ^+µ^− final state. The fiducial cuts adopted are E_T^γ > 15 GeV, p_T^µ > 25 GeV, m_µµ > 40 GeV, and ∆R_γγ,γµ > 0.4. Muons and photons are required to be isolated from nearby hadronic activity within a cone of size ∆R = 0.4. In order to place a constraint on our signal we consider only three bins with m_γγ ∈ [100, 160] GeV (from the right panel of figure 4 of ref. [12]). The observed number of events is 8 over a background of 5 (mainly from pp → Z(µ^+µ^−)γγ). The implied 95% C.L. upper limit on a new physics signal is 9.6 events. Using a detector level efficiency of 37.7% (as given in table 6 of ref. [12]), we obtain the bounds shown in figure 3. These bounds are slightly worse than those from the γγµX analysis, in part because in that analysis the observed limit was slightly better than the expected one, while in the γγµ^+µ^− analysis there was a small excess for 100 GeV < m_γγ < 160 GeV.

Expected experimental sensitivities

In this section we suggest new selection cuts to improve the sensitivity to our signal. First let us discuss the distribution of the invariant mass of the dilepton system, m_µµ. In our process the two oppositely charged muons are not produced from a Z decay. The analytic formula for m_µµ is

m_µµ² = (m_H² − m_e4²)(m_e4² − m_h²)(1 − cos θ) / (2 m_e4²),   (4.1)

where θ is the angle between the two muons in the heavy Higgs rest frame.
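A quick numerical check of this kinematics, and of how much signal survives the off-Z selections introduced below, can be made with the sketch that follows. The flat cos θ distribution is an illustrative assumption (the true angular distribution depends on the e_4 couplings), and the mass points are arbitrary.

import numpy as np

def mll_squared(m_H, m_e4, m_h, cos_theta):
    # Dilepton invariant mass squared for the cascade H -> e4 mu, e4 -> h mu,
    # following eq. (4.1).
    return (m_H**2 - m_e4**2) * (m_e4**2 - m_h**2) * (1.0 - cos_theta) / (2.0 * m_e4**2)

def off_z_fractions(m_H, m_e4, m_h=125.0, m_Z=91.2, n=200000, seed=2):
    # Fraction of signal passing the "off-Z below" and "off-Z above" dimuon
    # mass cuts, assuming an isotropic (flat in cos theta) decay.
    rng = np.random.default_rng(seed)
    mll = np.sqrt(mll_squared(m_H, m_e4, m_h, rng.uniform(-1.0, 1.0, n)))
    below = float(np.mean((mll > 20.0) & (mll < m_Z - 15.0)))
    above = float(np.mean(mll > m_Z + 15.0))
    return below, above

for m_H, m_e4 in [(215.0, 160.0), (250.0, 180.0), (340.0, 200.0)]:
    print(m_H, m_e4, off_z_fractions(m_H, m_e4))  # "above" empty for small m_H - m_h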
As discussed in ref. [1], a large part of our signal lies in the region |m_µµ − M_Z| > 15 GeV, allowing us to veto a major background process, Z + (heavy flavored) jets with Z → µ^+µ^−. For this reason we propose to consider separately the 20 GeV < m_µµ < M_Z − 15 GeV and m_µµ > M_Z + 15 GeV cuts (we added a lower limit m_µµ > 20 GeV to suppress the background events with µ^+µ^− from γ*). We call these the "off-Z below" and "off-Z above" cuts. In each panel of figures 4 and 5 we show such regions with blue vertical lines and arrows. We see that for small m_H − m_h and/or m_H − m_e4 the "above" cut is depleted of events. For the bbµ^+µ^− channel we keep the rest of the cuts in ref. [9] other than 83 GeV < m_ℓℓ < 99 GeV. Additionally we request that the invariant mass of all the final states, m_µµbb, should be within 10% of each m_H hypothesis. Profile likelihood fits can be used once actual data are available. For the γγµ^+µ^− channel we further impose a missing transverse energy cut E_T^miss < 60 GeV to suppress the background from top-quark decays. Moreover we request two leptons with p_T > 15 GeV.

Sensitivity of bbµ^+µ^−

We begin by studying how the sensitivity of the existing 8 TeV 20 fb^−1 bbµ^+µ^− search changes with the adoption of the new cuts we propose. This is controlled by the change in the expected number of background events, which is obtained by computing the ratio of acceptances of the new and original cuts:

b_new = b_0 × (A_B^new / A_B^original),   (4.2)

where b_0 is the number of background events given in ref. [9], and A_B^new and A_B^original are the MC level acceptances for the new and original cuts, respectively. The ratio of acceptances is obtained from a sample including Z + b-jets, tt, and Higgsstrahlung processes. The 95% CLs median upper limits N_s^95 obtained from the number of expected background events and the ratio of acceptances are shown in table 2 for our reference parameters m_H = 215, 250, 300, 340 GeV. We present separate results for the "off-Z below" and "off-Z above" cuts. Finally, the experimental sensitivities are obtained by inserting these limits in eq. (2.1).

To estimate the sensitivity at 13 TeV we start by considering the cuts used in the recent ATLAS analysis [26] performed with 3.2 fb^−1 of integrated luminosity at 13 TeV. Since we are interested in m_H < 340 GeV for now, we consider the low p_T^Z category. The basic cuts adopted in this search are the following. One of the two leptons must have p_T > 25 GeV with |η| < 2.5, and the invariant mass of the dilepton system should be in the 70 GeV < m_ℓℓ < 110 GeV window. Events with two b-tagged jets are selected when one of them satisfies p_T > 45 GeV on top of the basic b-jet selection criteria. The invariant mass of the two b-tagged jets must be in the range 110 GeV < m_bb < 140 GeV. In order to suppress the tt background, a cut on E_T^miss/√H_T is imposed, where H_T is the scalar sum of the p_T of the leptons and b-tagged jets. To improve the resolution of the bb ℓ^+ℓ^− resonance signal, the four momentum of the bb system is rescaled by m_h/m_bb with m_h = 125 GeV, as in the 8 TeV search [9]. Because the main goal of the search in ref. [26] is finding the resonant signal A → hZ, the four momentum of the dimuon system is rescaled by M_Z/m_µµ with M_Z = 91.2 GeV: this requirement strongly suppresses the acceptance of our signal, implying the absence of any constraint.
Our proposed cuts involve adding the "off-Z above" and "off-Z below" cuts described above, removing the rescaling of the four momentum of the dimuon system, and including the invariant mass cut |m_µµbb − m_H| < 0.1 m_H. The numbers of expected background events with integrated luminosities of 100 fb^−1 and 3 ab^−1 are calculated analogously to the 8 TeV case, and the corresponding N_s^95 are summarized in table 2.

Sensitivity of γγµ^+µ^−

The cuts that we suggest are those considered in ref. [11] (and described in section 3.2) with the inclusion of the "off-Z below"/"off-Z above" cuts, a missing transverse energy cut E_T^miss < 60 GeV (to suppress the htt final state), and the requirement of a second isolated muon with p_T > 15 GeV. Additionally one could add a veto on high p_T b-jets (for an additional suppression of the htt background) and a cut on the invariant mass of the γγµ^+µ^− system. The latter, in particular, could prove useful if non-irreducible sources of background turn out to be larger than expected. The background to the γγµ^+µ^− channel has been studied in detail in ref. [12] (E_T^γ > 15 GeV and p_T^µ > 25 GeV) and is found to be dominated by pp → Z(µ^+µ^−)γγ and pp → Z + γj, jγ, jj with one or two jets misidentified as isolated photons. These backgrounds are also found to decrease steeply with the transverse energy of the photon. The E_T^γ cuts that we suggest are much stronger (the hardest photon has E_T^γ > 37−56 GeV, depending on the diphoton invariant mass) than those considered in ref. [12] and make this background completely negligible (also taking into account the further reduction due to the off-Z cuts). Two more sources of background (that are not suppressed by a stronger E_T^γ cut) are pp → hZ → γγµ^+µ^− and pp → htt → bbµ^+µ^−γγν_µν̄_µ (presently we do not require vetoes on b-jets, hence any γγµ^+µ^−X final state is a background). At 8 TeV the combined total cross section for these two processes is about 35 ab, corresponding to 0.7 events with 20 fb^−1 before applying any selection cut; therefore, we set this background to zero and find N_s^95 = 3. At 13 TeV the combined cross section rises to 80 ab, corresponding to 8 and 240 events with 100 fb^−1 and 3 ab^−1, respectively; in this case a discussion of fiducial acceptances and detector efficiencies is crucial to estimate the expected background. Using these selection cuts we find that the fiducial acceptances for the "off-Z below" and "off-Z above" cases are 1.4% and 1.2%, respectively. Assuming an overall detector efficiency of about 37.7% (as suggested in ref. [12]), we then find that the expected numbers of background events at 13 TeV with 100 fb^−1 and 3 ab^−1 are 0 and 1, respectively: the corresponding N_s^95 are 3 and 4 events.
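The background bookkeeping above follows from expected events = cross section × luminosity × acceptance × efficiency. The short script below reproduces the quoted numbers; the ~1.3% acceptance used here is just the average of the two off-Z cases, chosen for illustration.

def expected_events(sigma_ab, lumi_fb, acceptance=1.0, efficiency=1.0):
    # 1 ab = 1e-3 fb, so sigma[ab] * 1e-3 * L[fb^-1] gives raw event counts.
    return sigma_ab * 1e-3 * lumi_fb * acceptance * efficiency

print(expected_events(35.0, 20.0))                  # 8 TeV, no cuts: ~0.7 events
print(expected_events(80.0, 100.0))                 # 13 TeV, no cuts: 8 events
print(expected_events(80.0, 100.0, 0.013, 0.377))   # with off-Z cuts: ~0 events
print(expected_events(80.0, 3000.0, 0.013, 0.377))  # 3/ab: ~1 event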
Constraints and future prospects in the two Higgs doublet model

In this section we study the impact of the limits derived in the previous sections (and indicate future prospects of existing and suggested searches) on the two Higgs doublet model type-II with vectorlike pairs of new leptons introduced in ref. [2]. We assume that the new leptons mix only with one family of SM leptons, and we consider the second family as an example. In figure 6 we present the parameter space scan of this model in the plane spanned by m_e4 and (σ(pp → H)/σ(pp → H_SM)) × BR(H → hµ^+µ^−) for four different heavy Higgs masses (m_H = 215, 250, 300, 340 GeV). The charged sector Yukawa couplings are scanned in the range [−0.5, 0.5], as described in ref. [1]. Each point satisfies precision EW data constraints related to the muon and muon neutrino: muon lifetime, Z-pole observables, the W partial width, and oblique observables. In addition, we impose constraints on pair production of vectorlike leptons obtained from searches for anomalous production of multilepton events [3] and constraints from searches for heavy Higgs bosons in H → WW, γγ discussed in refs. [1,13,22] and for the SM Higgs h → γγ discussed in ref. [1].

The solid red, blue, and green contours in figure 6 are the new constraints obtained from recasting the existing bbµ^+µ^−, γγµX, and γγµ^+µ^− searches. Note that the γγµX and γγµ^+µ^− constraints dominate at low m_H because the bbµ^+µ^− search loses sensitivity due to a strong cut on the transverse momentum of the hardest muon. Dashed contours indicate expected sensitivities using our proposed off-Z cuts for three scenarios of LHC energies and integrated luminosities: (8 TeV, 20 fb^−1), (13 TeV, 100 fb^−1), and (13 TeV, 3 ab^−1). The contours shown correspond to the "off-Z below" cut for m_H = 215 and 250 GeV and the "off-Z above" cut for m_H = 340 GeV. For m_H = 300 GeV both off-Z cuts result in similar bounds. A direct inspection of figure 6 shows that the analysis strategy we propose has the potential to improve the experimental sensitivity by between one and two orders of magnitude, depending on the heavy Higgs and vectorlike lepton masses.

From the sensitivities shown in figure 6 we see that the impact of the off-Z cuts is much more pronounced for the bbµ^+µ^− final state than for the γγµ^+µ^− one, and the expected bounds converge at very high integrated luminosity. The reason is that the background to the existing γγµ^+µ^− search is very small at all luminosities and, therefore, is not affected much by the additional off-Z cuts; in the bbµ^+µ^− channel the background is large and is sizably reduced by the cuts we propose. At very large luminosity the expected number of background events increases much more for bbµ^+µ^− than for γγµ^+µ^−, and the sensitivities of the two channels become comparable. At very high luminosities (beyond what is planned for the LHC) the diphoton channel would dominate. Overall, the potential for exclusion (discovery) of new physics in these channels in the next few years seems very strong: sensitivity to branching ratios of order O(10^−4 − 10^−3) is within reach and, correspondingly, a very large part of this model's parameter space will be tested. We should note that, in ref. [27], ATLAS presented a search for bbµ^+µ^− that makes use of multivariate techniques to massively reduce the irreducible background. While we were not able to use this analysis to place constraints on our model, we expect that a dedicated experimental study of the signal we propose using a similar approach has the potential to significantly improve the bounds we presented. The sensitivity could be additionally increased by looking for the e_4 → hµ → (bb, γγ)µ resonance.

Finally, let us briefly discuss the decay H → hµ^+µ^− with the SM Higgs decaying into the other possible channels we mention in table 1. The h → ZZ* decay yields a 4ℓ µ^+µ^− final state that has negligible SM background; nevertheless, the small branching ratio makes this channel less sensitive than the γγµ^+µ^− one.
On the other hand, the sizable h → τ^+τ^− branching ratio (about 6.3%) makes the τ^+τ^−µ^+µ^− final state competitive with the bbµ^+µ^− one; a detailed study of this final state from pp → A → hZ has been performed by both ATLAS [28] and CMS [29]. The h → WW* mode yields the 2ℓ2µ2ν final state and is expected to yield sensitivities even higher than the γγµ^+µ^− channel (both have negligible background and the former has a larger branching ratio). Finally, the h → µ^+µ^− decay yields a 4µ final state with a rate that depends strongly on the model Yukawa couplings (see the discussion in refs. [7,8]).

Heavy Higgs above the top threshold

In this section we discuss the constraints and prospects for m_H ≳ 2m_t, where the H → tt contribution to the heavy Higgs decay width reduces its branching ratio into vectorlike leptons. In this mass range, the heavy Higgs width into SM fermions is dominated by the tt channel at moderate tan β < 7 and by the bb channel at larger tan β. From the analysis presented in ref. [1] (see the bottom-left panel of figure 3 of that paper) it is clear that the H → e_4^± µ^∓ branching ratio can easily be dominant for all values of tan β ≲ 20 and m_H < 2m_t. This implies immediately that we expect BR(H → e_4^± µ^∓) to be sizable for Higgs masses above the tt threshold at large tan β (where the H → tt partial width is suppressed with respect to the H → bb one). For tan β < 7 the H → tt partial width becomes dominant, and we need a detailed numerical calculation in order to assess the size of the H → e_4^± µ^∓ branching ratio. In order to check whether large BR(H → e_4^± µ^∓) are allowed, we rescan the parameters for m_H above the tt threshold up to 800 GeV and allow only parameter space points that satisfy all the constraints discussed in ref. [1]: electroweak precision data, anomalous multilepton production with missing E_T, SM Higgs data for h → γγ, and heavy Higgs searches in the γγ and WW channels. As in the previous case, the charged sector Yukawa couplings are scanned in the range [−0.5, 0.5]. In figure 7 we show the resulting heavy Higgs partial widths (calculated assuming H is the heavy CP even Higgs) as a function of tan β. As discussed in ref. [30], the gg → H → tt resonant peak can be destroyed by interference with the SM background (especially for 400 GeV ≲ m_H ≲ 900 GeV and tan β < 15 in the aligned two Higgs doublet model type-II). For the CP odd Higgs this effect leads to more dip-like signals in a large range of parameters, but it is still hard to observe for m_H < 600 GeV and tan β ≲ 5. If the heavy Higgs couples to vectorlike leptons, as in the models we consider, the H → e_4^± µ^∓ channel offers a new and very promising avenue to discovery.
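The competition between the tt, bb, and e_4µ channels can be illustrated with the standard 2HDM type-II tree-level widths. The sketch below uses textbook expressions (up-type couplings scale as cot β, down-type as tan β) and a schematic two-body width for H → e_4µ with an assumed Yukawa coupling λ, so the numbers are indicative only, not the scan of the paper.

import numpy as np

GF, MT, MB = 1.1663787e-5, 173.0, 2.8  # GeV units; m_b is a running-mass estimate

def gamma_ff(m_H, m_f, coupling, n_c=3):
    # Gamma(H -> f fbar) = N_c G_F m_H m_f^2 / (4 sqrt(2) pi) * coupling^2 * beta^3
    beta2 = max(1.0 - 4.0 * m_f**2 / m_H**2, 0.0)
    return n_c * GF * m_H * m_f**2 / (4.0 * np.sqrt(2.0) * np.pi) * coupling**2 * beta2**1.5

def gamma_e4mu(m_H, m_e4, lam):
    # Schematic H -> e4 mu width for Yukawa coupling lam, neglecting the muon mass.
    return lam**2 * m_H / (16.0 * np.pi) * (1.0 - m_e4**2 / m_H**2)**2

m_H, m_e4, lam = 500.0, 250.0, 0.5
for tb in (1.0, 5.0, 20.0, 40.0):
    g_tt = gamma_ff(m_H, MT, 1.0 / tb)  # type-II up-type coupling: cot(beta)
    g_bb = gamma_ff(m_H, MB, tb)        # type-II down-type coupling: tan(beta)
    g_e4 = gamma_e4mu(m_H, m_e4, lam)
    print(tb, round(g_tt, 3), round(g_bb, 3), round(g_e4 / (g_tt + g_bb + g_e4), 2))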
In figure 9 we show the allowed parameter space in the m_H and (σ_H/σ_H_SM) × BR(H → hµ^+µ^−) plane. For simplicity we do not vary the vectorlike lepton mass and set it to m_e4 = 250 GeV; moreover, we consider only the region m_H < 2m_e4 to kinematically forbid the H → e_4 e_4 channel. Green, red, blue, and magenta points correspond to tan β < 1, 1 < tan β < 3, 3 < tan β < 20, and 20 < tan β < 50, respectively. From figure 8 we see that BR(H → e_4^± µ^∓) can be larger than 0.25 for 3 < tan β < 20. For larger tan β > 20 the heavy Higgs production cross section is enhanced compared to σ(pp → H_SM), so the values of (σ_H/σ_H_SM) × BR(H → hµ^+µ^−) are as large as those for 3 < tan β < 20. The solid black contour is the recast constraint from the 13 TeV A → hZ resonance search [26]. The expected sensitivities of future bbµ^+µ^− and γγµ^+µ^− studies are displayed as dashed lines. We conclude that the recast searches barely touch the allowed parameter space around (σ_H/σ_H_SM) × BR(H → hµ^+µ^−) ∼ 0.05. However, future searches employing the off-Z cuts that we propose have the potential to constrain this quantity at the 10^−5 level.

Conclusions

In this paper we discuss the Higgs cascade decay pp → H → e_4^± µ^∓ → hµ^+µ^−, which appears in models with extra vectorlike leptons and an extended Higgs sector.

Figure 9. Parameter space satisfying all the constraints discussed in ref. [1] for m_e4 = 250 GeV. We show estimates of the bound (black solid line) recast from the 13 TeV A → hZ resonance search [26] and future experimental sensitivities (black dashed lines) for integrated luminosities L = 300 fb^−1 and 3 ab^−1 at 13 TeV.

Among the various decay channels of the SM Higgs h we considered the bb and γγ ones, which yield bbµ^+µ^− and γγµ^+µ^− final states. These are two representative channels with sizable and negligible background, respectively. We were able to recast existing pp → A → hZ → bb ℓ^+ℓ^−, pp → h X → γγ X, and pp → Zγγ → ℓ^+ℓ^−γγ searches into constraints on the two modes we consider. We also presented the expected sensitivities of dedicated searches in the full 8 and 13 TeV data sets. A unique feature of the cascade decay we consider is that the two leptons do not reconstruct a Z boson, while the hµ and hµµ invariant masses peak at m_e4 and m_H, respectively. Therefore, we suggest employing two off-Z cuts that focus on the regions below and above the Z resonance: 20 GeV < m_µµ < M_Z − 15 GeV and m_µµ > M_Z + 15 GeV. In addition to these suggested cuts, searches for the two resonances corresponding to the H and e_4 masses will lead to even higher sensitivities. We find that this analysis strategy has the potential to improve the experimental sensitivity by between one and two orders of magnitude, depending on the heavy Higgs and vectorlike lepton masses.

We discussed an explicit realization of a new physics model in which this cascade decay is allowed to proceed with a sizable branching ratio. The model has been introduced in ref. [1] and involves a new family of vectorlike leptons and an extra Higgs doublet. We found that the vast majority of this model's parameter space that survives various indirect and direct constraints can be easily tested by searches for heavy Higgs cascade decays. One major result of our analysis is that the bbµ^+µ^− channel dominates over the γγµ^+µ^− one in most of the parameter space up to an integrated luminosity of 3 ab^−1 at 13 TeV. We also briefly discussed other possible channels and found that the τ^+τ^−µ^+µ^− and 2ℓ2µ2ν ones have the potential to offer constraints comparable to those obtained from the bbµ^+µ^− and γγµ^+µ^− modes. Furthermore, we discussed the reach of our search strategy for a heavy Higgs with mass above the tt threshold. We find that the H → e_4^± µ^∓ branching ratio can dominate over both H → tt and H → bb for 4 ≲ tan β ≲ 17 (4 ≲ tan β ≲ 32) when the charged sector Yukawa couplings are allowed in [−0.5, 0.5] ([−1, 1]). However, even in the range of parameters where our process has only a small branching ratio, it can be the most promising search channel, since the usual search strategies for H → tt suffer from interference effects with the SM background. Rough estimates of future experimental sensitivities are extremely promising.
Modeling and Stability Analysis for the Vibrating Motion of Three Degrees-of-Freedom Dynamical System Near Resonance

The focus of this article is on the investigation of a dynamical system consisting of a linear damped transverse tuned absorber connected with a non-linear damped spring pendulum, whose suspension point moves in an elliptic path. The governing system of motion is derived using Lagrange's equations and is then solved analytically up to the third approximation employing the approach of multiple scales (AMS). The emerging cases of resonance are categorized according to the solvability requirements from which the modulation equations (ME) have been found. The stability and instability regions are examined utilizing the Routh-Hurwitz criteria (RHC) and analyzed in line with the solutions at the steady state. The obtained results, resonance responses, and stability regions are addressed and graphically depicted to explore the influence of the various inputs of the physical parameters on the behavior of the inspected system. The significance of the present work stems from its numerous applications in theoretical physics and engineering.

Introduction

In the last two decades, researchers have produced numerous works trying to solve the problems of excessive vibrations of mechanical systems, including the use of absorbers to treat and absorb active and passive vibrations, e.g., [1-6]. The motion of a pendulum vibration absorber (PVA) with a spinning base is investigated in [2] to deal with vertical excitation. By altering the rotational motion, the characteristic frequency of the pendulum absorber can be modified dynamically over a large range. A longitudinal absorber is used in [3] to stabilize and regulate the vibrations of a spring pendulum with non-linear stiffness, expressing ship roll motion. To achieve a semi-closed form solution up to the second order of approximation, the authors used the approach of multiple scales (AMS) [7], investigating the response of the considered model near resonance cases. They applied the influence of an additional transverse absorber to generalize this problem, as in [4] and [5]. It is demonstrated in [6] how to autonomously modify the rotating speed of a PVA with two degrees of freedom (DOF) by identifying the phase between the PVA and the primary vibrations. The motion of the pivot of a simple pendulum with a rigid arm, connected with a longitudinal absorber, on an elliptic trajectory is examined in [8]. All resonance cases are generally grouped, and the case of two concurrent basic external resonances is examined. The generalization of this work is found in [9] for the case of a damped elastic pendulum instead of the un-stretched one. The ME are obtained and solved numerically to check the stability and instability regions in view of the RHC.

Description of the Vibrating System

A vibrational system with 3DOF is described in Figure 1, in which a damped elastic pendulum with natural length l_0, non-linear stiffness K_1 and K_2, and pendulum mass m_1 is considered. The pivot point O_1 of the pendulum is constrained to move in an elliptic route with stationary angular velocity Ω, while the pendulum's other end is attached to a linear absorber of mass m_2, natural length l_10, and linear stiffness K_3. According to the sketch of Figure 1, we may write the coordinates of O_1 along the axes OX and OY in the forms a cos(Ωt) and b sin(Ωt), respectively.
Here, the ellipse's minor and major axes are represented by a and b, respectively. On the auxiliary circle of radius b, the point corresponding to O_1 will be denoted by Q.

The system's motion is considered to be under the influence of an applied harmonic force F(t) = F_1 cos(Ω_1 t) acting along the spring's radial direction, as well as a harmonic moment M(t) = M_0 cos(Ω_2 t) at O_1 in the anticlockwise direction. Here, Ω_1, Ω_2 and F_1, M_0 are the frequencies and amplitudes of F(t) and M(t), respectively. The extensions of the spring and absorber are denoted by Z(t) and ξ(t), respectively. Furthermore, C_1, C_2, and C_3 indicate the coefficients of viscous damping for the spring's longitudinal motion, the swing oscillations, and the absorber's elongation, respectively.

According to Equation (1), Lagrange's function L = T − V can be determined, and then the governing system of motion can be obtained using the following Lagrange equations [30]:

d/dt(∂L/∂Ż) − ∂L/∂Z = Q_Z,
d/dt(∂L/∂Φ̇) − ∂L/∂Φ = Q_Φ,   (2)
d/dt(∂L/∂ξ̇) − ∂L/∂ξ = Q_ξ.

Here, (Q_Z, Q_Φ, Q_ξ) are the system's non-conservative generalized forces, while (Z, Φ, ξ) and (Ż, Φ̇, ξ̇) are the generalized coordinates and velocities, respectively. The forms of Q_Z, Q_Φ, and Q_ξ are

Q_Z = F_1 cos(Ω_1 t) − C_1 Ż,   Q_Φ = M_0 cos(Ω_2 t) − C_2 Φ̇,   Q_ξ = −C_3 ξ̇.   (3)

Substituting from (1) and (3) into (2), the dimensionless form of the governing system of equations of motion (EOM) takes the form of system (4). Here the parameters are dimensionless, the dots represent differentiation with respect to τ, and the generalized coordinates and their corresponding first derivatives are subject to prescribed initial conditions.
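As an illustration of this derivation, the sketch below uses sympy to form Lagrange's equation for a stripped-down version of the system: a rigid pendulum of length l_0 whose pivot rides on the ellipse, with the applied moment and swing damping entering through the generalized force. The spring and absorber degrees of freedom are omitted for brevity, so this is a schematic of the procedure rather than the full system (4).

import sympy as sp

t = sp.symbols('t')
m1, l0, g, Omega, a, b = sp.symbols('m_1 l_0 g Omega a b', positive=True)
C2, M0, Omega2 = sp.symbols('C_2 M_0 Omega_2', positive=True)
Phi = sp.Function('Phi')(t)

# Pivot O1 rides on the ellipse (a cos(Omega t), b sin(Omega t));
# the bob hangs a distance l0 below it at angle Phi from the vertical.
x = a * sp.cos(Omega * t) + l0 * sp.sin(Phi)
y = b * sp.sin(Omega * t) - l0 * sp.cos(Phi)

T = sp.Rational(1, 2) * m1 * (sp.diff(x, t)**2 + sp.diff(y, t)**2)  # kinetic energy
V = m1 * g * y                                                      # potential energy
L = T - V

# Generalized force on Phi: applied moment minus viscous swing damping.
Q_Phi = M0 * sp.cos(Omega2 * t) - C2 * sp.diff(Phi, t)

# d/dt(dL/dPhidot) - dL/dPhi = Q_Phi
eom = sp.Eq(sp.diff(L, sp.diff(Phi, t)).diff(t) - sp.diff(L, Phi), Q_Phi)
print(sp.simplify(eom))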
The Desired Solutions

In this part of the present research work, we use the AMS to acquire the approximate analytic solutions of the EOM (4), categorize the different cases of resonance, and extract both the solvability criteria and the ME. Then, we look at the oscillations of the system close to the static equilibrium position [31]. To accomplish this target, we approximate the functions sin Φ and cos Φ up to the third order as follows:

sin Φ ≈ Φ − Φ³/6,   cos Φ ≈ 1 − Φ²/2.   (5)

The damping coefficients, force and moment amplitudes, elliptic semi-axes, and other parameters can then be represented in terms of a small parameter 0 < ε ≪ 1, as in (6). As a result, we can express the functions z, Φ, and η in terms of ε and new functions, as in (7). According to the procedure of the AMS, we can write these functions as asymptotic expansions (8), where τ_n = ε^n τ (n = 0, 1, 2) denote new time scales, in which τ_0 is a fast time scale and τ_k (k = 1, 2) are slow ones. Because of the smallness of ε, the orders ε³ and higher have been excluded. In light of the supposed solutions (8), we need to transform the time derivatives with respect to τ into derivatives with respect to the scales τ_0, τ_1, and τ_2. Therefore, we consider the following differential operators:

d/dτ = D_0 + εD_1 + ε²D_2,
d²/dτ² = D_0² + 2εD_0D_1 + ε²(D_1² + 2D_0D_2),   (9)

where D_n = ∂/∂τ_n (n = 0, 1, 2). When (5)-(9) are substituted into (4), a family of partial differential equations (PDE) in powers of ε arises. Equating the coefficients of like powers of ε on both sides of each of these equations yields groups of PDE at orders ε, ε², and ε³ (systems (10), (11), and (12), respectively).

Based on the preceding groups of Equations (10)-(12), we can solve them sequentially. Accordingly, we start with the general solutions of the system of Equation (10), which take the forms (13). Here, B_j (j = 1, 2, 3) represent functions of τ_k (k = 1, 2) and B̄_j denote their complex conjugates. Making use of the above solutions (13) in the second group of PDE (11) yields secular terms, the removal of which demands the conditions (14). As a result, the second-order solutions (15) follow, where c.c. signifies the complex conjugate of the preceding terms. Substituting (13)-(15) into the third group of PDE (12) and removing the terms that produce secular ones gives the solvability requirements (16) of the third order of approximation. Based on the foregoing, we can phrase the third-order solutions in the forms (17), where q_s (s = 1, 2, 3, ..., 33) are given in Appendix A. In light of the removal conditions (14) and (16) of secular terms, we can estimate the functions B_j (j = 1, 2, 3). We can then easily acquire the desired approximate analytical expressions of z, Φ, and η up to the third approximation in view of the hypothesis (7), the expansions (8), and the attained solutions (13), (15), and (17).

Resonance Categories and Modulation Equations (ME)

In this section, we look at how to categorize the various cases of resonance based on the aforementioned solutions, which are legitimate as long as their denominators are not zero [8]. As these denominators approach zero, resonance cases emerge. These cases can be categorized into: the fundamental external resonances, which are met at p_1 = 1, p_2 = w_1, p_2 = w_2; the internal resonances, which are found at w_1 = 1, w_2 = 1, Ω = 1, w_1 = 2, 2w_1 = 1, w_2 = 2, Ω = w_1, Ω = w_2, w_1 = w_2, 3w_1 = w_2, 2w_1 = w_2, w_1 = 0, w_2 = 0, Ω = 0; and the combined resonance cases. It should be emphasized that if any of the prior resonance cases is achieved, the behavior of the examined system becomes complicated. As a result, the methods employed would have to be modified. We will look at two fundamental external resonances and one internal resonance that occur simultaneously to handle this situation.
As a consequence, we take into account the occurrence of all three of the following cases at the same time:

p_1 ≈ 1,   p_2 ≈ w_1,   3w_1 ≈ w_2.   (18)

These relations (18) indicate how close p_1, p_2, and 3w_1 are to 1, w_1, and w_2, respectively. To quantify this, the dimensionless detuning parameters σ_j (j = 1, 2, 3) (which characterize the distance of the oscillations from exact resonance) can be introduced as follows:

p_1 = 1 + σ_1,   p_2 = w_1 + σ_2,   3w_1 = w_2 + σ_3.   (19)

The order of σ_j can thus be inferred, as in (20). Substituting (19) and (20) into (11) and (12), and then removing the generated secular terms, we get the relevant solvability requirements of the approximated equations, namely (21). A careful inspection of the foregoing solvability conditions shows that we have a system of six non-linear PDE in terms of the functions B_j (j = 1, 2, 3), which depend on the slow scales τ_k (k = 1, 2). We can then introduce the polar form of these functions,

B_j = (a_j/2) e^{iψ_j},   (22)

where ψ_j and a_j denote real functions representing the phases and amplitudes of z, Φ, and η. Based on the above analysis, the first-order derivatives of the functions B_j (j = 1, 2, 3) can be stated as

dB_j/dτ = εD_1B_j + ε²D_2B_j.   (23)

Therefore, we can convert the PDE (21) into ordinary differential equations (ODE) by using (22), (23), and the modified phases (24) in the solvability requirements (21). Partitioning the real parts and the imaginary ones yields a system of six first-order ODE, system (25), in terms of a_j and θ_j (j = 1, 2, 3). This system reveals the ME for both a_j and θ_j (j = 1, 2, 3) of the three studied cases of resonance. For selected values of the physical parameters of the considered model, the solutions of the system are graphically displayed in distinct plots, as drawn in Figure 5, and for ω_1 = 3.354 as in Figure 6. When the amplitudes a_j and the adjusted phases θ_j are varied with time τ for distinct values of the damping coefficients c_j (j = 1, 2, 3) and the frequencies ω_k (k = 1, 2), we can assess the influence of these values on system (25).

According to the plotted curves in Figure 2, we observe that when c_1 takes various values, the time histories of the amplitude a_1 and the adjusted phase θ_1 behave as decaying waves until reaching a stationary behavior at the end of the investigated period of time, as seen in Figure 2a,b. The fluctuations of the a_2 and θ_2 waves with time τ are clear in the first quarter of the time interval and become stationary after that, as explored in Figure 2c,d. On the other hand, a sharp descent of the curves describing the waves of a_3 and θ_3 is observed in Figure 2e,f, which is due to the last two equations of system (25). There is no variation of the a_2, a_3 and θ_2, θ_3 curves with the change of c_1 values, owing to the formulation of the equations governing these curves. The change of the various values of c_2 is evident in the curves describing the time histories of the amplitude a_2 and the modified phase θ_2, because the third and fourth equations of system (25) depend on c_2, as seen in Figure 3c,d, while the other equations do not depend explicitly on c_2. Therefore, there is no variation, to some extent, of the curves describing a_1, a_3 and θ_1, θ_3, as drawn in the other parts of Figure 3. Since the last two equations of system (25) depend on c_3, an observed variation of the curves describing the modified phase θ_3 is found, as seen in Figure 4f.
There is no observed variation in the curves of the other variables because the first four equations of system (25) do not depend on c 3 explicitly, as indicated in the other parts of Figure 4. An examination of the system of equations (25) shows that these equations depend on ω 1 and ω 2 . Therefore, we expect a pronounced impact of these parameters on the time histories of a j (j = 1, 2, 3) and θ j , which is confirmed by the plotted curves of Figures 5 and 6. The curves describing the waves of these variables either oscillate in a decaying manner, as drawn in parts (a)-(d) of these figures, or monotonically decrease with time, as seen in parts (e) and (f) of the same figures. Based on this analysis, we come to the conclusion that the behavior of the system of equations (25) is stable and free of chaos. Figures 7-11 present the phase-plane diagrams of a j (j = 1, 2, 3) and θ j when c j and ω k (k = 1, 2) take various values. An inspection of these figures shows spiral curves directed toward a single point, which gives an impression of the steady motion of these amplitudes and phases. Parts of Figures 7-9 are drawn when c 1 , c 2 , and c 3 have different values, respectively, to reveal the variation of the curves of the phase planes a j θ j (j = 1, 2, 3) with these values, while Figures 10 and 11 describe the change of these planes at different values of ω 1 and ω 2 , respectively. According to the curves of these figures and the system of equations (25), we observe that the plane a 1 θ 1 is impacted by the various values of c 1 , as seen in Figure 7a, while there is no variation of the curves of the planes a 2 θ 2 and a 3 θ 3 , as indicated in Figure 7b,c. The curves of the phase planes a 2 θ 2 and a 3 θ 3 are impacted by the variation of c 2 values, as seen in Figure 8b,c. On the other hand, no variation is observed in the curves drawn in the plane a 1 θ 1 when c 2 changes, as noticed in Figure 8a. According to the plotted curves in Figure 9, we can see that the curves shown in parts (a) and (b), which describe the phase planes a 1 θ 1 and a 2 θ 2 respectively, show no variation with the various values of c 3 . The clear impact of the values of c 3 is observed in parts (c), (d), and (e) of Figure 9 for the phase plane a 3 θ 3 . The influence of the frequencies ω 1 (=3.316, 3.354, 3.391) and ω 2 (=3.212, 3.131, 3.084) on the phase-plane diagrams a j θ j (j = 1, 2, 3) is observed from the curves of Figures 10 and 11, respectively. Therefore, we can say that these curves have a spiral form from the outside to the inside, directed toward a single point for each curve, which means that all values of ω k (k = 1, 2) have a significant impact on the curves of these planes. The reason goes back to the equations of system (25), which depend explicitly on ω k (k = 1, 2). It is important to remember that the obtained approximate solutions z, Φ, and η describe the spring's elongation, the rotation angle at the point O 1 , and the elongation of the transverse absorber, respectively.
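The "spiral toward a single point" seen in Figures 7-11 is the signature of a stable focus: the Jacobian of the amplitude-phase equations at the fixed point has a complex-conjugate pair of eigenvalues with negative real parts. The check below uses the same schematic single-mode stand-in as above (assumed parameter values), not the paper's full system (25).

```python
import numpy as np
from scipy.optimize import fsolve

c, f, sigma = 0.02, 0.005, 0.01   # assumed illustrative values

def rhs(y):
    a, th = y
    return [-0.5*c*a + 0.5*f*np.sin(th),   # da/dtau
            sigma + 0.5*f*np.cos(th)/a]    # dtheta/dtau

a0, th0 = fsolve(rhs, [0.2, 2.0])          # fixed point (a0, theta0)

# Jacobian of the right-hand side, evaluated at the fixed point
J = np.array([[-0.5*c,                    0.5*f*np.cos(th0)],
              [-0.5*f*np.cos(th0)/a0**2, -0.5*f*np.sin(th0)/a0]])
lam = np.linalg.eigvals(J)
print('fixed point      :', a0, th0)
print('eigenvalues of J :', lam)   # complex pair with Re < 0 -> spiral sink
```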
Based on the sketched curves of the solutions z, Φ, and η, we observe that the waves describing these solutions behave periodically, and the number of oscillations and their wavelengths remain roughly unchanged as the values of c j (j = 1, 2, 3) vary. Parts (a) of these figures show an explicitly periodic form for the wave of the solution z. It is notable from parts (b) of these figures that each period of the wave contains a constant number of vibrations that repeat from one period to the next. This is due to the analytical form of the rotation angle Φ, whose behavior has a spinning form. On the other hand, the wave describing the absorber's elongation η experiences rapid oscillations at the beginning of the motion due to the absorber's effect and the damping acting on the investigated dynamical system; it settles down afterwards and vanishes at the end of the time interval, as seen in parts (c) of these figures. According to the calculations of Figures 15 and 16, we conclude that the change of the ω k (k = 1, 2) values has a considerable impact on the behavior of the waves describing the attained solutions. Although the behavior of the solution waves remains periodic, the amplitudes of these waves increase and decrease with increasing ω k , as seen in Figures 15a-f and 16a-f, respectively.

Steady State Solutions The major objective of the present section is to study the oscillations of the examined system in the steady state case. From the equations of system (25), we can obtain both the modified phases θ j (j = 1, 2, 3) and the amplitudes a j in the steady state case. To this end, the left-hand sides of the equations of this system are set to zero; that is, we consider da j /dτ = 0 and dθ j /dτ = 0 [32], to obtain an algebraic system of six equations in the functions θ j and a j . We can then eliminate the adjusted phases θ j from this system to produce three non-linear algebraic equations relating the longitudinal amplitude a 1 , the amplitude of the swing oscillations a 2 , the absorber's amplitude a 3 , and the frequencies represented by the detuning parameters σ j . Stability testing is a crucial aspect of the vibrations in the steady state case. To explore such a circumstance, the behavior of the system in a domain relatively near the fixed points is investigated. Therefore, the substitutions listed below are employed in (25) to achieve this purpose: a 1 = a 10 + a 11 , θ 1 = θ 10 + θ 11 , a 2 = a 20 + a 21 , θ 2 = θ 20 + θ 21 , a 3 = a 30 + a 31 , θ 3 = θ 30 + θ 31 . Here, a j0 and θ j0 (j = 1, 2, 3) denote the steady state solutions, whereas a j1 and θ j1 represent relatively small disturbances of a j0 and θ j0 . As a result of linearization about the fixed points of (25), we get da 11 /dτ = (1/2)( f 1 θ 11 cos θ 10 − c 1 a 11 ), together with five analogous linear equations for the remaining perturbations. Since a j1 and θ j1 (j = 1, 2, 3) are the perturbations of the amplitudes and phases in this linear system, their solutions can be expressed in the exponential form k s e λτ (s = 1, 2, 3, 4, 5, 6), where the k s are constants and λ is the perturbation's eigenvalue.
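For the single-mode stand-in used above, setting da/dτ = dθ/dτ = 0 and eliminating θ (exactly the procedure just described for system (25)) gives a closed-form frequency-response relation; the paper's three-mode system yields three coupled non-linear algebraic equations instead. The values below are illustrative only.

```python
import numpy as np

c, f = 0.02, 0.005                   # assumed damping and forcing
sigma = np.linspace(-0.5, 0.5, 11)   # detuning sweep, as in Figures 17-31

# From sin(theta0) = c*a0/f and cos(theta0) = -2*sigma*a0/f, squaring and
# adding gives  a0**2 * (c**2 + 4*sigma**2) = f**2, i.e. the response curve:
a0 = f/np.sqrt(c**2 + 4*sigma**2)
for s, a in zip(sigma, a0):
    print(f'sigma = {s:+.2f}   a0 = {a:.4f}')
```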
The real parts of the roots of the characteristic equation of (29), given next, should be negative if the steady state solutions are to be asymptotically stable [33,34]; here Γ s (s = 1, 2, . . . , 6) are functions of a j0 , θ j0 , and c j (see Appendix B). The necessary and sufficient conditions of stability for the steady state solutions can then be expressed as follows.

The Stability Analysis In this section, we investigate the model's stability as well as its non-linear evolution using the Routh-Hurwitz non-linear stability approach. It must be remembered that the system consists of a damped spring pendulum connected with a transverse absorber under the action of F(t) and M(t). Several factors have been revealed to play a substantial role in the stability, such as the damping constants c j (j = 1, 2, 3), the frequencies ω k (k = 1, 2), and the detuning parameters σ j . To obtain the stability plots of system (25), the system parameters were varied systematically. The adjusted amplitudes a j (j = 1, 2, 3) are plotted versus time for various parametric regions, together with graphical representations of their characteristics through the phase-plane paths. Curves of the frequency responses of a j versus σ 2 and the system's fixed points are portrayed in Figures 17-31, in which the following data have been taken into account besides the previous ones. It is obvious that c 1 has a bigger role on the curves of the plane a 1 σ 2 than on the frequency response curves of the planes a 2 σ 2 and a 3 σ 2 , which is due to the formulation of system (25). It is noted that all parts of these figures have only one critical fixed point, which means that there is only one region each of stability and instability. Stable fixed points are detected in the range −0.5 ≤ σ 2 ≤ −0.04, while unstable fixed points are found in the range −0.04 < σ 2 ≤ 0.5; the stable and unstable points are represented by solid and dashed curves, respectively. From Figures 26-28, we conclude that there exists one peak point with different locations, and each curve has just one essential fixed point. It is clear that ω 1 has a significant impact on the frequency response curves because the equations of system (25) depend directly on the frequency parameters. Moreover, the stable and unstable regions of the fixed points are listed in Table 1. The above remarks also apply to the curves of Figures 29-31 when the frequency values ω 2 (=3.212, 3.131, 3.084) are considered; the ranges of stable and unstable fixed points are given in Table 2.

Non-Linear Interpretations This section focuses on elucidating the non-linear characteristics of the amplitudes of system (25) as well as evaluating their stability. To this end, the following transformations are taken into account [31,35]. First, (32) was substituted into (25), and then the real and imaginary parts were separated to produce the system (33), where U j = ε u j , V j = ε v j (j = 1, 2, 3). The adjusted amplitudes were then followed over time in various parametric zones, and the amplitudes' properties were depicted in phase-plane curves.
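The stability criterion just stated, that every root of the degree-six characteristic equation must have a negative real part, is straightforward to test numerically; it is equivalent to the Routh-Hurwitz conditions on the coefficients Γ s. The sketch below builds one stable coefficient set by construction (the Appendix-B expressions for the Γ s are not reproduced here) and rejects one that violates the necessary all-positive-coefficients condition.

```python
import numpy as np

def is_asymptotically_stable(G):
    """G = [G1, ..., G6]: coefficients of lambda**6 + G1*lambda**5 + ... + G6.
    Returns True iff all roots have negative real parts (Routh-Hurwitz)."""
    coeffs = np.concatenate([[1.0], np.asarray(G, dtype=float)])
    return bool(np.all(np.roots(coeffs).real < 0.0))

# Stable by construction: polynomial assembled from roots with Re < 0
stable_roots = np.array([-0.5+1j, -0.5-1j, -1.0, -2.0, -0.2+0.3j, -0.2-0.3j])
G_stable = np.poly(stable_roots).real[1:]        # drop the leading 1
print(is_asymptotically_stable(G_stable))        # True

# A negative coefficient violates a necessary Routh-Hurwitz condition
print(is_asymptotically_stable([1, -2, 3, 1, 0.5, 0.1]))   # False
```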
Then, the prior values of the parameters are taken into account to plot Figures 35 and 36, in which ω k (k = 1, 2) takes distinct values. A closer look at Figures 32-34 reveals that the new parameters u 1 and v 1 , besides the phase-plane curves u 1 v 1 , are impacted by the change of c 1 more than by changes of c 2 and c 3 . The time histories of u 2 , v 2 and u 3 , v 3 , in addition to the curves of the planes u 2 v 2 and u 3 v 3 , are impacted by the change of the damping parameters c 2 and c 3 , respectively. The principal reason goes back to the structure of the system of equations (33). The spiral curves are directed from the outside to the inside, reflecting the stability of the studied system. The clear effect of changing the frequency parameters ω 1 and ω 2 on the dynamical behavior of the considered system (33) is shown in Figures 35 and 36. Time-history curves of the new parameters u j (j = 1, 2, 3) and v j are plotted in parts (a), (d), (g) and (b), (e), (h), respectively, whereas the plane curves u j v j are drawn in parts (c), (f), and (i) of Figures 35 and 36. These waves are affected by the various values of the frequency parameters: decaying time-history curves are obtained, and the phase-plane curves spiral toward a single point, indicating that the motion is smooth, steady, and free of disorder.

Conclusions The non-linear motion of a damped spring pendulum with an attached linear damped transverse absorber in the direction of the spring has been investigated. Under the impact of a harmonic force and moment, the motion of the pendulum's hanging point has been constrained to an elliptic path. The EOM have been derived by applying Lagrange's equations of the second kind. The AMS has been used to obtain the approximate solutions up to the third order. Based on the solvability requirements, the ME have been obtained. Three resonance cases of primary external and internal resonance were investigated simultaneously. The RHC was used to investigate and evaluate the stability of the fixed points' locations. The time histories of the achieved solutions, the resonance responses, and the stability and instability zones in the steady state case were drawn and analyzed. The impact of various inputs of the physical parameters on the performance of the system under investigation was examined. This system is of considerable practical importance due to its use in engineering vibration-control applications.
7,161
2021-12-15T00:00:00.000
[ "Engineering", "Physics" ]
Phenomenological implications of the Friedberg-Lee transformation in a neutrino mass model with $\mu\tau$-flavored CP symmetry We propose a neutrino mass model with $\mu\tau$-flavored CP symmetry, where the effective light neutrino Lagrangian enjoys an additional invariance under a Friedberg-Lee (FL) transformation on the left-handed flavor neutrino fields, which leads to a highly predictive and testable scenario. While both types of light neutrino mass ordering, i.e., Normal Ordering (NO) as well as Inverted Ordering (IO), are allowed, the absolute scale of neutrino masses is fixed by the vanishing determinant of the light Majorana neutrino mass matrix $M_\nu$. We show that for both types of mass ordering, whilst the atmospheric mixing angle $\theta_{23}$ is in general nonmaximal ($\theta_{23}\neq \pi/4$), the Dirac CP phase $\delta$ is exactly maximal ($\delta=\pi/2,3\pi/2$) for IO and nearly maximal for NO owing to $\cos\delta\propto \sin\theta_{13}$. For the NO, a very tiny nonvanishing Majorana CP violation might appear through one of the Majorana phases $\beta$; otherwise the model predicts vanishing Majorana CP violation. Thus, although it is difficult to rule out the model from the measurement of $\theta_{23}$ alone, any large deviation of $\delta$ from maximality will surely falsify the scenario. For a comprehensive numerical analysis, besides fitting the neutrino oscillation global fit data, we also present a study of the $\nu_\mu\rightarrow \nu_e$ oscillation which is expected to show up Dirac CP violation in different long baseline experiments. Finally, assuming purely astrophysical sources, we calculate the Ultra High Energy (UHE) neutrino flavor flux ratios at neutrino telescopes, such as IceCube, from which statements on the octant of $\theta_{23}$ could be made in our model.

Introduction In spite of the spectacular developments in the last couple of decades, the theoretical origin of neutrino masses, flavor mixing and CP violation [1] in the leptonic sector remains unresolved. In addition, models with definitive statements about the mass ordering and the absolute scale of the three light neutrino masses are yet to be tested. Experiments so far with solar, atmospheric, reactor and accelerator neutrinos have determined the three mixing angles and the two independent mass-squared differences to reasonably good accuracy, while the current cosmological upper bound on the sum of the three light neutrino masses is fairly robust: Σ i m i < 0.17 eV [2]. The octant of the atmospheric mixing angle θ 23 remains unknown, though the best-fit values are reported as 47.2 • for NO and 48.1 • for IO [3,4]. Therefore, a precise prediction of θ 23 can be used to exclude and discriminate models in the light of forthcoming precision measurements. On the other hand, the current best-fit values of the Dirac CP phase δ are close to 234 • for NO and 278 • for IO. While the possibility of CP conservation (sin δ = 0) is allowed at slightly above 1σ, one of the CP-violating values, δ = π/2, is disfavored at 99% CL. Thus, the remaining CP-violating value δ = 3π/2 and deviations around it still remain potentially viable and tantalizing possibilities. Besides all this, it still remains a baffling conundrum whether the light neutrinos are Dirac or Majorana in nature. To date, despite relentless searches, no experimental signature of neutrinoless double β-decay has been observed.
However, the rapid development of long baseline experiments such as T2K [5] and NOνA [6], and of 0νββ experiments such as KamLAND-Zen [7] and GERDA [8,9], is expected to shed light on the above issues shortly. Thus, from a theoretical perspective, this is a moment of paramount importance in neutrino mass model building, since many of the existing models that have predictions for θ 23 , δ and the neutrino mass ordering are likely to be challenged through precise measurements of these quantities in ongoing and forthcoming experiments. A particular generalization [36,46] of (1.1) is CP µτ θ , which is implemented in the neutrino Majorana mass term with the field transformation (1.4). In the neutrino flavor space, G µτ θ has the generic form (1.5), with 'θ' being an arbitrary mixing angle that mixes the ν Lµ and ν Lτ flavor fields. The negative signs in (1.5) comply with the PDG convention. It is worth noticing that θ = π/2 reduces the mixing symmetry G µτ θ lm to the interchange symmetry G µτ lm , and any nonzero value of θ − π/2 has the potential to account for the deviation from CP µτ . Eq. (1.5) is a special case of Eq. 8 of Ref. [47] with α = π, β = −π and γ = 0. Though, in general, CP symmetries are highly predictive in terms of mixing angles and CP-violating phases, in most cases they lack information regarding the light neutrino masses and mass ordering unless one invokes additional flavor symmetries to reduce the number of parameters [12], e.g., by means of 'texture zeros' in the light neutrino mass matrix [32,43]. In this work, to have testable predictions in each sector (masses as well as mixing) without any additional flavor symmetry, in combination with (1.4) we consider a Friedberg-Lee (FL) transformation [48-53], given in (1.6). This leads to the conditions (1.7), where η l (l = e, µ, τ) are three arbitrary complex numbers, η = (η e η µ η τ ) T and ξ is a fermionic Grassmann field [48]. Note that (1.6) is a simple CP generalization of the ordinary (general) FL transformation (also known as the twisted FL symmetry [54,55]). We would like to stress that in this work we mainly focus on the effective field transformation (1.6) and its low energy phenomenological consequences, without an explicit top-down model realization as in the cases of CP combined with flavor symmetries [30,31,34]. Nevertheless, the generalized µτ and FL symmetries could arise from a discrete flavor symmetry such as D 4 [56] and a singlet scalar extension of the Standard Model [51], respectively. Since the residual symmetries in the charged lepton sector and the neutrino sector decide the low energy predictions for the neutrino parameters, from the phenomenological point of view it is a challenging task to identify proper residual symmetries which are predictive while being consistent with the extant neutrino data. Individually, flavor symmetries, CP symmetries or FL symmetries would not suffice to lead to residual symmetries which are predictive in the mass as well as the mixing sector. That is why certain combinations of these symmetries are always attractive, at least at the phenomenological level. For example, various models discussed in [12] deal with a combined theory of CP and flavor at high energy as well as at low energy (after spontaneous symmetry breaking, the low energy effective symmetries are still a combined theory of CP and flavor). Ref.
[32,43] combines a U(1) global symmetry and its discrete subgroups such as Z 8 with µτ reflection to obtain texture zeros in the light neutrino mass matrices, so that the model can predict neutrino parameters in both sectors, masses as well as mixing. Due to the blindness of the FL symmetry in the mixing sector, a combination of µτ symmetry with FL symmetry has been proposed in [54]. Similar to these models, in our work the FL symmetry can be thought of as a symmetry complementary to the generalized µτ reflection and vice versa, rather than treating either of them (FL or general µτ) as a mere expedient partner of the other. Amongst the many interesting results (which we shall discuss in the next section) that emerge as a consequence of the transformation in (1.6), it is worthwhile to stress two important departures from CP µτ . • First of all, as mentioned earlier, G µτ θ lm in (1.5) is a µτ mixing symmetry. It reduces to 'µτ-interchange' in the limit θ → π/2, which we refer to in the rest of this paper as the 'µτ-interchange limit' (MTIL). It is now trivial to anticipate that the mixing parameter θ (≠ π/2) is responsible for the departure from maximal δ and θ 23 . However, we show in this paper that, despite the generalization from CP µτ to CP µτ θ , the additionally imposed FL symmetry only allows a tiny deviation from the maximality of δ in this model. • The first condition in (1.7) is satisfied for a nontrivial eigenvector η if det M ν = 0, which means at least one of the light neutrino masses is zero. Thus, by construction, this model predicts the absolute light neutrino mass scale. For a consistent phenomenological analysis, apart from fitting the neutrino oscillation global-fit data, we study here the impact of the CP µτ θ symmetry on ν µ → ν e oscillations in long baseline experiments such as NOνA, T2K and DUNE. In addition, in the context of the recent discovery of high energy neutrino events at IceCube [57-61], assuming the high energy neutrinos originate purely from distant astrophysical sources, we also calculate the flux ratios which will be measured with enhanced statistics at advanced neutrino telescopes (e.g., IceCube and ANTARES [62]) in the near future. These calculations show that any potential deviation from the democratic 1:1:1 distribution of flux ratios [63-66] can lead to predictions on the octant of θ 23 in our model. The rest of the paper is organized as follows. Sec. 2 contains the most general parametrization of M ν that is invariant under (1.6), thereby satisfying the conditions of (1.7). In this parametrization, c θ ≡ cos θ, s θ ≡ sin θ and t θ/2 = tan(θ/2). For simplicity, we restrict ourselves to the reasonable choice that the η l are a priori arbitrary complex numbers with the same phase, so that the ratios η 1 /η 2 , η 2 /η 3 and η 3 /η 1 are all real. In (2.1), there are five real free parameters: a 1 , a 2 , c 1 , η 1 /η 2 and θ, which can be well constrained by the existing neutrino oscillation global-fit data. It is to be noted that (2.1) does not contain the parameter η 3 , owing to a consistency relation among the η l . The mass matrix M ν in (2.1) can be diagonalized by a similarity transformation with a unitary matrix U, where m i (i = 1, 2, 3) are real and we assume that m i ≥ 0. Without any loss of generality, we work in the diagonal basis of the charged leptons, so that U can be related to the PMNS mixing matrix U PMNS as in (2.3), up to an unphysical diagonal phase matrix, with c ij ≡ cos θ ij , s ij ≡ sin θ ij and the mixing angles θ ij ∈ [0, π/2]. We work within the PDG convention [67] but denote our Majorana phases by α and β.
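As a numerical illustration of the statements above, the sketch below builds U in the PDG convention with Majorana phases α and β, forms M ν = U* diag(m 1 , m 2 , m 3 ) U † (so that U T M ν U is diagonal), and verifies the FL consequence for NO: with m 1 = 0 the determinant vanishes and the zero mode is the first column of U. The angle and mass inputs are illustrative best-fit-like values, not the paper's fit output.

```python
import numpy as np

t12, t13, t23 = np.radians([33.6, 8.5, 47.2])   # assumed mixing angles
delta = np.radians(270.0)                        # near-maximal Dirac phase
alpha, beta = 0.0, 2*delta - np.pi               # one allowed Majorana-phase pair

s12, c12 = np.sin(t12), np.cos(t12)
s13, c13 = np.sin(t13), np.cos(t13)
s23, c23 = np.sin(t23), np.cos(t23)
e = np.exp(1j*delta)

U = np.array([
    [c12*c13,                   s12*c13,                  s13*np.conj(e)],
    [-s12*c23 - c12*s23*s13*e,  c12*c23 - s12*s23*s13*e,  s23*c13],
    [ s12*s23 - c12*c23*s13*e, -c12*s23 - s12*c23*s13*e,  c23*c13],
]) @ np.diag([1.0, np.exp(1j*alpha/2), np.exp(1j*beta/2)])

m = np.array([0.0, 8.6e-3, 50.3e-3])   # eV: m1 = 0 (NO), ~sqrt(dm2_21), ~sqrt(dm2_31)
M_nu = U.conj() @ np.diag(m) @ U.conj().T   # then U^T M_nu U = diag(m)

print('|det M_nu|      =', abs(np.linalg.det(M_nu)))   # ~0: one massless state
print('|M_nu @ U[:,0]| =', np.abs(M_nu @ U[:, 0]))     # first column is the zero mode
print('|M_ee|          =', abs(M_nu[0, 0]), 'eV')      # relevant for 0vbb below
```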
CP violation enters through nontrivial values of the Dirac phase δ and of the Majorana phases α, β, where δ, α, β ∈ [0, 2π]. Each of the sign factors appearing here is either +1 or −1, and therefore (3.1) can be written in the following explicit form. Eq. (3.2) is equivalent to nine equations for the three rows. It is useful to construct the following two rephasing-invariant quantities, independent of the unphysical phases, for calculating the Majorana phases. From the first row of (3.3), we get different expressions for I 1,2 . Using these expressions in (3.4) and (3.5), we find the relations c 12 s 12 c 2 13 e −iα/2 = d 1 d 2 c 12 s 12 c 2 13 e iα/2 (3.6) and (3.7). From (3.6) and (3.7), we find that either α = 0 or α = π, and either β = 2δ or β = 2δ − π. Therefore, there are four possible distinct pairs of values for the Majorana phases. From the third row of (3.3), taking the absolute square, we obtain (3.10). Similarly, the absolute square of the second relation in the third row of (3.3) is devoid of the unphysical phase difference (φ 2 − φ 3 ), and we get (3.11). Note that both relations, i.e., (3.10) and (3.11), reduce to the co-bimaximal prediction of CP µτ in the MTIL, as expected. We also stress that the relations (3.8), (3.10) and (3.11) hold irrespective of the neutrino mass ordering. Now, due to FL invariance, M ν has a vanishing eigenvalue with corresponding normalized eigenvector v given below, where γ is an arbitrary phase signifying that the normalized eigenvector is unique up to an overall phase. If the zero eigenvalue is associated with m 1 = 0 (m 3 = 0), we discover additional consequences for the normal (inverted) ordering.

Normal ordering Here, v is associated with the first column of the PMNS matrix. Equating v with the first column of U in (2.3), we get (3.14) and (3.15). Note that (3.14) and (3.15) together imply (3.16). Taking the product of (3.14) with the complex conjugate of (3.15) and taking its imaginary part, we obtain a further relation. Eliminating sin 2 (φ 2 − φ 3 ) and using (3.11), we finally get (3.18). Using (3.16) and eliminating cos(φ 2 − φ 3 ) from (3.10), we obtain (3.19). As we shall see in the numerical analysis in the next section, though in general cos δ ≠ 0 for NO, the numerically allowed range of δ is very close to 3π/2, lying in the narrow interval 269.6 • − 270.4 • (Fig. 1). Since the possibility of δ = π/2 is excluded at more than 99% CL, by maximal CP violation we refer only to δ = 3π/2. Integrating the probability density of δ over this interval gives ∫ PDF(δ) dδ = 0.795; thus, over a large number of random trials (we choose that number to be 10 6 ), there is an 80% probability that δ will lie in the range 270 • ± 0.2 • .

Inverted ordering In this case, v is associated with the third column of the PMNS matrix. Equating v with the third column of U in (2.3), we get (3.20). Note that (3.21) and (3.22) together imply a relation which is consistent with (3.10). Note that, since the unphysical phase difference (φ 2 − φ 3 ) = π, it follows from (3.11) that the Dirac CP violation is maximal irrespective of the value of θ 23 , i.e., cos δ = 0. Clearly, since the Dirac CP phase deviates slightly from its maximal value only for the NO, and both types of mass ordering in this model predict arbitrary nonmaximality in θ 23 , it is difficult to draw conclusions on the mass ordering from the measurement of these two parameters alone. Any large nonmaximality in δ will rule out CP µτ as well as this model (CP µτ θ + FL); however, if the experiments favour nonmaximal θ 23 along with a maximal value of δ, the latter model will survive while the former will be in tension.
One might wonder whether the minimal seesaw, which also leads to a vanishing eigenvalue, would lead to the same predictions as above when combined with the general µτ symmetry. Though Eq. 3.11 holds for both cases (the combination of the generalized µτ reflection symmetry with the minimal seesaw or with the FL symmetry), a closer inspection of Eq. 3.18 reveals that, in general, the predictions for cos δ need not be the same. This is because in each case the model parameters are different and will be constrained differently by the neutrino oscillation data.

Parameter Estimation We present a comprehensive numerical analysis to demonstrate the phenomenological viability of our proposal and to explore its implications for neutrino phenomenology in general. It is organized as follows. We utilize the 3σ ranges of the globally fitted neutrino oscillation data [4], together with the upper bound of 0.17 eV [2] on the sum of the light neutrino masses from PLANCK and other cosmological observations, as listed in Table 1. The allowed ranges of the parameters of M ν are tabulated in Table 2. Subsequently, we discuss the predictions of our model for neutrinoless double beta decay, CP asymmetry in ν µ → ν e oscillations, and flavor flux ratios at neutrino telescopes in three separate subsections.

Neutrinoless double beta (0νββ) decay process For certain nuclei such as Ge-76, it is energetically favorable to undergo a double beta decay (2νββ) instead of a single β-decay, emitting two electrons and two antineutrinos. Moreover, if the neutrino is a Majorana particle, the two neutrinos can annihilate each other to give rise to a neutrinoless double beta decay (0νββ), which clearly violates lepton number by 2 units. Observation of such a decay will firmly establish the Majorana nature of the neutrinos. The inverse half-life corresponding to the above decay is given by 1/T 0ν 1/2 = G 0ν |M| 2 |M ee /m e | 2 , where G 0ν denotes the two-body phase space factor, M is the nuclear matrix element (NME), m e is the mass of the electron and M ee is the (1,1) element of the effective light neutrino mass matrix M ν . Using the PDG parametrization convention for U PMNS , M ee can be written as M ee = c 2 12 c 2 13 m 1 + s 2 12 c 2 13 m 2 e iα + s 2 13 m 3 e i(β−2δ) . For the normal ordering (m 1 = 0), the four allowed pairs of Majorana phases give, in particular, (iv) α = π, β = 2δ − π ⇒ M ee = −s 2 12 c 2 13 m 2 − s 2 13 m 3 . Since the observations give upper bounds on |M ee |, cases (i) and (iv) give identical predictions, as can be clearly seen from the upper left and lower right panels of Fig. 2. Similar situations occur for cases (ii) (upper right panel) and (iii) (lower left panel) in Fig. 2. For the inverted ordering, δ = π/2 or 3π/2, and m 3 = 0. Here, due to the latter condition, the expression (4.3) becomes independent of β and reduces to two different possibilities: (a) α = 0, β = 0, π ⇒ M ee = c 2 12 c 2 13 m 1 + s 2 12 c 2 13 m 2 ; (b) α = π, β = 0, π ⇒ M ee = c 2 12 c 2 13 m 1 − s 2 12 c 2 13 m 2 . The plots of |M ee | versus the sum of the light neutrino masses Σ i m i for both NO and IO are displayed in Fig. 2. Several upper limits on |M ee | from various ongoing and upcoming experiments are also shown. It is evident from Fig. 2 that |M ee | in each plot leads to an upper limit which is below the sensitivity reach of the GERDA phase-II experimental data. The upper bounds on |M ee | from experiments such as LEGEND-200 (40 meV), LEGEND-1K (17 meV) and nEXO (9 meV) [69], shown in Fig. 2, can probe our model better.
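The |M ee | expression quoted above is easy to evaluate for the model's discrete phase choices. The function below implements (4.3) directly; the mass and angle inputs are illustrative (with m 1 = 0 for NO and m 3 = 0 for IO, as the model requires).

```python
import numpy as np

def M_ee(m1, m2, m3, t12_deg=33.6, t13_deg=8.5,
         alpha=0.0, beta=0.0, delta=1.5*np.pi):
    """|M_ee| = |c12^2 c13^2 m1 + s12^2 c13^2 m2 e^{i a} + s13^2 m3 e^{i(b-2d)}|."""
    s12, c12 = np.sin(np.radians(t12_deg)), np.cos(np.radians(t12_deg))
    s13, c13 = np.sin(np.radians(t13_deg)), np.cos(np.radians(t13_deg))
    return abs(c12**2*c13**2*m1
               + s12**2*c13**2*m2*np.exp(1j*alpha)
               + s13**2*m3*np.exp(1j*(beta - 2*delta)))

# NO (m1 = 0), case (i): alpha = 0, beta = 2*delta
print('NO |M_ee| =', M_ee(0.0, 8.6e-3, 50.3e-3, alpha=0.0, beta=3*np.pi), 'eV')
# IO (m3 = 0), case (a): alpha = 0 (result independent of beta)
print('IO |M_ee| =', M_ee(49.2e-3, 50.0e-3, 0.0, alpha=0.0), 'eV')
```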
Note that, for each case, the entire parameter space corresponding to the inverted mass ordering is likely to be ruled out if nEXO does not observe any 0νββ signal over its entire reach.

Effect of CP asymmetry in neutrino oscillations In this section, we work out the effect of the presence of the leptonic Dirac CP violation δ in neutrino oscillation experiments. The phase δ appears in the asymmetry parameter A lm , defined below, where l, m = (e, µ, τ) are flavor indices and the P's are transition probabilities. First, let us consider oscillation in vacuum. The ν µ → ν e transition probability is given by P µe ≡ P(ν µ → ν e ) = P atm + P sol + 2 √(P atm P sol ) cos(∆ 32 + δ), (4.5) where ∆ ij = ∆m 2 ij L/4E is the kinematic phase factor (L being the baseline length and E the beam energy) and P atm , P sol are defined as follows. Here a = G F N e /√2, with G F the Fermi constant and N e the number density of electrons in the medium of propagation, so that a takes into account the matter effects in neutrino propagation through the Earth. An approximate value of a for the Earth is (3500 km) −1 [47,74]. In the limit a → 0, (4.5) reduces to the oscillation probability in vacuum. With this, the CP asymmetry parameter follows, where δ is given by (3.18) and (3.24) for NO and IO, respectively. Fig. 3 represents the variation of P µe and A µe with the baseline length L for IO, i.e., for δ = 3π/2, while Fig. 5 gives the same plots for δ given by (3.18), i.e., for NO. The baseline lengths of T2K, NOνA and DUNE are indicated in these figures by vertical lines. In Figs. 4 and 6, the CP asymmetry A µe is plotted against the beam energy E for the same three experiments, for IO and NO respectively. Figure 5. Plots of the transition probability (P µe ) and the CP asymmetry parameter (A µe ) versus baseline length L for NO (E = 1 GeV). The bands are due to the 3σ ranges of the mixing angles and also the ranges of the parameters 79.6 • < θ < 101.6 • and 1.79 < |η 1 /η 2 | < 2.11. In this case, δ is not fixed, but varies over a range predicted from (3.18) with the same ranges of the mixing angles and the model parameters θ and η 1 /η 2 . The three vertical dashed lines and the horizontal dotted line specify the same as in Fig. 3.
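For orientation, the sketch below evaluates the approximate P µe of (4.5) with the matter term a, using explicit P atm and P sol forms common in the literature (a Cervera/Freund-type expansion); since the paper's own intermediate expressions are not reproduced here, this implementation should be read as an assumption-laden stand-in. For antineutrinos, δ → −δ and a → −a, which is what drives the asymmetry A µe .

```python
import numpy as np

def P_mue(L_km, E_GeV, delta, antinu=False, t12=0.5866, t13=0.1484,
          t23=0.8238, dm21=7.4e-5, dm31=2.5e-3, a_km=1/3500.0):
    """Approximate nu_mu -> nu_e probability in constant-density matter."""
    if antinu:
        delta, a_km = -delta, -a_km
    D21 = 1.267*dm21*L_km/E_GeV   # Delta_ij = dm2_ij[eV^2] L[km]/4E[GeV], in rad
    D31 = 1.267*dm31*L_km/E_GeV
    aL = a_km*L_km
    sqrtPatm = np.sin(t23)*np.sin(2*t13)*np.sin(D31 - aL)/(D31 - aL)*D31
    sqrtPsol = np.cos(t23)*np.sin(2*t12)*np.sin(aL)/aL*D21
    # Delta_32 = Delta_31 - Delta_21 in the interference term of (4.5)
    return sqrtPatm**2 + sqrtPsol**2 + 2*sqrtPatm*sqrtPsol*np.cos(D31 - D21 + delta)

def A_mue(L_km, E_GeV, delta):
    P, Pbar = P_mue(L_km, E_GeV, delta), P_mue(L_km, E_GeV, delta, antinu=True)
    return (P - Pbar)/(P + Pbar)

delta = 1.5*np.pi   # the model's (near-)maximal prediction
for name, L, E in [('T2K', 295, 0.6), ('NOvA', 810, 2.0), ('DUNE', 1300, 2.5)]:
    print(f'{name:5s} P_mue = {P_mue(L, E, delta):.4f}  A_mue = {A_mue(L, E, delta):+.3f}')
```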
Octant of θ 23 from flavor flux measurement at neutrino telescopes Before we discuss the predictions of our model based on the flavor flux ratios, statements on which could be made from enhanced statistics at neutrino telescopes (e.g., IceCube) and fits like [78], we first lay out a short summary of the subject as a necessary prerequisite. The dominant sources of UHE cosmic neutrinos are pp (hadro-nuclear) collisions in cosmic-ray reservoirs such as galaxy clusters, and pγ (photo-hadronic) collisions in cosmic-ray accelerators [79,80] such as gamma-ray bursts, active galactic nuclei and blazars. In pp collisions, protons in the TeV−PeV range produce neutrinos via the decays π + → µ + ν µ , π − → µ − ν̄ µ , µ + → e + ν e ν̄ µ and µ − → e − ν̄ e ν µ . Therefore, the normalized flux distributions over flavor at the source are in the ratio 1 : 2 : 0 for the e, µ and τ flavors [65], where the superscript S denotes 'source'. On the other hand, pγ collisions involve relatively less energetic γ-rays (GeV-10 2 GeV range). Therefore, the center-of-mass energy of the γp system is such that it can only produce γp → ∆ + → π + n, which in turn gives rise to the decays π + → µ + ν µ and µ + → e + ν e ν̄ µ . The corresponding normalized flux distributions over flavor follow accordingly. In either case, if we take φ S l = φ S νl + φ S ν̄l with l = e, µ, τ, then, as neutrino oscillations change the flavor distributions from source (S) to telescope (T) [81], the flux reaching the telescope will be given by (4.12). Since the source-to-telescope distance is much greater than the oscillation length, the flavor oscillation probability averaged over many oscillations is given by (4.13). Thus the flux reaching the telescope is given by (4.14), where φ 0 is the overall flux normalization. The unitarity of the PMNS matrix implies (4.15), where ∆ i = |U µi | 2 − |U τi | 2 . The existence of exact µτ (anti)symmetry therefore dictates that ∆ i = 0, and φ T e = φ T µ = φ T τ . With the above background, one can define flavor flux ratios R l (l = e, µ, τ) at the neutrino telescope as in (4.16), where l, m = e, µ, τ and U is given in (2.3). Each R l depends on all three mixing angles and cos δ. For NO, θ 23 and cos δ are given by (3.19) and (3.18), while for IO the corresponding quantities are given by (3.23) and (3.24), respectively.
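Equations (4.13)-(4.16) translate directly into a few lines of code: with averaged probabilities P lm = Σ i |U li | 2 |U mi | 2 , the telescope flux is φ T = P φ S and R l = φ T l /(φ T total − φ T l ). The sketch below uses a pp-source composition φ S ∝ (1 : 2 : 0) and illustrative oscillation parameters; the Majorana phases drop out of |U| 2 . It also confirms the µτ-symmetric benchmark R l = 1/2.

```python
import numpy as np

def pmns(t12, t13, t23, delta):
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    e = np.exp(1j*delta)
    return np.array([
        [c12*c13,                   s12*c13,                  s13*np.conj(e)],
        [-s12*c23 - c12*s23*s13*e,  c12*c23 - s12*s23*s13*e,  s23*c13],
        [ s12*s23 - c12*c23*s13*e, -c12*s23 - s12*c23*s13*e,  c23*c13]])

def flux_ratios(U, phi_S=(1.0, 2.0, 0.0)):
    W = np.abs(U)**2
    P_avg = W @ W.T                      # averaged oscillation probabilities (4.13)
    phi_T = P_avg @ np.asarray(phi_S)    # flux at the telescope (4.14)
    return phi_T/(phi_T.sum() - phi_T)   # R_e, R_mu, R_tau as in (4.16)

# Nonmaximal theta_23 with delta = 3pi/2 (the model's prediction):
U = pmns(*np.radians([33.6, 8.5, 47.2]), 1.5*np.pi)
print('R_e, R_mu, R_tau      :', flux_ratios(U))
# Exact mu-tau symmetric benchmark (theta_23 = 45 deg, cos delta = 0): R_l = 1/2
print('mu-tau symmetric case :', flux_ratios(pmns(*np.radians([33.6, 8.5, 45.0]), 1.5*np.pi)))
```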
For both types of ordering, we display in Fig. 7 the variation of R e,µ,τ with θ over its phenomenologically allowed ranges (Table 2), using the exact expressions in (4.16). For NO, θ 23 can be eliminated in favor of θ and η 1 /η 2 . Keeping the latter fixed at a value of 1.5, we show in Fig. 7 (left panel) the contour corresponding to the best-fit values of θ 12 and θ 13 , while the bands arise when θ 12 and θ 13 are allowed to vary in their current 3σ ranges. It should be emphasized that the contours corresponding to cos δ > 0 and cos δ < 0 are practically indistinguishable, and therefore we show the contours and bands only for the case cos δ > 0. Next, in the case of IO, θ 23 can be eliminated in favor of θ only. The resulting variations of R e,µ,τ with θ are shown in the right panel of Fig. 7. In generating these plots, the mixing angles θ 12 and θ 13 are again allowed to vary in their experimental 3σ ranges; the contours within the bands represent the case when θ 12 and θ 13 are kept fixed at their best-fit values. Unlike NO, the expressions for R l in the case of IO are relatively simple and can be used to explain the nature of the plots. The expressions for R e,µ,τ for IO are given in (4.17), where we have used (3.24) and (3.23) and neglected terms of O(s 2 13 ). It is evident from the approximate expressions (4.17) that in the exact µτ interchange limit θ = π/2, all the flavor flux ratios converge to the value 1/2. It is clear from the figure, as well as from the approximate expression for R e , that for R e < 1/2 (R e > 1/2) we have θ < π/2 (θ > π/2). Since (3.23) implies 2θ 23 = π − θ, the observed value of R e will give a definite value of θ 23 . In particular, θ > π/2 implies θ 23 < π/4 and vice versa. A similar conclusion can be drawn from the observed value of R µ . Although the expression for R µ in (4.17) is quadratic in cos θ, only one of the roots of this equation belongs to the numerically allowed range of θ (Table 2). However, a definite observational value of R τ cannot unambiguously predict the value of θ. This is because of the quadratic dependence of R τ on c θ , which is clearly visible in Fig. 7, specifically for θ < π/2. For consistency, the unique value of θ determined from a future precision measurement of R e (or R µ ) leads to a theoretical prediction of the ranges of R µ (or R e ) and R τ , which should in turn match the observed values of R µ (or R e ) and R τ . Conversely, if θ 23 is measured with significant precision in a complementary experiment (e.g., long baseline experiments), the range of each R l can be uniquely predicted for all l, which can again be compared with the observations at IceCube. The horizontal axes in both plots correspond to the numerically obtained ranges of θ in Table 2, which differ between NO and IO. For the NO case, η 1 /η 2 is fixed at 1.0.

Summary and conclusion In this paper, we propose an invariance of the low energy neutrino Majorana mass term under a mixed µτ-flavored CP symmetry CP µτ θ compounded with a generic Friedberg-Lee (FL) transformation on the left-handed flavor neutrino fields. Both types of mass ordering are allowed, with a nondegenerate neutrino mass spectrum and a vanishing value for the smallest neutrino mass as a direct consequence of FL invariance. While the atmospheric mixing angle θ 23 is in general nonmaximal (θ 23 ≠ π/4), the Dirac CP phase δ is exactly maximal (δ = π/2, 3π/2) for IO and nearly maximal for NO owing to cos δ ∝ sin θ 13 , though the deviation from maximality does not exceed 0.4 • on either side of the maximal value δ = 3π/2. It also turns out that one of the Majorana phases, α, is restricted to lie at its CP conserving values, while the other, β, admits a simple linear relation with δ, leading to a tiny Majorana CP violation. For the IO, θ 23 is in general nonmaximal but δ is maximal irrespective of the value of θ 23 . For the NO, the Majorana CP violation sneaking in through the Majorana phase β is numerically insignificant, so that the model essentially predicts vanishing Majorana CP violation. Evidently, any large departure of δ from 3π/2 will exclude our model. After fitting the neutrino oscillation global fit data, we also consider a numerical study of ν µ → ν e oscillations, which are expected to show up Dirac CP violation in different long baseline experiments. Finally, assuming purely astrophysical sources, we calculate the Ultra High Energy (UHE) neutrino flavor flux ratios at neutrino telescopes such as IceCube. From this we comment on the predictability of the octant of θ 23 in our model.
7,022.4
2018-10-12T00:00:00.000
[ "Physics" ]
Effect of Lithium Chloride on the Fibre Length Distribution, Processing Temperature and the Rheological Properties of High-Yield-Pulp-Fibre-Reinforced Modified Bio-Based Polyamide 11 Composite The aim of this work was to investigate the effect of lithium chloride (LiCl) on the fibre length distribution, melting temperature and rheological characteristics of a high yield pulp fibre reinforced polyamide biocomposite. The inorganic salt lithium chloride (LiCl) was used to decrease the melting and processing temperature of bio-based polyamide 11. The extrusion method and the Brabender mixer approach were used to carry out the compounding process. The densities and fibre contents were found to increase after processing with both compounding methods. The HYP fibre length distribution analysis, performed using the FQA equipment, showed a significant fibre-length reduction after processing by both techniques. The rheological properties of HYP-reinforced neat and modified bio-based polyamide 11 "PA11" (HYP/PA11) composites were investigated using a capillary rheometer. The rheological tests were performed as a function of the shear rate for different temperature conditions. The low-temperature compounding had higher shear viscosity; this was because during the process the temperature was low and the mixing and melting were induced by the high shear rate created during the compounding process. Experimental test results using the extrusion process showed a steep decrease in shear viscosity with increasing shear rate; this melt-flow characteristic corresponds to shear-thinning behavior in HYP/PA11, and the steep decrease in melt viscosity can be associated with the hydrolysis reaction of nylon at high temperature when the pulp fibre moisture content is high. In addition to lowering the processing temperature, the addition of LiCl also modified the melt viscosity of the biocomposite.

Introduction Short-fibre reinforced polymer composites are extensively used in manufacturing industries due to their light weight and improved mechanical properties [1] [2]. Hence, HYP has been used not only for its low lignin content, but also for its potential thermal stability and its strong adhesion when bonded with high-temperature engineering thermoplastic polymers [3] [4] [5] [6]. Various experimental studies have investigated the effect of flexibility on fluid viscosity. They concurred that the more flexible the fibres are, the more pronounced their effect on the rheological characteristics is [7] [8] [9]. A recent study on the effect of fibre-length distribution on the rheological behavior of a castor-oil composite showed that at high fibre length, the shear viscosity becomes more dependent on the shear rate [10]. This behavior is due to elastic deformation of the fibres [10]. Recently, various authors have investigated the effect of fibre content on polymer melt rheology [11] [12] [13].
These studies showed a significant increase in shear viscosity with increased fibre loading at low shear rates, but only a small increase in viscosity at high shear rates [12] [13]. Another similar study on long-fibre polypropylene observed an increase in shear viscosity with increased fibre content and fibre length [13]. However, this viscosity rise was very small, which the authors attributed to high shear rates and fibre breakage during processing [13] [14]. Non-Newtonian fluid characteristics such as shear thinning were also observed in all the studies mentioned above. There is only a very limited literature devoted to experimental studies of the rheology of pulp-fibre-reinforced polymer composites, due to the complex nature of these materials and the difficulties encountered during their processing and rheological characterization [13] [14] [15] [16] [17]. The processing technique and conditions have a significant influence on the rheological and overall properties of pulp-fibre-reinforced polymer composites because they dictate the degree of dispersion and distribution of the fibre in the polymer matrix, and a low processing temperature is required in order to avoid thermal degradation [18] [19] [20] [21]. Compared to other natural fibres, HYP is more thermally stable (under 180˚C) in the presence of high-melting-temperature engineering thermoplastics such as PA11, PA6, and PA66. The principal objectives of the study described in this chapter were to determine the effect of the addition of the inorganic salt lithium chloride (LiCl) to the bio-based polyamide 11, and the characteristics of the modified bio-based polyamide 11 in the presence of high yield pulp (HYP) fibre. The HYP fibre content, the length distribution, and the density of the composites were measured and analyzed for both processing techniques in order to investigate the effect of LiCl on the composite components. Finally, the rheological results obtained using the Brabender mixer technique and the conical twin-screw extruder, respectively, were determined and compared.

Materials The matrix biopolymer, bio-based polyamide 11 (density 1.03, MFI 11), was supplied by Arkema (France). Aspen high yield pulp (HYP) fibres were supplied by Tembec (Montreal, QC). The HYP is the type used in wood-free printing, in writing-paper grades and in multiple-coated folding-board grades; the fibre length is 0.230 to 0.85 mm.

Composites Preparation The composites were prepared using a conical twin-screw extruder and the Brabender mixer technique. In both mixing processes, the high yield pulp (HYP) fibre was dried at 80˚C for 6 hours, then added to the corresponding bio-based polyamide PA11 and well mixed before the combination was introduced into the extruder. The average temperature of the barrel was 200˚C. Figure 1 represents the different zones of the conical twin-screw extruder. In addition, in the Brabender mixer process, different lithium chloride (LiCl) contents were added to the bio-based polyamide 11 at the corresponding process temperature prior to adding the pulp fibre.

Effect of Processing Conditions Many processing parameters affect the properties of the final products. For extrusion, the temperature profile affects fibre degradation. In addition, the screw speed and feeding rate change the fibre length, distribution, and orientation. The mechanical properties reflect all these changes, and the processing parameters are optimized to obtain the best properties. Table 1 presents the processing parameters for HYP/PA11 used in this study.
However, the Brabender mixer technique was used as the principal compounding process in this study. The inorganic salt lithium chloride (LiCl) was added to bio-based polyamide 11 in order to decrease its melting temperature and consequently avoid fibre degradation and burning. Different lithium chloride contents were used to reduce the melting temperature of polyamide 11 with the Brabender mixer technique.

Fibre Content and Length Distribution Analysis after Compounding The composite samples were cut into small pieces and immersed in formic acid for three days. The bio-based PA11 was dissolved by the formic acid and the HYP was left behind. The HYP was filtered and washed with formic acid, then dried in a vacuum oven for four hours. By measuring the weights of the composite and the pulp fibre, we could calculate the actual fibre content. The HYP fibre length was measured with the Fiber Quality Analyzer (FQA). The HYP was diluted with D.I. water. The diluted HYP fibre entered a thin planar channel. This channel helped to gently orient the fibre two-dimensionally, so that the fibre could be fully viewed by the camera. The picture taken by the camera was then analyzed by the software to give the HYP fibre length distribution.

Actual Density Measurement The density of the polyamide 11 reinforced HYP fibre composites was determined using the ASTM D792 technique. The samples were first weighed both in air and in water, and the density was then calculated by ρ = a ρ water /(a − w), where ρ is the sample density in g/cm 3 , a is the sample weight in air in g, w is the sample weight in water in g, and ρ water is the density of the water in g/cm 3 (a short calculation sketch is given at the end of this section).

Differential Scanning Calorimetry (DSC) The melting temperature and crystallization behavior of the high yield pulp fibre reinforced bio-based polyamide 11 composites were investigated using a TA Instruments Q1000 differential scanning calorimeter (DSC) attached to a cooling system under a nitrogen atmosphere. The DSC instrument was run from 45˚C to 250˚C with a heating rate of 10˚C/min. The sample weight was about 5 mg. The specimens were sealed in aluminum pans by pressing, and the prepared samples were placed in the DSC furnace with an empty reference pan. The heat flow rate as a function of temperature was recorded automatically. The melting temperature was identified as the peak point of the DSC curves. Knowing the melting behavior of the polymer within a composite system assists in selecting a suitable temperature profile for the compounding process when the fibres and matrix are compounded to produce the green composite.

Rheological Properties Measurement The rheological measurements of the composites' melt-flow properties were carried out in a twin-bore Rosand capillary rheometer, model RH2000. (The standard RH2000 range supports temperatures from −40˚C to 500˚C; the standard maximum applied force is 12 kN.) The composite samples for testing were cut into very small pieces, then placed inside the barrel and forced down into the capillary with the plunger attached to the moving cross-head. Representative steady-shear viscosity versus shear rate is presented in the figures below for HYP/PA11, which was processed at the average extrusion temperature of 200˚C. The viscosity of the sample was obtained from steady-shear measurements for different temperature profiles, with the shear rate ranging from 50 to 5000 s −1 . The rheological viscosity data presented in this chapter thus represent an average of three measurements.
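As a quick check of the density procedure above, the ASTM D792 buoyancy relation (the exact equation was lost in the source, but the symbol definitions fix its form) can be scripted as follows; the sample weights are invented for illustration.

```python
def density_astm_d792(a_g, w_g, rho_water=0.9975):
    """Water-displacement density, rho = a*rho_water/(a - w), in g/cm^3.
    a_g: sample weight in air [g]; w_g: sample weight in water [g]."""
    if a_g <= w_g:
        raise ValueError('weight in air must exceed weight in water')
    return a_g*rho_water/(a_g - w_g)

# Hypothetical weighing of a small HYP/PA11 specimen near 23 C:
print(density_astm_d792(5.20, 0.63))   # ~1.13 g/cm^3, plausible for HYP/PA11
```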
Effect of Lowering the Processing Temperature on the Pulp Fibre Distribution and the Bio-Based Polyamide Density after Processing To avoid degradation of the HYP fibre during processing of the composite, decreasing the melting temperature was attempted. The melting point of the high-temperature engineering polyamide was reduced by using an inorganic salt during melt compounding, in order to lower the melting temperature of polyamide 11 (PA11). Lithium chloride (LiCl) was added to the polyamide 11 during compounding using the Brabender mixer technique. Next, the PA11 and salt mixture was used as a matrix, and HYP fibre was incorporated into the matrix using a Brabender mixer for the compounding. The melting temperatures of the PA11/LiCl mixtures are shown in Table 2. Table 3 shows the heat deflection temperatures of the PA11 polymer-reinforced HYP fibre composites. 3% LiCl in PA11 was chosen in order to keep the concentration of LiCl low. From 3% to 5% LiCl in the PA11, the change in melting temperature is insignificant. The stability of the melting point at high LiCl concentration is due to the lowering of the crystallization temperature and the saturation of the degree of crystallinity of the molecular chains. The heat deflection temperature was investigated for only the 3% LiCl content. The addition of LiCl to PA11 decreases the crystallization temperature and the degree of crystallinity. Consequently, the heat deflection temperature of PA11 decreases.

Densities and Actual Fibre Contents The fibre contents of the composites were controlled by the feeding rates of the matrix and fibre. However, the feeding rate cannot be calibrated precisely, especially the feeding rate of the HYP fibre. Table 4 shows the densities and actual fibre contents of composites processed by the extrusion compounding method and the Brabender mixer technique. Comparing the densities of the bio-based polyamide 11 reinforced HYP composites made by the two procedures, we can see that the composites made by the Brabender mixer technique have a higher density than the composites made by the extrusion process at the same fibre content. The 30% HYP/PA11 made via the Brabender mixer has a higher density because its actual fibre content is 5% higher than that of the composite made by the extrusion process. The different densities show that the Brabender mixer technique gives samples with fewer voids than the extrusion process. The modified bio-based PA11 reinforced HYP fibre composites have higher densities than the regular bio-based PA11-reinforced HYP fibre composites at the same fibre content. To minimize fibre thermal degradation, the processing temperature was set just below the commercial melting temperature of the polyamide. Table 4 shows that the densities and actual fibre contents were proportional to the nominal fibre content for both processing methods. However, with the addition of LiCl in the Brabender mixer process, the differences became more pronounced.

Effect of Fibre Content on the Length and Shape Distribution of HYP-Reinforced Bio-Based Modified PA11 Composite During the extrusion process, the shear stress applied by the screw breaks the fibres. The resulting fibre lengths affect the ultimate mechanical properties. In spite of the influence of fibre damage and breakage during processing, the initial fibre length in the feedstock determined the final fibre lengths. It was therefore important to analyze the initial fibre-length distribution, which is one of the most significant parameters for natural fibre reinforced polymer composites.
After the polymer and fibre for the composite are decided on, fibre length is the adjustable feature used to manage the ultimate properties of bio-composite materials. Table 5 shows the HYP fibre length distributions determined using a Fibre Quality Analyzer (FQA). The mean fibre length decreased with increasing pulp fibre content in the composite. This decrease of fibre length with pulp fibre concentration in the polymer melt is due to fibre entanglement and agglomeration within the polymer. Table 6 shows that the HYP fibre length in the green composite produced using the Brabender mixer technique did not decrease very much compared with the HYP fibre length of the composite made using the conical twin-screw extruder method. For modified bio-based PA11-reinforced pulp fibre bio-composites processed using the Brabender mixer technique, the mean fibre length did not decrease a great deal. In the normal bio-based PA11-reinforced HYP fibre composites, the HYP fibre length is shorter than that of the bio-based modified polyamide reinforced with pulp fibre after extrusion, probably because the higher temperature caused more thermal degradation of the fibres, making them easier to break. In addition, the use of LiCl to decrease the melting temperature of the bio-based PA11 may also have protected the pulp fibre from degradation and entanglement during the slow and controlled process using the Brabender mixer technique, and consequently kept the pulp fibre length nearly constant after the compounding process.

Rheological Characteristics of HYP-Reinforced Bio-Based Polyamide As already noted, the rheological characteristics of the polymer, fibre, and interfacial phases influence the final characteristics of the resultant microstructure of the composite materials; these characteristics in turn affect the mechanical properties of a multiphase polymer-composite system. As obtained from experiment, the shear viscosity as a function of the steady-shear rate of HYP/PA11 at 200˚C is shown in Figure 2 (as noted, these results are the average of three measurements).

Effect of the Processing Parameters on the Rheological Properties The rheological testing results for the different processing techniques (the extrusion process for high-temperature processing, and the Brabender mixer method for low-temperature processing) are presented below in Figure 3. The goal of decreasing the process temperature was only realized for 30% HYP/PA11. The Brabender mixer approach was used for the low-temperature compounding, and the process temperature was below the melting point. The rheological properties of the high- and low-temperature compounding are presented in Figure 3. The low-temperature compounding has higher shear viscosity compared to the high-temperature process; this is because during the low-temperature process the polymer melting was generated by the high shear rate created during compounding, and also because the mixing process was incomplete.
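The shear-thinning trends in Figures 2 and 3 are conventionally summarized by a power-law fit, η = K γ̇^(n−1) with n < 1; on a log-log plot the flow curve is a straight line of slope n − 1. The sketch below fits invented data points with the same general shape as the curves described here; it is not the paper's measured data.

```python
import numpy as np

# Hypothetical capillary-rheometer flow curve over the 50-5000 1/s range
gamma_dot = np.array([50., 100., 500., 1000., 5000.])   # shear rate [1/s]
eta = np.array([850., 560., 190., 125., 42.])           # viscosity [Pa.s], assumed

# Fit ln(eta) = ln(K) + (n - 1) ln(gamma_dot)
slope, intercept = np.polyfit(np.log(gamma_dot), np.log(eta), 1)
n, K = slope + 1.0, np.exp(intercept)
print(f'power-law index n = {n:.2f}  (n < 1 -> shear thinning)')
print(f'consistency K     = {K:.0f} Pa.s^n')
```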
The processing conditions were 200˚C for the neat polyamide without LiCl, 186˚C for 1% LiCl, 182˚C for 2% LiCl, 175˚C for 3% LiCl, 172˚C for 4% LiCl, and 170˚C for 5% LiCl content. In this chapter, LiCl + bio-based PA11 is referred to as "LiCl + PA11". The shearing effects decreased as the salt concentration increased; that is, the modified polyamide showed lower shear viscosity as the LiCl content rose.

Effect of HYP Fibre Content on the Rheological Characteristics of Modified Bio-Based (PA11 + 3% LiCl) Composite
The effect of fibre content on the rheological characteristics of the HYP-fibre-reinforced modified bio-based polyamide composite was investigated. Figure 5 shows the experimental results for 10%, 20%, and 30% HYP-reinforced modified PA11 (PA11 + LiCl). These curves are typical of pseudoplastic materials, which show a decrease in viscosity with increasing shear rate. At high fibre content, the material exhibits higher shear viscosity even at high shear rates. In general, the incorporation of fibres in polymer systems increases the viscosity, which rises further as fibre content is increased. The difference between 10% and 20% fibre at intermediate and high shear rates is not very significant. At low HYP content, shear viscosity is expected to rise rapidly with increasing concentrations of fibre because of the rapidly increasing interactions between particles as they become more closely packed and entangled. Nevertheless, at very high pulp fibre concentration, random anisotropic structures of the fibres in the polymer melt were created, and they increased the shear viscosity further.

Conclusions
This study demonstrates that it is possible to process HYP fibre with an engineering-grade thermoplastic bio-based polyamide. For both processing methods and all formulations, fibres showed a length reduction after the compounding process. The observed fibre length reduction using the Brabender mixer technique was lower than that using the extrusion process. However, the highest fibre length reduction was observed for the composite with 30% pulp fibre. The low-temperature compounding of HYP/PA11 presents higher shear viscosity than the high-temperature compounding for the same rheological parameters; this is because the process temperature was low, so that mixing and melting were produced by the high shear rate created during the compounding process. Experimental results for HYP/PA11 processed by extrusion showed a steep decrease in shear viscosity with increasing shear rate at high temperature; this melt-flow characteristic corresponds to shear-thinning behaviour in HYP/PA11 and is also attributable in part to the high pulp moisture content, which tends to degrade polyamide 11. Results also showed strong shear-thinning behaviour in modified HYP/PA11, associated with a high degree of crystallinity and pseudoplasticity; this was due to the good dispersion of HYP in PA11 and the orientation of the flexible fibres in the flow direction of the molten PA11.
4,298
2017-03-21T00:00:00.000
[ "Materials Science" ]
Parametric Analysis of a Universal Isotherm Model to Tailor Characteristics of Solid Desiccants for Dehumidification
Cooling has a significant share in energy consumption, especially in hot tropical regions. The conventional mechanical vapor compression (MVC) cycle, widely used for air-conditioning needs, has high energy consumption, as air is cooled down to the dew point to remove the moisture. Decoupling the latent cooling load, through dehumidification, from the sensible cooling load can significantly improve the energy requirement for air-conditioning applications. Solid desiccants have shown safe and reliable operation compared with liquid desiccants, and several configurations of solid desiccant dehumidifiers have been studied to improve their performance. However, the characteristics of the solid desiccant are critical for the performance and overall operation of the dehumidifier. The properties of every desiccant depend upon the characteristics of its porous adsorbing surface; hence, each desiccant performs optimally only under certain humidity conditions. Therefore, for better dehumidification performance in a specific tropical region, the solid desiccant must be matched to the humidity range of that region. In this article, a theoretical methodology has been discussed to help the industry and chemists understand the porous structural properties of adsorbent surfaces needed to tune the material performance for a particular humidity value before material synthesis.

INTRODUCTION
Under climate change and increasingly intense weather, air conditioning is becoming inevitable for human comfort. Air-conditioning demand is very high in hot and humid tropical regions (Samuel et al., 2013;Fekadu and Subudhi, 2018;Burhan et al., 2021a). Such a humid tropical climate demands a higher latent than sensible cooling load. The mechanical vapor compression (MVC) cycle is widely and conventionally employed across the globe to fulfill such cooling needs. The MVC cycle has to cool the air down to its dew point, well below the comfort level, to remove the moisture. This leads to the waste of high-grade electrical energy (Barbosa et al., 2012;Park et al., 2015;Oh et al., 2019). On the other hand, the utilization of CFC refrigerants is harmful to the environment and human beings (Chen et al., 2021c;Rabah Touaibi and Hasan Koten, 2021). To minimize the high energy consumption of cooling, one possible solution is decoupling the latent and sensible cooling loads of air-conditioning systems. Desiccant dehumidifiers provide one of the solutions to decouple the latent load from the total cooling load (Oh et al., 2017;Chen et al., 2020a;El Loubani et al., 2021). Furthermore, solid desiccants have proven reliable and compact compared to liquid desiccants, which pose several health risks: they exhibit carryover and are toxic and corrosive (Liu et al., 2019;Gurubalan and Simonson, 2021). In the humid tropical climate, therefore, the air conditioner's performance relies on the performance of the desiccant dehumidification system. The dehumidification system efficiency depends upon two main factors: the properties and performance of the desiccant material, and the design of the desiccant system for better heat and mass transfer. Recently, many efforts have been made on the design of desiccant dehumidification systems, such as rotary wheels and coated heat exchangers (Zhou and Reece, 2019;Venegas et al., 2021).
However, the characteristics of the desiccant material are critical (Muthu et al., 2021): different desiccant characteristics are needed for different tropical locations, depending upon the humidity level, for optimum performance of the desiccant. Understanding the adsorption phenomena is very important for many industrial applications (Kawamoto et al., 2016;Burhan et al., 2019a;Chorowski et al., 2019;Chen et al., 2020b;Zakuciová et al., 2020;Chen et al., 2021a;Chen et al., 2021b;Ja et al., 2022), and each adsorbent-adsorbate pair has unique characteristics (Kresge et al., 1992;Zhao et al., 1998;Chen et al., 2020c;Chen et al., 2021c). Owing to these unique characteristics, six different adsorption isotherm types, classified by their shape, are defined by the International Union of Pure and Applied Chemistry (IUPAC) (Chakraborty and Sun, 2014;Ng et al., 2017). Each adsorbent-adsorbate pair has different uptake levels at different pressures, forming either single or double layers. This behavior depends upon the surface topography of the adsorbent and the energy-level heterogeneity of the available adsorption sites (Burhan et al., 2021b). In desiccant dehumidification, the space humidity needs to match the operating range of the adsorbent. Some adsorbents reach their saturation limit at a very low partial pressure or humidity level. Such desiccant materials are not suitable for dehumidification, since they will always be saturated under normal conditions, as the space humidity never drops to such a low level. On the other hand, adsorbents with saturation limits at higher humidity levels may or may not be suitable for desiccant dehumidification, depending on their characteristics or isotherms. The important factors are the partial pressures at which the uptake starts and at which it reaches its saturation limit, the slope of the rise in uptake, and the difference between these two points. Ideally, for dehumidification purposes, the starting point of the uptake should correspond to the humidity level of the space. The saturation point should also coincide with the starting point, making the difference and the slope theoretically zero and infinity, respectively. This is not possible in practice, but efforts are made during material synthesis to approach such performance. Chemists adopt many recipes to tailor the performance of the adsorbents during material synthesis (Burhan et al., 2019b). However, post-processing analysis after material synthesis defines whether the required characteristics have been achieved and how far from the ideal limit the material is. The objective of this manuscript is to demonstrate theoretically how the adsorbent surface topography should be altered during material synthesis to tailor the required response of the material, focusing especially on the desiccant dehumidification point of view. This manuscript will explain how the desiccant topography can be altered to enhance and tailor its response for the dehumidification application and how the adsorbent surface parameters must change to achieve such a response. This methodology provides information about the required change in the surface parameters before material synthesis to obtain the desired properties for the optimized dehumidification need.

METHODOLOGY
To understand the unique interaction of the adsorbent-adsorbate pair and the adsorption phenomena, a universal isotherm model was developed based upon classical adsorption theory.
The universal isotherm model follows all six isotherm types and describes the formation of these isotherm shapes. The main advantage of the universal isotherm model is that it clearly shows the characteristics of the adsorption surface for each isotherm type in the form of the distribution of adsorption energy sites and their availability. The adsorption surface is made of tiny pores or adsorption sites having different energy levels. When the adsorbate reaches a certain critical energy level, i.e., ε_c = −RT ln(Kp), it is adsorbed in the adsorption site with an energy level corresponding to the critical energy level of the adsorbate. When the partial pressure of the adsorbate changes, its critical energy level also changes. As a result, the availability of adsorption energy sites for adsorption also changes, and the quantity of such available sites determines the total uptake at that partial pressure, i.e.,

θ_t = ∫_{ε_c}^{∞} X(ε) dε. (1)

Figure 1 shows the characteristics of the adsorption isotherm and its association with the porous adsorbent surface and the energy distribution of these adsorption sites. Many factors affect the characteristics of the adsorption isotherm, like the rate of total adsorption uptake over a range of partial pressures, the starting point of the adsorption uptake, the saturation point of the total adsorption uptake, and the total uptake of the adsorbent. To understand the link between these factors and the adsorption isotherms, the universal isotherm model provides the energy distribution function (EDF) of the adsorption energy sites, given by Eq. (2). For the two isotherms, brown and blue, the corresponding energy distribution function (EDF) curves are shown in the same figure, at the bottom right corner of Figure 1. The color of the EDF curves corresponds to the same-color isotherms. The universal isotherm model successfully defines all isotherm types and explains the formation of their shapes. However, the primary importance of understanding the adsorption phenomena lies in material synthesis, so that the performance of the material can be tuned as per the application. Material scientists can modify a material's performance by changing its porous surface characteristics, i.e., the distribution of the adsorption energy sites and their availability. Such changes can be adopted during material synthesis through pore expansion and surface treatment techniques like heat treatment (Young, 1958;Baker and Sing, 1976;Naono et al., 1980;Shioji et al., 2001;Alrowais et al., 2020;Chen et al., 2021d), chemical action (Ishikawa et al., 1996;Burhan, 2015), and acidification (Toor and Jin, 2012). These changes are directly linked to the parameters of the EDF equation of the universal isotherm model. As described earlier, certain factors define the characteristics of the adsorption isotherm, and out of them, the total uptake of the adsorbent material is not linked with the EDF. The increase in the total uptake, as shown in Figure 2, depends on the increased capacity of the adsorbent material. This material capacity can be increased by pore expansion or increased pore volume or surface area. However, the rest of the characteristics, like the change in the rate of total uptake and the point of uptake, depend upon the distribution of the adsorption energy sites. The adsorption surface heterogeneity 'm', the energy level of the median adsorption site 'ε_o', and its fractional availability 'X(ε)' are the main parameters in the EDF equation defining the shape of the isotherm.
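These parameters can be explored numerically. The sketch below assumes a logistic (sigmoidal) form for the EDF, since the actual expression, Eq. (2), is not reproduced here, and evaluates Eq. (1) for a mixture of adsorption-site groups; all numerical values (K, ε_o, m, and the site-group shares) are hypothetical. It illustrates the trend developed in the following paragraphs: lowering the median energy ε_o shifts the uptake to higher pressure ratios, while lowering the heterogeneity m makes the rise steeper.

```python
import numpy as np

# Minimal sketch of Eq. (1): the total uptake is the integral of the
# energy distribution function (EDF) above the adsorbate's critical
# energy eps_c = -R*T*ln(K*p). A logistic distribution is ASSUMED for
# the EDF purely for illustration; all parameters are hypothetical.
R, T, K = 8.314, 298.0, 1.0   # J/(mol K), K, assumed constant

def edf(eps, eps_o, m):
    """Assumed logistic EDF centred at eps_o; width set by heterogeneity m."""
    w = m * R * T
    z = np.exp(-(eps - eps_o) / w)
    return z / (w * (1.0 + z) ** 2)

def uptake(p_ratio, groups, n=4000):
    """Eq. (1) for a mixture of site groups given as (share, eps_o, m)."""
    eps_c = -R * T * np.log(K * p_ratio)
    hi = max(eo + 40.0 * m * R * T for _, eo, m in groups)
    eps = np.linspace(eps_c, max(hi, eps_c + 1.0), n)
    x = sum(share * edf(eps, eo, m) for share, eo, m in groups)
    return float(np.sum(x) * (eps[1] - eps[0]))

# A broad, high-energy surface versus a "pore-expanded" one whose
# dominant site group has lower median energy eps_o and lower
# heterogeneity m: the second isotherm rises later but more steeply.
before = [(0.5, 12e3, 3.0), (0.5, 6e3, 3.0)]
after = [(0.9, 4e3, 1.0), (0.1, 12e3, 3.0)]
for pr in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p/p_s={pr:.1f}  before={uptake(pr, before):.3f}  "
          f"after={uptake(pr, after):.3f}")
```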
By controlling these parameters through pore expansion, the isotherms of the adsorbent can be tuned as per the requirements of the application. As the pore size and distribution are associated with the energy of the adsorption sites, by adopting the pore expansion techniques of heat treatment (Young, 1958), chemical action (Ishikawa et al., 1996;Burhan, 2015), and acidification (Toor and Jin, 2012), the pore distribution and, as a result, the energy distribution of the material can be tailored for the required isotherms. Figure 1 demonstrates how a change in the isotherm is associated with the EDF curve and how the porous structure of the adsorbent surface changes. The blue isotherm line represents a typical isotherm curve, and the corresponding EDF curve is also shown in Figure 1. The spread or width of the EDF curve is based upon the value of the surface heterogeneity 'm'. The curve's middle point defines the median energy level 'ε_o', and the curve's height at the median energy level defines its fractional availability 'X(ε)' against the other available adsorption energy sites. The porous surface of the adsorbent is represented by the illustration at the bottom right corner of Figure 1. With pore expansion during material synthesis, however, the porous structure of the adsorbent surface turns into the illustration at the top right corner of Figure 1. It can be seen that, with the pore expansion, the smaller pores with energy levels ε_1, ε_2, and ε_3 are expanded into the bigger pores of energy level ε_4. As a result, the availability of adsorption sites with energy level ε_4 is higher, which can be seen from the brown EDF curve: the median energy level is shifted toward the left, and its fractional availability 'X(ε)' is high compared to the blue EDF curve. On the other hand, the brown EDF curve's heterogeneity 'm', or width, is reduced. This is because most of the small pores are now expanded into bigger pores and no longer exist. This is why the EDF is taller and the slope of the brown isotherm increases, resulting in a higher rate of adsorption uptake: once the critical energy level reaches the energy level of the pores with the higher fractional availability, there is a sudden increase in uptake, in contrast to the blue isotherm, in which the uptake is gradual, following the gradual availability of the adsorption sites due to high heterogeneity. The uptake point of the brown isotherm curve is also shifted toward the higher partial pressure side due to the shift in the median energy level toward the left, or lower-energy, side.

RESULTS AND DISCUSSION
As per the explanation provided in the methodology, in this section the performance of the adsorbents is tuned from the dehumidification point of view. It is analyzed how the surface characteristics of the adsorbent material are modified to achieve the required isotherm of the material for better performance. From a dehumidification point of view, the best materials are those with S-shaped isotherms. Therefore, only materials with S-shaped isotherms will be analyzed. Figure 2 shows the isotherm of MOF 801 (Furukawa et al., 2014) as the green line. However, its uptake starting point is at a very low concentration ratio, i.e., humidity level.
This adsorbent is unsuitable for dehumidification in most humidity regions because it requires very dry conditions to regenerate. To make it suitable for dehumidification in areas with different humidity levels, the uptake starting point must be shifted toward higher concentration ratios, as depicted by the black, red, purple, and brown lines in Figure 2. Their corresponding EDF curves are shown in Figure 3. From the EDF curves, it can be seen that, as the uptake point of the adsorption isotherm shifts toward a higher pressure ratio, the median energy level 'ε_o' of the EDF curve shifts toward the left, i.e., toward lower energy levels. This indicates that a higher concentration/pressure ratio must be reached before pores with very low energy levels are filled. It can also be seen that the width of the EDF, i.e., the heterogeneity 'm' of the porous adsorbent surface, decreases, and the fractional availability 'X(ε)' of the median energy site 'ε_o' increases. This is why the EDF curves and the corresponding isotherm curves become steeper. Therefore, the uptake starting point of the isotherm can be shifted to a higher pressure ratio, or humidity level, if the median energy level of the adsorption energy sites is shifted toward a lower value. The shift in the energy level must be chosen according to the humidity at which the adsorbent will be employed. Moreover, in these isotherms, the main focus is to shift the pressure ratio of the isotherms only. This is why the pore expansion only shifts the median energy level. The total uptake has been kept the same, because increasing the total uptake with pore expansion also requires structural stability and additional parameters and testing to make it feasible for industrial application. But here the pore expansion also decreases the surface heterogeneity of the adsorbent and increases the fractional availability of the energy sites. Therefore, the slope, or the rate of total uptake, grows toward a high pressure ratio. A universal isotherm model with two terms was used to fit all of the isotherms shown in Figure 2, and the obtained values of the surface heterogeneity are shown in Figure 4. As the model is based on two terms, two heterogeneity values are shown, because the material has two groups of adsorption sites; the share of each group of adsorption sites in these isotherms and the EDF curves is shown in Figure 5. For the first isotherm, the percentage of each group of adsorption sites is almost the same. This is why, in Figure 3, we can see two green EDF curves; their heterogeneity values are shown in Figure 4, where the red line is associated with the EDF curves with the highest 'X(ε)' values, except for the first isotherm. The heterogeneity of the first isotherm is lower for the first group of adsorption sites, causing a sharp rise, or higher slope, at the initial values of the pressure. However, at higher pressure ratios, the effect of the second group of adsorption sites comes into play: the heterogeneity increases, causing a gradual increase in the total uptake and a decrease in the slope. For the other four isotherms, only one group of adsorption sites has a dominant effect, as can be seen from the gap between the two curves in Figure 5. This is why only one EDF curve is visible in Figure 3 for these results, for which the heterogeneity value is very low and almost constant (the red curve in Figure 4), causing a steep rise in the total uptake.
The blue line shows high heterogeneity, representing the part of the isotherm before the sudden rise in the total uptake. In the previous isotherm results, the heterogeneity of the porous surface also changed as the uptake point shifted. Figure 6 shows the MOF 841 isotherm (green line) (Furukawa et al., 2014), an ideal S-shape for the dehumidification application but operating in the low-pressure-ratio range, i.e., at low humidity. Therefore, the need is to shift the uptake point to a high pressure ratio without affecting the original high rate of uptake, i.e., the heterogeneity of the porous surface. The other two isotherms (black and red lines), with uptake at high-pressure-ratio points, are shown in Figure 6, and the corresponding EDF curves are demonstrated in Figure 7. From the EDF curves in Figure 7, it can be seen that the width or heterogeneity 'm' of all of the EDF curves is the same, along with the fractional availability 'X(ε)' of the median energy level. This is why all of the isotherms depict a similar rate of total uptake, or slope of the isotherm. As the median energy level 'ε_o' shifts toward a lower energy level, the pressure ratio for adsorption uptake shifts to a higher value. Similarly to the previous results, the two terms of the universal isotherm model were used to fit these three isotherms, and the obtained values of their heterogeneity 'm' and the probability share of each adsorption site group are shown in Figures 8 and 9. From Figure 8, it can be seen that the heterogeneity value is the same for all three isotherms. In addition, its value is very low for the red line, as it represents the group of adsorption sites with a share of more than 85%. Although the other groups have higher and mutually similar heterogeneity, due to their low share of less than 15% they do not contribute significantly to the EDF curve; this is why such a large deviation of results between groups A and B can be seen in Figures 8 and 9. These results show that, by changing the structural parameters, one can achieve the desired structural characteristics of a porous adsorbent surface. As a result, the adsorbent's performance can be tuned for the optimized application. This tool and methodology have significance in the dehumidification industry, as humid conditions vary from region to region. Although examples of MOF materials are considered and analyzed in this manuscript, the methodology applies to all adsorbents with physical adsorption. A case of silica is already discussed in our previous publication (Burhan et al., 2019b), which further validates the methodology and the results. Therefore, with the help of this tool, the industry can understand the porous structural properties of the adsorbent surface needed to tune the material performance for a particular humidity value. Thus, by following the results of this methodology, the material can be synthesized and optimized for a specific humidity, with significant energy savings.

CONCLUSION
A theoretical methodology has been presented to understand the adsorbent performance by modifying the surface parameters. The proposed methodology can help the industry and chemists to tune and optimize the material performance for the dehumidification of air, especially in the desired humidity range. The required porous structure, in terms of the surface heterogeneity, the distribution of adsorption energy sites, and the fractional availability of each adsorption site, can be predicted.
For dehumidification, the desiccant should have a lower heterogeneity level and a higher fractional availability, i.e., >85%, for a higher rate of adsorption uptake. At the concentration ratio where a high adsorption uptake is required, the adsorption energy site ε_o equivalent to the critical energy level ε_c of that pressure-ratio point must have a high fractional availability X(ε), i.e., >85%. Thus, by knowing the required distribution of adsorption energy sites, the adsorbent can be synthesized accordingly to have the tuned isotherm for optimized performance.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

AUTHOR CONTRIBUTIONS
MB wrote the manuscript. MB, QC, MKJ, and KN discussed and analyzed the results and reviewed the manuscript.
4,844
2022-06-28T00:00:00.000
[ "Engineering" ]
Dark energy perturbations and cosmic coincidence
While there is plentiful evidence on all fronts of experimental cosmology for the existence of a non-vanishing dark energy (DE) density \rho_D in the Universe, we are still far away from having a fundamental understanding of its ultimate nature and of its current value, not even of the puzzling fact that \rho_D is so close to the matter energy density \rho_M at the present time (i.e. the so-called "cosmic coincidence" problem). The resolution of some of these cosmic conundrums suggests that the DE must have some (mild) dynamical behavior at the present time. In this paper, we examine some general properties of the simultaneous set of matter and DE perturbations (\delta\rho_M, \delta\rho_D) for a multicomponent DE fluid. Next we put these properties to the test within the context of a non-trivial model of dynamical DE (the LXCDM model) which has been previously studied in the literature. By requiring that the coupled system of perturbation equations for \delta\rho_M and \delta\rho_D has a smooth solution throughout the entire cosmological evolution, that the matter power spectrum is consistent with the data on structure formation, and that the "coincidence ratio" r=\rho_D/\rho_M stays bounded and not unnaturally high, we are able to determine a well-defined region of the parameter space where the model can solve the cosmic coincidence problem in full compatibility with all known cosmological data.

I. INTRODUCTION
Undoubtedly the most prominent accomplishment of modern cosmology to date has been to provide strong indirect support for the existence of both dark matter (DM) and dark energy (DE) from independent data sets derived from the observation of distant supernovae [1], the anisotropies of the CMB [2], the lensing effects on the propagation of light through weak gravitational fields [3], and the inventory of cosmic matter from the large scale structures (LSS) of the Universe [4,5]. But, in spite of these outstanding achievements, modern cosmology still fails to understand the ultimate physical nature of the components that build up the mysterious dark side of the Universe, most conspicuously the DE component, of which the first significant experimental evidence was reported 10 years ago from supernovae observations. The current estimates of the DE energy density yield ρ_D^exp ≃ (2.4 × 10^−3 eV)^4, and it is believed that it constitutes roughly 70% of the total energy density budget for an essentially flat Universe. The big question now is: what is it from the point of view of fundamental physics? One possibility is that it is the ground state energy density associated to the quantum field theory (QFT) vacuum and, in this case, it is traditional to associate ρ_D with ρ_Λ = Λ/(8πG), where Λ is the cosmological constant (CC) term in Einstein's equations. The problem, however, is that the typical value of the (renormalized) vacuum energy in all known realistic QFT's is much bigger than the experimental value. For example, the energy density associated to the Higgs potential of the Standard Model (SM) of electroweak interactions is more than fifty orders of magnitude larger than the measured value of ρ_D. Another generic proposal (with many ramifications) is the possibility that the DE stands for the current value of the energy density of some slowly evolving, homogeneous and isotropic scalar field (or collection of them).
Scalar fields appeared first as dynamical adjustment mechanisms for the CC [6,7] and later gave rise to the notion of quintessence [8]. While this idea has its own merits (especially concerning the dynamical character it confers on the DE), it also has its own drawbacks. The most obvious one (often completely ignored) is that the vacuum energy of the SM is still there and, therefore, the quintessence field just adds more trouble to the whole fine-tuning CC problem [9,10]! Next-to-leading is the "cosmological coincidence problem", or the problem of understanding why the presently measured value of the DE is so close to the matter density. One expects that this problem can be alleviated by assuming that ρ_D is actually a dynamical quantity. While quintessence is the traditionally explored option, in this paper we entertain the possibility that such dynamics could be the result of the so-called cosmological "constants" (like Λ, G, ...) being actually variable. It has been proven in [11] that this possibility can perfectly mimic quintessence. It means that we stay with the Λ parameter and make it "running", for example through quantum effects [12,13,14]. However, in [17] it was shown that, in order to have an impact on the coincidence problem, the total DE in this context should be conceived as a composite fluid made out of a running Λ and another entity X, with some effective equation of state (EOS) parameter ω_X, such that the total DE density and pressure read ρ_D = ρ_Λ + ρ_X and p_D = −ρ_Λ + ω_X ρ_X, respectively. We call this system the ΛXCDM model [17]. Let us emphasize that X (called "the cosmon") is not necessarily a fundamental entity; in particular, it need not be an elementary scalar field. As remarked in [17], X could represent the effective behavior of higher order terms in the effective action (including non-local ones). This is conceivable, since the Bianchi identity enforces a relation between all dynamical components that enter the effective structure of the energy-momentum tensor in Einstein's equations, in particular between the evolving Λ and other terms that could emerge after we embed General Relativity in a more general framework [18,19]. Therefore, at this level, we do not impose a microscopic description for X, and in this way the treatment becomes more general. The only condition defining X is the DE conservation law, namely we assume that ρ_D = ρ_Λ + ρ_X is the covariantly self-conserved total DE density. In this paper, we analyze the combined dynamics of DE and matter density perturbations for such conserved DE density ρ_D. The present study goes beyond the approximate treatment presented in [23], where we neglected the DE perturbations and estimated the matter perturbations of the ΛXCDM model using an effective (variable) EOS w_e for the composite fluid (ρ_Λ, ρ_X). The main result was that a sizeable portion of parameter space was still compatible with a possible solution of the cosmic coincidence problem. The "effective approach" that we employed in [23] was based on three essential ingredients: i) the use of the effective EOS representation of cosmologies with variable cosmological parameters [11]; ii) the calculation of the growth of matter density fluctuations using the effective EOS of the DE [24]; and iii) the application of the so-called "F-test" to compare the model with the LSS data, i.e.
the condition that the linear bias parameter, b^2(z) = P_GG/P_MM, does not deviate from the ΛCDM value by more than 10% at z = 0, where P_MM ∝ (δρ_M/ρ_M)^2 is the matter power spectrum and P_GG is the galaxy fluctuation power spectrum [4,5] (see [23,25] for details). This three-step methodology turned out to be an efficient, streamlined strategy to further constrain the region of the original parameter space [17]. However, it remained to perform a full-fledged analysis of the system of cosmological perturbations in which the DE and matter fluctuations are coupled in a dynamical way. This kind of analysis is presented here. The structure of the paper is as follows. In the next section, we outline the meaning of the cosmic coincidence problem within the general setting of the cosmological constant problem. In section III, the basic equations for cosmological perturbations of a multicomponent fluid in the linear regime are introduced. In section IV, we describe the general framework for addressing cosmological perturbations of a composite DE fluid with an effective equation of state (EOS). In section V, we describe some generic features of the cosmological perturbations for the dark energy component. The particular setup of the ΛXCDM model is the focus of section VI. In sections VII and VIII, we put the ΛXCDM model to the stringent test of cosmological perturbations and show that the corresponding region of parameter space becomes further reduced. Most importantly, in this region the model is compatible with all known observational data and, therefore, the ΛXCDM proposal can finally be presented as a robust candidate model for solving the cosmic coincidence problem. In section IX, we offer a deeper insight into the correlation of matter and DE perturbations. In the last section, we present the final discussion and deliver our conclusions.

II. THE COINCIDENCE PROBLEM AS A PART OF THE BIG CC PROBLEM
The cosmic coincidence problem is a riddle, wrapped in the polyhedric mystery of the Cosmological Constant Problem [9,10], which has many faces. Indeed, we should clearly distinguish between the two main aspects which are hidden in the cosmological constant (CC) problem. In the first place, we have the "old CC problem" (the ugliest face of the CC conundrum!), i.e. the formidable task of trying to explain the relatively small (by Particle Physics standards) measured value of ρ_Λ or, more generally, of the DE density [1], roughly ρ_D^exp ∼ 10^−47 GeV^4, after the many phase transitions that our Universe has undergone since the very early times, in particular the electroweak Higgs phase transition associated to the Standard Model of Particle Physics, whose natural value is in the ballpark of ρ_EW ∼ G_F^−2 ∼ 10^9 GeV^4 (G_F being Fermi's constant). The discrepancy ρ_EW/ρ_D^exp, which amounts to some 56 orders of magnitude, is the biggest enigma of fundamental physics ever! Apart from the induced CC contribution from phase transitions, we have the pure vacuum-to-vacuum quantum effects. Since the (renormalized) zero point energy of a free particle of mass m contributes ∼ m^4 to the vacuum energy density [12,13], it turns out that even a free electron contributes an amount more than thirty orders of magnitude larger than the aforementioned experimental value of ρ_D. Only light neutrinos, m_ν ∼ 10^−3 eV, or scalar particles of similar mass, could contribute just the right amount, namely if these particles would be the sole active degrees of freedom in our present cold Universe (see [12]).
On the other hand, the cosmic coincidence problem [26] is that second ("minor") aspect of the CC problem addressing the specific question: "why just now?", i.e. why do we find ourselves in an epoch t = t_0 where the DE density is similar to the matter density, ρ_D(t_0) ≃ ρ_M(t_0)? In view of the rapidly decreasing value of ρ_M(a) ∼ 1/a^3, where a = a(t) is the scale factor, it is quite puzzling to observe that its current value is precisely of the same order of magnitude as the vacuum energy or, in general, the dark energy density ρ_D. It is convenient to define the "cosmic coincidence ratio"

r(a) = ρ_D(a)/ρ_M(a) = Ω_D(a)/Ω_M(a), (1)

where (Ω_D(a), Ω_M(a)) are the corresponding densities normalized with respect to the current critical density ρ_c^0 ≡ 3H_0^2/(8πG). For Ω_M^0 ≃ 0.3 and Ω_D^0 ≃ 0.7, we have r_0 ≃ 2.3, which is of O(1). However, in the standard cosmological ΛCDM model, where Ω_D is constant and Ω_M(a → ∞) → 0, the ratio r grows unboundedly with the expansion of the Universe. So the fact that r_0 = O(1) is regarded as a puzzle because it suggests that t = t_0 is a very special epoch of our Universe. One could also consider the inverse ratio r^−1 = ρ_M(a)/ρ_D(a), which goes to zero with the expansion. The coincidence problem can be equivalently formulated either by asking why r is not very large now or why r^−1 is not very small. Solving the coincidence problem would be to find either 1) a concrete explanation for r and r^−1 being of order one at present within the standard cosmological model, or 2) a modified cosmological model (compatible with all known cosmological data) insuring that these ratios do not undergo a substantial change, say by more than one order of magnitude or so, for a very long period of the cosmic history that includes our time. In a very simplified way, let us summarize some of the possible avenues that have been entertained to cope with the coincidence puzzle:

• Quintessence and the like [8,27,28,29,30]. One postulates the existence of a set of cosmological scalar fields φ_i essentially unrelated to the rest of the particle physics world. The DE produced by these fields has an effective EOS parameter ω_D > −1, which causes ρ_D to decrease always with the expansion (i.e. dρ_D/da < 0), but at a pace slower (on average) than that of the background matter. Thus, it finally catches up with it and ρ_D emerges to surface, i.e. the condition ρ_D > ρ_M eventually holds (presumably near our time). In this framework, there is the possibility of self-adjusting and tracker solutions [8,27], where the DE keeps track of the matter behavior and ultimately dominates the Universe. It requires taking some special forms of the potential, and in some cases the Lagrangian involves non-canonical kinetic terms. For example, in the simple case of a single scalar field and the exponential potential V(φ) ∼ exp(−λφ/M_P), one finds that the coincidence ratio becomes fixed at the value r = 3(1 + ω_m)/(λ^2 − 3(1 + ω_m)), where ω_m is the EOS of the background matter (i.e. 0 or 1/3, depending on whether cold or relativistic matter dominates, respectively). So, at the present time, r = 3/(λ^2 − 3), and by appropriate choice of λ one can match the current experimental value. But of course the choice of the potential was rather peculiar, and the field φ itself is completely ad hoc. Moreover, it has a mass m_φ = √(V″(φ)) ∼ H ∼ 10^−33 eV (as it follows from a self-consistent solution of Einstein's equations); such a mass scale is 30 orders of magnitude below the mass scale associated to the DE, which is ρ_D^{1/4} ∼ 10^−3 eV.
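As a quick numerical illustration of the tracker ratio just quoted (using the reconstructed expression r = 3(1 + ω_m)/(λ² − 3(1 + ω_m)), which reduces to r = 3/(λ² − 3) for cold matter), the snippet below inverts the relation to find the λ that would reproduce r_0 ≃ 2.3; this is a sketch, not a computation from the paper.

```python
import math

# Tracker ratio for the exponential potential, as quoted in the text:
# r = 3(1 + w_m) / (lambda^2 - 3(1 + w_m)); for cold matter (w_m = 0)
# this reduces to r = 3/(lambda^2 - 3).
def tracker_ratio(lam, w_m=0.0):
    return 3.0 * (1.0 + w_m) / (lam**2 - 3.0 * (1.0 + w_m))

# Invert r = 3/(lam^2 - 3) for the present-day value r_0 = 2.3:
lam = math.sqrt(3.0 + 3.0 / 2.3)
print(f"lambda = {lam:.3f}, r = {tracker_ratio(lam):.2f}")  # r = 2.30
```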
In this sense, it looks a bit unnatural to aim at solving the CC problem by introducing a field whose extremely tiny mass creates another cosmological puzzle. On the other hand, within the context of interactive quintessence models [28] (whose main leitmotif is precisely trying to cure the coincidence problem), the coupling of φ_i and the matter components makes allowance for energy exchange between the two kinds of fields, and as a result the ratio (1) can be constant or slowly variable, whereas in other implementations one can achieve an oscillatory tracking behavior of r, although the construction is essentially ad hoc [29]. Another generalization leads to k-essence models [30] (characterized by non-canonical kinetic terms), where fine-tuning problems in the tracking can be disposed of, but the dominant background component can be tracked only up to matter-radiation equality and is lost immediately afterwards (as the DE is immediately prompted into a CC-like behavior). In one way or another, however, all variants of quintessence suffer from several drawbacks, and in particular from the following generic one: they assume (somehow implicitly) that the remaining fields of the particle physics spectrum (i.e. those which were already there from the very beginning) have nothing to do with the CC problem. As a result of such a bold assumption, the (likely real) vacuum problem of the conventional fields in QFT is merely traded for the (likely fictitious) vacuum problem of quintessence, which is no less acute because no real explanation is provided for the smallness of the current ρ_D value versus m^4 (where m is any typical mass scale in Particle Physics). Hence we are back to the same kind of CC problem we started with.

• Phantom energy [31]. It is motivated by the fact that, observationally speaking, the effective EOS of the DE cannot be excluded to satisfy ω_D < −1 near our present time. As indicated above, many quintessence-like models are in reality hybrid constructions containing a mixture of fields with a phantom component. The reason is that one wants to give allowance for a "CC-crossing" ω_D = −1 near our time. While phantom energy shares with quintessence the use of scalar fields φ_i, here the DE produced by these fields is always increasing with the expansion, dρ_D/da > 0, even after the relation ρ_D > ρ_M is fulfilled. The consequence of this ever-growing behavior of the dark component is that one ends up with a superaccelerated expansion of the Universe that triggers an eventual disruption of all forms of matter (the so-called "Big Rip"). When computing the fraction of the lifetime of the Universe where the ratio (1) stays within given bounds before the "doomsday", one finds that it can be sizeable.

• Non-local theories. There is some renewed interest in this kind of theories, in which the emphasis is placed on the existence of possible non-local structures in the effective action [32]. It has recently been emphasized in [16] that the dynamical evolution of the vacuum energy should come from a resummation of terms in the effective action leading to non-local contributions of the form R F(G_0 R), for some unknown function F of dimension 2, where G_0 is the massless Green's function (G_0 ∼ 1/□). The canonical possibility would be F = M^2 G_0 R, where M is a parameter with dimension of mass.
This situation leads to an effective evolution of the CC of the form ∆ρ_Λ ∼ M^2 H^2 during the matter dominated epoch, whereas in the radiation era the effective CC would approximately be zero (because R ∼ T^µ_µ ≃ 0 for relativistic matter, see (12)-(13) below). As a result, the coincidence puzzle could somehow be understood from the fact that the CC may start to be preponderant at some point once the onset of the matter dominated epoch is left behind.

• Of course many other ideas have been explored. For instance, one may introduce special fluids with very peculiar EOS, such as the Chaplygin gas [33], which behaves as pressureless matter at early times (ω_D ≃ 0) and as vacuum energy at present (ω_D ≃ −1). Although there is some connection with braneworld cosmology, this proposal suffers from the same problem as quintessence, in that it supersedes the vacuum state of traditional fields by the new vacuum of that peculiar fluid. Finally, let us mention the Anthropic models, which fall in a quite different category, in the sense that one does not look for a solution of the coincidence problem exclusively from first principles of QFT or string theory, but rather through the interplay of the "human factor". Basically, one ties the value of the ratio (1) to the time when the conditions arise for the development of intelligent life in the Universe, in particular of cosmologists making observations of the cosmos. This variant has also a long story, but we shall refrain from entering the details, see e.g. [9,34].

III. COSMOLOGICAL PERTURBATIONS FOR A MULTICOMPONENT FLUID IN THE LINEAR REGIME
In the remainder of the paper we concentrate on studying some general properties of the cosmological perturbations, both of matter and DE, and the implications they may have on the coincidence problem within models characterized by having running vacuum energy and other DE components. According to cosmological perturbation theory, all energy density components, including the dark energy, should fluctuate and contribute to the growth of the large-scale cosmological structures. In this section, we discuss the general framework of linear density perturbations in models composed of a multicomponent DE fluid besides the canonical matter. In the following we use the standard metric perturbation approach [35] and consider simultaneous density and pressure perturbations for all the components (N = 1, 2, ...) of the fluid, including matter and all possible contributions from the multicomponent DE part. At the same time, we have metric and velocity perturbations for each component: g_µν = ḡ_µν + h_µν, where ḡ_µν is the background metric. The 4-vector velocity U^µ_N in the comoving coordinates has the following components and perturbations: U^0_N = 1, U^i_N = 0, δU^0_N = 0, δU^i_N = v^i_N, where v^i_N is the three-velocity of the Nth component of the fluid in the chosen coordinate system. As a background space-time, we assume the homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker (FLRW) metric with flat space section, hence ds^2 = dt^2 − a^2(t) δ_ij dx^i dx^j, where a is the scale factor. In order to derive the set of perturbed equations, let us first introduce Einstein's equations,

R_µν − (1/2) g_µν R = 8πG T_µν, (5)

where G is the Newton constant and T_µν is the total energy-momentum tensor of matter and dark energy. Both the background and perturbed metric are assumed to satisfy these equations. The total energy-momentum tensor of the system is assumed to be the sum of the perfect fluid form for each component:

T_µν = Σ_N [(ρ_N + p_N) U^N_µ U^N_ν − p_N g_µν]. (6)

The components of T^µ_ν are then T^µ_ν = diag(ρ_T, −p_T, −p_T, −p_T), where ρ_T and p_T are the total energy density and pressure, respectively.
Perturbations on the metric and on the energy-momentum tensor are uniquely defined for a given perturbed space-time, provided we make a gauge choice. The latter means that we choose a specific coordinate system; in this way, four out of the 10 components δg_µν ≡ h_µν of the metric perturbation can be fixed at will. Here we have adopted the synchronous gauge, widely used in the literature, in which the four preassigned values of the metric perturbations are h_0i = 0 and h_00 = 0. Setting h_µν = −a^2 χ_µν, in this gauge the perturbed, spatially flat, FLRW metric takes on the form

ds^2 = dt^2 − a^2(t) (δ_ij + χ_ij) dx^i dx^j = a^2(η) [dη^2 − (δ_ij + χ_ij) dx^i dx^j], (9)

where in the last equality we have expressed the result also in terms of the conformal time η, defined through dη = dt/a. We may compare (9) with the most general perturbation of the spatially flat FLRW metric, consisting of the 10 degrees of freedom associated to the two scalar functions ψ, φ, the three components of the vector function ω_i (i = 1, 2, 3), and the five components of the traceless χ_ij. Clearly, the synchronous gauge (9) is obtained by setting ψ = 0, ω_i = 0 and absorbing the function φ into the trace of χ_ij. In this way, χ_ij in (9) contributes six degrees of freedom. As we will see in a moment, in practice only the non-vanishing trace of the metric disturbance will be necessary to perform the analysis of cosmic perturbations in this gauge. To within first order of perturbation theory, such a trace is given by

h ≡ g^{ij} h_{ij} = χ_ii, (11)

where g^{ij} is the inverse of g_ij = −a^2(t) (δ_ij + χ_ij), and it is understood that repeated Latin indices are summed over 1, 2, 3. For the physical interpretation, notice that the synchronous gauge is associated to a coordinate system in which the cosmic time coordinate is comoving with the fluid particles (g_00 = 1, i.e. ψ = 0), which is the reason for its name and also explains why this gauge does not have an obvious Newtonian limit. In fact, this gauge choice is generally appropriate for the study of fluctuations whose wavelength is larger than the Hubble radius (λ ≫ d_H ≡ H^−1). Actually, any mode satisfies this condition at sufficiently early epochs, and in this regime the effects of the space-time curvature are unavoidable. Next we wish to compute the perturbations in the synchronous gauge. To start with, it is convenient to rewrite Einstein's equations (5) as follows,

R_µν = 8πG (T_µν − (1/2) g_µν T) ≡ 8πG S_µν, (12)

where T = T^µ_µ is the trace of (6), hence

T = ρ_T − 3p_T. (13)

For the calculation of the perturbations, we can use any of the components of Einstein's equations. However, since we are going to use the conservation law ∇_µ T^µ_ν = 0 to derive additional fluctuation equations, it is convenient to perturb the (00)-component of (12), because the other components are well known not to be independent of the conservation law. Thus, using (6), (12) and (13) we obtain

δS_00 = (1/2) (δρ_T + 3δp_T). (14)

On the other hand, a straightforward calculation shows that the perturbation of the (00)-component of the Ricci tensor can be written as in Eq. (15), where we have used (11) and defined the "hat variable" of Eq. (16). The overhead circle (°) indicates partial differentiation with respect to the cosmic time (i.e. f° ≡ ∂f/∂t, for any f), in order to distinguish it from other differentiations to be used later on. Therefore, H = (∂a/∂t)/a is the ordinary expansion rate in the cosmic time t. Since the fluctuations δS_00 and δR_00 from (14) and (15) are constrained by (12), we obtain the corresponding constraint equation. If we substitute (16) in that expression, a second-order differential equation in the original variable (11) ensues.
In terms of the conformal time η, it can be written as in Eq. (17), where the dot (˙) indicates differentiation with respect to η (i.e. ḟ ≡ df/dη) and ℋ ≡ ȧ/a = a H is the expansion rate in the conformal time. The Friedmann equation can be written in terms of the normalized densities as

H^2(a) = H_0^2 Σ_N Ω_N(a), (19)

where H_0 is the present value of the Hubble parameter and Ω_N ≡ ρ_N/ρ_c^0 are the normalized densities with respect to the current critical density ρ_c^0 ≡ 3H_0^2/(8πG). The subsequent step is to perform perturbations on the conservation law ∇_µ T^µν = 0, as this will provide the additional independent equations. Using (6), the previous law reads explicitly as Eq. (20). For any four-velocity vector, we have U^µ_N U^N_µ = 1 and, therefore, we have the orthogonality relation U^N_ν ∇_µ U^ν_N = 0. In this way, by contracting Eq. (20) with U^N_ν we find the simpler result, Eq. (21). Let us emphasize that the sum over N in this equation need not run necessarily over all the terms of the cosmic fluid. It may hold for particular subsets of fluid components that are overall self-conserved. In particular, it could even hold for each component, if they were individually conserved. In our case, it applies to the specific matter component and also, collectively, to the multicomponent DE part. It is straightforward to check that, in the FLRW metric, we have

∇_µ U^µ_N = 3H. (22)

Using this relation, it is immediate to see that, in the comoving frame, Eq. (21) boils down to Eq. (23). Moreover, perturbing (22) in the synchronous gauge, we find (using δΓ^µ_{µ0} = −ĥ/2 for the perturbed Christoffel symbol involved in the covariant derivative) the useful result

δ(∇_µ U^µ_N) = θ_N − ĥ/2, (24)

where we have introduced the notation θ_N ≡ ∇_µ(δU^µ_N) = ∇_i(δU^i_N) (with δU^0_N = 0) for the covariant derivative of the perturbed three-velocity δU^i_N = v^i_N. Equipped with these formulas, the perturbed Eq. (23) immediately leads to Eq. (25). The previous result could have equivalently been obtained by setting ν = 0 in (20) and perturbing the corresponding equation. An independent relation can be obtained by setting ν = i in (20) within the comoving frame and carrying out the perturbation. Using the relevant Christoffel symbol Γ^i_{0j} = H δ_ij and Eq. (22) we obtain, after some calculations, Eq. (26). The final step is obtained by computing the divergence ∇_i on both sides of this equation. To within first order of perturbation theory, we obtain Eq. (27), where it is furthermore understood that we have used the Fourier decomposition

δf(t, x) = ∫ d^3k δf(t, k) e^{ik·x}

for all the perturbation variables δf = (ĥ, δρ_N, δp_N, θ_N). In Fourier space, the perturbation variables are denoted with the same notation, but they are the Fourier transforms of the original ones, so their arguments are t and k, because the space variable x has been traded for the wave number k ≡ |k|. The latter will be measured in units of h Mpc^−1, where h ≃ 0.7 is the reduced Hubble constant (not to be confused with the trace of the synchronous perturbed metric, Eq. (11)). In these units, the linear regime corresponds to length scales ℓ ∼ k^−1 with wave numbers k < 0.2 h Mpc^−1, i.e. ℓ > 5 h^−1 Mpc. Notice that, if desired, one can easily rewrite the above perturbation equations (25) and (27) in conformal time simply by using f° = ḟ/a for any f. We have obtained three basic sets of perturbation equations, (17), (25) and (27), for the four kinds of perturbation variables (ĥ(t, k), δρ_N(t, k), δp_N(t, k), θ_N(t, k)).
It is thus clear that the evolution of the cosmic perturbations can be completely specified only after we assume some relation between the pressure perturbation δp_N and the density perturbation δρ_N for each fluid. If the perturbations are adiabatic, then that relation is simply

δp_N = c^2_{a,N} δρ_N, (29)

where c^2_{a,N} is the adiabatic speed of sound for each fluid, defined as

c^2_{a,N} = ṗ_N/ρ̇_N, (30)

where the dot differentiation here is with respect to whatever definition of time. Notice that if the various components had an equation of state (EOS) of the form p_N = w_N ρ_N, with constant EOS parameter w_N, then c^2_{a,N} = w_N. However, even in this case the mixture has a variable effective EOS parameter, as we will see in the next section. On the other hand, if the perturbations are non-adiabatic, there is an entropy contribution to the pressure perturbation [36],

δp_N = c^2_{a,N} δρ_N + p_N Γ_N, (31)

where Γ_N ≡ (δp_N)_{non-adiabatic}/p_N is the intrinsic entropy perturbation of the Nth component, representing the displacement between hypersurfaces of uniform pressure and uniform energy density [37]. For covariantly conserved components, a gauge-invariant relationship between δp_N and δρ_N for a general non-adiabatic stress is given by [37]-[40] as Eq. (32), where c^2_{s,N} can be regarded as a rest-frame speed of sound. We will refer to c^2_{s,N} as the effective speed of sound, in the sense that we treat the cosmic fluid effectively as hydrodynamical matter. Since (32) is gauge invariant, the perturbed quantities in this expression can be computed, in particular, within the synchronous gauge. In this way, we can consistently substitute (32) in the equations (17), (25) and (27) to eliminate the perturbation δp_N. This allows us, finally, to solve for the three basic sets of perturbation variables.

IV. PERTURBATIONS FOR A COMPOSITE DE FLUID WITH A VARIABLE EFFECTIVE EQUATION OF STATE
In this section, we apply the linear matter and dark energy density perturbations to a general class of models in which the DE fluid is a composite and covariantly self-conserved medium and matter is also canonically conserved. From Eq. (23), in the matter dominated epoch (p_M = 0), the matter component ρ_M satisfies

ρ'_M + (3/a) ρ_M = 0.

Here we found it convenient to trade the differentiation with respect to the cosmic time (°) for the differentiation with respect to the scale factor; the latter is denoted by a prime, f' ≡ df/da. The scale factor is related to the cosmological redshift z by a(z) ≡ 1/(1 + z), where we define a(0) ≡ a_0 = 1 at the present time. It follows that the normalized matter density evolves as

Ω_M(a) = Ω_M^0 a^{−3},

where Ω_M^0 is the corresponding current value. Since the total DE is also globally conserved, from Eq. (23) we also obtain

ρ'_D + (3/a) (1 + w_e) ρ_D = 0, (36)

where w_e is the effective equation of state (EOS) parameter and ρ_D = ρ_1 + ρ_2 + ... is the total density of the multicomponent DE fluid. For a composite DE model, in which the DE is a mixture of fluids with individual EOS p_i = ω_i ρ_i (i = 1, 2, ..., n), the effective EOS parameter is defined as

w_e = (Σ_i ω_i ρ_i)/(Σ_i ρ_i). (37)

The Hubble expansion in terms of the normalized densities, in the matter dominated period, follows from (19):

H^2(a) = H_0^2 [Ω_M(a) + Ω_D(a)], (38)

where Ω_D is the normalized total DE density, Ω_D(a) ≡ ρ_D(a)/ρ_c^0. The non-adiabatic perturbed pressure (32) for the total DE component can be written in terms of the effective EOS as Eq. (39), where we have omitted for simplicity the subindex 'D' from the adiabatic and effective speeds of sound of the DE (i.e. c^2_a ≡ c^2_{a,D}, c^2_s ≡ c^2_{s,D}; this convention will be used throughout the text).
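Since Eq. (37) is simply the total pressure over the total density of the mixture, it is straightforward to evaluate. The snippet below computes w_e for a ΛXCDM-like composition of a CC term (w = −1) and an X component; the density values are hypothetical, for illustration only.

```python
# Minimal sketch of the composite-DE effective EOS of Eq. (37):
# w_e = sum_i(w_i * rho_i) / sum_i(rho_i), i.e. p_D / rho_D.
def effective_eos(components):
    """components: list of (w_i, rho_i) pairs; returns w_e = p_D / rho_D."""
    p_d = sum(w * rho for w, rho in components)
    rho_d = sum(rho for _, rho in components)
    return p_d / rho_d

# A running-Lambda part (w = -1) plus an X component with w_X = -0.85;
# the densities (in arbitrary units) are made up for the example:
print(effective_eos([(-1.0, 0.5), (-0.85, 0.2)]))  # ~ -0.957
```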
The units are taken such that the light speed c = 1 and ℏ = 1, so that the Planck scale is defined by M_P = G^{−1/2} = 1.22 × 10^19 GeV. In these units, usually 0 ≤ c^2_s ≤ 1 for a general DE model. In this range, one can show that for a constant EOS parameter there is a small suppression of the DE fluctuation δρ_D as c^2_s increases [37]. A near-zero (but non-vanishing) sound speed today is possible in models like k-essence, for example, in which the EOS parameter is positive until the matter-dominated epoch triggers a change to negative pressure; in this kind of models, it is even possible to have c^2_s > 1, a regime for which the growth of the DE density perturbations is suppressed [42]. In a non-perfect fluid, spatial inhomogeneities in T_µν imply shear viscosity in the fluid. In this case, a possible contribution to shear through a "viscosity parameter" c^2_vis should also be taken into account [43]. In principle, c^2_s is an arbitrary parameter. Nevertheless, the limit where (c^2_s, c^2_vis) → (1, 0) corresponds exactly to a scalar field component with canonical kinetic term [44]. Using the total DE conservation law (36), we can write the total DE adiabatic sound speed (30) as

c^2_a = w_e − (a w'_e)/(3 (1 + w_e)). (40)

The perturbed equations (25) and (27) for the (conserved) matter component (for which p_M = δp_M = 0) can be written as differential equations in the scale factor, Eqs. (41) and (42), where δ_M ≡ δρ_M/ρ_M is the relative matter fluctuation (density contrast). According to Eq. (42), the matter velocity gradient is decaying (θ_M ∝ a^−2). Assuming the conventional initial condition θ_M^0 ≡ θ_M(a = 1) = 0, we have θ_M(a) = 0 (∀a). So, the perturbed matter set of coupled equations (41) and (42) yields the simple relation of Eq. (43) between ĥ and the matter density contrast. Let us also define the relative fluctuation of the DE component, δ_D ≡ δρ_D/ρ_D. Using the non-adiabatic perturbed pressure (39) and the DE conservation law (36), we can write the perturbed equations (25) and (27) for the self-conserved DE fluid as Eqs. (44)-(46), where in the last equation we have used (40) to eliminate c^2_a. Moreover, from (45) one can see that a negligible DE sound speed (c^2_s ≈ 0) causes the velocity gradient to decay (θ_D ∝ a^−2), as in the case of matter [Eq. (42)]. If we assume the conventional initial conditions θ_M^0 = θ_D^0 = 0, we have θ_M = θ_D = 0 (∀a). In this case, the total DE fluid is comoving with the matter as long as the Universe and perturbations evolve, which is a very particular case. Actually, for this case, the k (scale) dependence disappears from the equations. On the other hand, from Eq. (45) one can see that, if we neglect the DE perturbations, δρ_D ≈ 0, for a constant c^2_s we obtain again θ_M = θ_D = 0 and the scale independence. However, δρ_D modifies the evolution of the metric fluctuations according to the perturbed Einstein equation (17); and, in turn, this causes the corresponding evolution of the matter perturbations through Eq. (43). We can write down the appropriate form of the perturbation equation as follows. First, we define the "instantaneous" normalized densities at a cosmic time t, Ω̃_M(a) ≡ ρ_M(a)/ρ_c(a) and Ω̃_D(a) ≡ ρ_D(a)/ρ_c(a), with ρ_c(a) ≡ 3H^2(a)/(8πG). Next we use Eq. (43) to eliminate ĥ from (46). With the help of Ω̃_M + Ω̃_D = 1 and Ω̃_D/Ω̃_M = r(a), where r(a) is the "cosmic coincidence ratio" (1) between the DE and matter densities, we may finally rewrite (46) as the second-order inhomogeneous equation (50). If we neglected the DE fluctuations (δ_D = 0, θ_D = 0), the r.h.s. of that equation would vanish. Under these conditions, one would be left with a decoupled, second-order, homogeneous differential equation that determines the matter perturbations δ_M.
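In that decoupled limit, the growth suppression can be checked numerically. The sketch below integrates the standard linear growth equation for pressureless matter in the presence of a smooth DE background with constant w_e, i.e. the textbook counterpart of the homogeneous limit just described, not the paper's full Eq. (50); all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard linear growth equation in the scale factor a (delta_D = 0):
#   d'' + [3/a + (dE^2/da)/(2 E^2)] d' - (3/2) Om0 / (a^5 E^2) d = 0,
# with E^2 = H^2/H0^2 for flat matter + constant-w_e DE. It illustrates
# the suppression delta_M ~ a^n with n < 1 when Omega_D > 0.
Om0, w_e = 0.3, -0.95           # illustrative parameters

def E2(a):
    return Om0 * a**-3 + (1.0 - Om0) * a**(-3.0 * (1.0 + w_e))

def rhs(a, y):
    d, dp = y                   # delta_M and d(delta_M)/da
    e2 = E2(a)
    dlnE2 = (-3.0 * Om0 * a**-4
             - 3.0 * (1.0 + w_e) * (1.0 - Om0)
             * a**(-3.0 * (1.0 + w_e) - 1.0)) / e2
    dpp = -(3.0 / a + 0.5 * dlnE2) * dp + 1.5 * Om0 * a**-5 / e2 * d
    return [dp, dpp]

a = np.linspace(1e-3, 1.0, 500)
sol = solve_ivp(rhs, (a[0], a[-1]), [a[0], 1.0], t_eval=a, rtol=1e-8)
growth = sol.y[0][-1] / sol.y[0][0]
print(f"delta_M grows by ~{growth:.0f}x; pure EdS would give {1.0/a[0]:.0f}x")
```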
As could be expected, the resulting equation coincides with Eq. (2.16) of Ref. [23], where the approximation of neglecting the DE perturbations was made from the start, as an intermediate step to investigate the amount of linear growth of the matter perturbations and to put constraints on the parameter space of the ΛXCDM model. This procedure was called "effective" in that reference, since all the information about the DE is exclusively encoded in the non-trivial EOS function w_e = w_e(a).

Let us write the homogeneous equation as follows,

    δ_M″ + (3/2a)(1 − w_e Ω̃_D) δ_M′ − (3/2a²) Ω̃_M δ_M = 0 ,          (51)

and let us assume a time interval that is not very large, such that Ω̃_D and w_e remain approximately constant. Looking for a power-law solution of (51) in the limit Ω̃_D ≪ 1, we find, for the growing mode,

    δ_M ∼ a^n ,   n = 1 − (3/5)(1 − w_e) Ω̃_D .                        (52)

Since w_e < 0 for any conceivable form of DE, this result shows very clearly that we should expect a growth suppression of matter perturbations (i.e. δ_M ∼ a^n with n < 1) whenever a (positive) DE density Ω̃_D is present within the horizon. Physically speaking, we associate this effect with the existence of negative pressure, which produces cosmological repulsion of matter.

However, since the DE density is non-constant in general (δ_D ≠ 0), the DE perturbations themselves (and not only the value of the background DE density) should act as a source for the matter fluctuations. This effect is precisely encoded in the inhomogeneous part of Eq. (50), i.e. in its r.h.s., which is in general non-zero for δ_D, θ_D ≠ 0. In order to better appreciate this effect, let us consider another simplified situation where an analytical treatment is still possible: let us assume an adiabatic regime (c_s² = c_a²) with a roughly constant EOS (w_e ≃ const.) at very large scales (for which k in Eq. (45) is very small, and hence the θ_D component becomes negligible). Under these conditions, Eq. (44) greatly simplifies:

    δ_D′ = (1 + w_e) ĥ/(2aH) = (1 + w_e) δ_M′ ,                        (53)

where in the second step we have used (43). The rates of change of the matter and DE perturbations therefore become proportional in this simplified setup. More precisely, we see from (53) that, for w_e ≳ −1 (quintessence-like behavior of the composite DE fluid), matter fluctuations growing with the expansion (δ_M′ > 0) trigger DE fluctuations that also grow with the expansion (δ_D′ > 0), whereas for w_e ≲ −1 (phantom-like behavior of the DE) we meet exactly the opposite situation: increasing fluctuations in the matter density (δ_M′ > 0) lead to decreasing fluctuations in the DE (δ_D′ < 0). Note that trivial integration of (53) gives δ_D = (1 + w_e) δ_M + C, where C is a constant determined by the initial conditions. For C = 0, one obtains a result that fits with the well-known adiabatic initial condition relating the density contrasts of generic matter and DE components [35],

    δ_D / (1 + w_D) = δ_M / (1 + w_M) ,                                (54)

where, for non-relativistic matter, we have w_M = 0, and w_D is, in this case, the effective EOS w_e of the composite DE fluid. Since a positive DE density always leads to cosmological repulsion, it follows from (53) that one should expect some inhibition (resp. enhancement) of the matter growth in the quintessence-like (resp. phantom-like) case.

Although the previous example illustrates the impact of the DE fluctuations on the matter growth in a simple situation, a more complete treatment is required in the general case. In practice, this means that we have to solve numerically the system (43)-(46) or, if desired, replace the last equation with the second-order inhomogeneous Eq. (50), whose r.h.s. depends on the density contrast and the velocity gradient of the DE, δ_D, θ_D ≠ 0; a numerical sketch of such a system is given below.
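The sketch below integrates a coupled matter-plus-DE perturbation system of this kind in the synchronous gauge. It is written in the standard Ma-Bertschinger fluid form with the rest-frame sound-speed parameterization, which need not coincide term by term with the paper's Eqs. (44)-(46); the background is a toy flat universe with matter plus a DE fluid of constant EOS, and all parameter values are illustrative.

```python
# Coupled matter + DE perturbations, synchronous gauge, standard fluid form.
# Units: H0 = 1; k is in units of H0 (k = 30 corresponds to ~0.01 h/Mpc).
import numpy as np
from scipy.integrate import solve_ivp

Om, w, cs2, k = 0.3, -0.9, 0.1, 30.0     # illustrative parameters
Od = 1.0 - Om
ca2 = w                                   # adiabatic sound speed for constant w

def H(a):                                 # Hubble rate in units of H0
    return np.sqrt(Om * a**-3 + Od * a**(-3.0 * (1.0 + w)))

def rhs(a, y):
    dM, hd, dD, thD = y                   # hd = conformal-time derivative of h
    Hc = a * H(a)                         # conformal Hubble rate
    fac = 1.0 / (a**2 * H(a))             # converts d/d(eta) to d/da
    OmM = Om * a**-3 / H(a)**2            # instantaneous density fractions
    OmD = Od * a**(-3*(1+w)) / H(a)**2
    dM_p = -0.5 * hd * fac                # pressureless matter, theta_M = 0
    # metric source (the small entropy part of delta p is neglected here):
    hd_p = (-Hc*hd - 3*Hc**2*(OmM*dM + (1+3*cs2)*OmD*dD)) * fac
    dD_p = (-(1+w)*(thD + 0.5*hd)
            - 3*Hc*(cs2 - ca2)*(dD + 3*Hc*(1+w)*thD/k**2)) * fac
    thD_p = (-Hc*(1 - 3*cs2)*thD + cs2*k**2*dD/(1+w)) * fac
    return [dM_p, hd_p, dD_p, thD_p]

ai = 1.0/500.0
y0 = [ai, -2.0*ai**2*H(ai), 0.0, 0.0]     # growing mode; delta_D = theta_D = 0
sol = solve_ivp(rhs, (ai, 1.0), y0, rtol=1e-8, atol=1e-12)
dM0, dD0 = sol.y[0, -1], sol.y[2, -1]
print(f"delta_M(a=1) = {dM0:.4f},  delta_D/delta_M today = {dD0/dM0:.2e}")
```

Consistently with the discussion below, the DE contrast obtained this way stays orders of magnitude below the matter contrast for sub-sound-horizon modes.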
Notice that the presence of DE overdensity perturbations (δ_D > 0) does not necessarily imply an inhibition of the corresponding matter perturbations, since the coefficient 1 + 3c_s² in front of δ_D on the r.h.s. of Eq. (50) is positive for non-adiabatic DE perturbations. Only for c_s² = c_a² do we meet the aforementioned possibility, because c_a² ≃ w_e is usually negative, unless w_e is rapidly decreasing with the expansion, see Eq. (40). In this sense, the discussion above, based on Eq. (53), is only valid at very large scales, specifically for k-modes whose length scale ℓ ∼ k^{−1} is outside the sound horizon (cf. section V)(5). At smaller scales, however, especially at scales inside the sound horizon, and for a general non-adiabatic regime, we need to solve the aforesaid complete system of equations for the basic set of perturbation variables for the metric, matter and DE: (ĥ, δ_M, δ_D, θ_D). In this way, we have extended the effective treatment of the DE perturbations presented in Ref. [23], and we are now ready to better assess the scope of its applicability. In section VII, we will apply this general formalism to the ΛXCDM model.

V. SOME GENERIC FEATURES OF THE DE PERTURBATIONS

In the present section, we summarize some characteristic features of the DE perturbations. Many properties which are, in principle, common to any model with a self-conserved DE will later be exemplified in section IX within the non-trivial context of the so-called ΛXCDM model [17,23].

A. Divergence at the CC boundary

In general, the EOS of the DE will be a dynamical quantity, w_e = w_e(a). In many models, the EOS may change from quintessence-like (−1 < w_e < −1/3) to phantom (w_e < −1) behavior, or vice versa, thereby acquiring the value w_e = −1 (also referred to as the 'CC boundary') at some instant of time. This is problematic since, as we shall see next, the perturbation equations diverge at that point. The divergence at the CC boundary is common to any DE model and has been thoroughly studied in the literature (see e.g. [45,46]). The problem can be readily seen by direct inspection of Eqs. (44) and (45). Note that, even though c_a² diverges at the crossing (cf. (40)), the combination (1 + w_e) c_a² remains finite and, therefore, Eq. (44) is well behaved. Thus, the problem lies exclusively in the (1 + w_e) factor in the denominator of (45). One might think that the divergence can be absorbed through a redefinition of the variables, but this is not the case, and getting around the difficulty is not always possible. It is well known that there is no way for a single scalar field model to cross the CC boundary [45]. The simplest way to avoid the problem is to assume two fields (Q, P): for instance, one that works as quintessence (w_Q > −1) and dominates the DE density until the CC-crossing point, beyond which the other field takes over the evolution with phantom behavior (w_P < −1), or the other way around; see [46,47] for a detailed discussion and specific parameterizations of w_e. As we will see, in the ΛXCDM model the additional restrictions needed to avoid this divergence will further constrain the physical region of the parameter space.

B. Unbounded growth for adiabatic DE perturbations

Another well-known problem is the unbounded growth of the DE perturbations for a negative squared speed of sound c_s².
As already mentioned in the previous section, in the adiabatic case we have c_a² ≃ w_e, which is in general negative as long as the EOS parameter is not varying too fast. As a result, the adiabatic DE perturbations may lead to explosive growth unless extra degrees of freedom are assumed (see e.g. [44] for a discussion). In order to better see the origin of the problem, let us rewrite Eqs. (44) and (45) in terms of the conformal time η, which is easily done by making use of ḟ = a²H f′ (for any f), the dot now denoting d/dη. As in section III, we define 𝓗 ≡ ȧ/a = aH. If we use the two equations above and Eq. (46) to obtain a second-order differential equation for δ_D, we arrive at

    δ̈_D = −c_s² k² δ_D + ... ,                                        (58)

where the second term on the r.h.s. represents the remaining terms, all of them linear in the perturbation variables. Assuming that the various perturbations are initially more or less of the same order, we see that the first term on the r.h.s. of (58) dominates provided

    c_s² k² ≫ 𝓗² .                                                    (59)

Notice that, for a constant sound velocity, this condition simply tells us that the wavelength of the modes satisfies λ ≪ c_s 𝓗^{−1}. Eq. (59) is the generalization of this condition to arbitrary sound speed, in which case the sound horizon is given by

    λ_s(η) = ∫_0^η c_s(η̃) dη̃                                          (60)

and constitutes a characteristic scale for the DE perturbations. As we will see next, the DE is expected to be smooth at scales well below it [43,48]. For scales well inside the sound horizon, (58) becomes the equation of a simple harmonic oscillator, whose solution is (in what follows, we assume constant c_s² for simplicity):

    δ_D(η) = C_1 e^{i c_s k η} + C_2 e^{−i c_s k η} ,                   (61)

where C_1 and C_2 are constants. We see that, for c_s² < 0 (i.e. imaginary c_s), and neglecting the decaying mode, the DE perturbations grow exponentially. Obviously, this situation is unacceptable for structure formation(6). On the other hand, if c_s² > 0, the DE density contrast oscillates. Since δ_M typically grows like the scale factor a, this ensures that the ratio δ_D/δ_M ∼ δ_D/a → 0 with the expansion. In other words, the DE is going to be a smooth component (as is usually assumed) as long as we are well inside the sound horizon. This feature is treated in more depth in the following section.

C. Smoothness of DE below the sound horizon

As a matter of fact, Eq. (58) is an oversimplification: in addition to the term proportional to δ_D, we also have one depending on its first derivative. So we can write that equation more precisely as

    δ̈_D + D_1 δ̇_D + D_2 δ_D = 0 ,

which gives us not just a simple, but a damped harmonic oscillator. The coefficients D_1 and D_2 are, in general, functions of the conformal time η. So the DE density contrast does not only oscillate; its amplitude also decreases with time. Indeed, it was shown in [43] that the quantity

    δ̂_D = δ_D + 3𝓗 (1 + w_e) θ_D / k² ,

which corresponds to the density contrast in the DE rest frame, oscillates with an amplitude A that decreases with the expansion. The damped oscillations of the DE density contrast are clearly seen in the ΛXCDM model, as we will show in section IX.

Finally, we may ask ourselves whether the scales relevant to the LSS surveys [4] are well inside the sound horizon or not. Note that, in a matter-dominated Universe with negligible CC term and constant c_s, we have H² = H_0² Ω_M^0 a^{−3}, and the sound horizon (60) takes the simple form

    λ_s(a) = 2 c_s √a / ( H_0 √(Ω_M^0) ) .                             (65)

Thus, in general, we expect the size of the sound horizon at present (a_0 = 1) to be roughly of the order of the Hubble length H_0^{−1}.
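The dichotomy just described, oscillation for c_s² > 0 versus exponential growth for c_s² < 0, can be seen directly from the sub-horizon limit (58) with a toy integration; this is only the simple-oscillator limit with all coefficients frozen, not the full system.

```python
# Toy version of Eq. (58) deep inside the sound horizon:
#   d2(delta)/d(eta)2 = -cs2 * k^2 * delta, with frozen coefficients.
# cs2 > 0 oscillates; cs2 < 0 blows up exponentially.
import numpy as np
from scipy.integrate import solve_ivp

k, eta = 30.0, np.linspace(0.0, 2.0, 5)
for cs2 in (+0.1, -0.1):
    sol = solve_ivp(lambda t, y: [y[1], -cs2 * k**2 * y[0]],
                    (eta[0], eta[-1]), [1e-3, 0.0], t_eval=eta, rtol=1e-8)
    print(f"cs2 = {cs2:+.1f}: |delta_D| =", np.abs(sol.y[0]).round(6))
# The cs2 = -0.1 run grows like exp(|c_s| k eta), which is unacceptable
# for structure formation, as the text explains.
```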
On the other hand, the observational data on the linear regime of the matter power spectrum lie in the range 0.01 h Mpc^{−1} < k < 0.2 h Mpc^{−1} [4], which corresponds to length scales ℓ ∼ k^{−1} in the interval (600 H_0)^{−1} < ℓ < (30 H_0)^{−1}, hence well below the sound horizon (at least for c_s² not too close to 0). Therefore, according to the previous discussions, we expect the DE density to be smooth at those scales, and indeed it will be so for the ΛXCDM model. Nevertheless, as we will see in section IX, the larger the scale ℓ, or the smaller the speed of sound c_s, the more important the DE perturbations become, because then (59) is no longer such a good approximation.

VI. THE ΛXCDM MODEL AS A CANDIDATE TO SOLVE THE COSMIC COINCIDENCE PROBLEM

The ΛXCDM model [17] provides an interesting way of explaining the so-called "cosmological coincidence problem" (cf. section II). The idea relies on the possibility of having a dynamical component X, called the "cosmon"(7), which interacts with a running cosmological constant Λ. If the matter components are canonically conserved, the composite DE "fluid" made out of X and the running Λ is a self-conserved medium too. The dynamics of the ΛXCDM universe is such that its composite DE may enforce the existence of a stopping point after many Hubble times of cosmological expansion. As a result, this modified FLRW-like universe can remain for a long while in a situation where the coincidence ratio (1) does not change substantially from the time when the DE became significant until the remote time in the future when the stopping point is attained. Subsequently, the Universe reverses its motion until the Big Crunch.

The total DE density and pressure of the ΛXCDM universe are obtained as the sums of the respective CC and X components:

    ρ_D = ρ_Λ + ρ_X ,   p_D = p_Λ + p_X .                              (66)

The evolving CC density ρ_Λ(t) = Λ(t)/8πG of the model is motivated by the quantum field theory formulation in curved space-time, in which the CC is a solution of a renormalization group equation. Following [13,15,16,19,49], the CC density emerges in general as a quadratic function of the expansion rate:

    ρ_Λ(H) = ρ_Λ^0 + (3ν/8π) M_P² (H² − H_0²) ,                        (67)

where ρ_Λ^0 = ρ_Λ(H = H_0) is the present value. The dimensionless parameter ν is given by

    ν = (σ/12π) M²/M_P² ,                                              (68)

where M is an effective mass parameter representing the average mass of the heavy particles of the Grand Unified Theory (GUT) near the Planck scale, after taking into account their multiplicities. Depending on whether they are bosons or fermions, σ = +1 or σ = −1, respectively. For example, for M = M_P one has |ν| = ν_0, where

    ν_0 ≡ 1/(12π) ≃ 2.6 × 10^{−2} .                                    (69)

On physical grounds, we expect this value of |ν| to be an upper bound for this parameter. In the next section, we will see whether we can pinpoint a region of parameter space compatible with this expectation. The energy density associated with the cosmon component X is obtained from the total DE conservation law (36) and the composite form (66), Eq. (70), where w_X is the effective EOS parameter of X, defined through p_X = w_X ρ_X. In principle, w_X could be a function of the scale factor. However, a simpler assumption, which allows a completely analytic treatment, is that the X component behaves as a barotropic fluid with a constant EOS parameter in one of the following two ranges: w_X ≳ −1 (quintessence-like cosmon) or w_X ≲ −1 (phantom-like cosmon). On the other hand, the EOS parameter of the running Λ component remains that of a cosmological constant, w_Λ = −1, i.e. p_Λ = −ρ_Λ.
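As a sanity check on the numbers just quoted, the snippet below evaluates the relations (68) and (69) as reconstructed above; the sample masses are illustrative.

```python
# nu = sigma * M^2 / (12 pi M_P^2), Eq. (68); nu_0 = 1/(12 pi), Eq. (69).
# Example masses are illustrative choices, not values fixed by the text.
import math

M_P = 1.22e19                       # Planck mass in GeV (c = hbar = 1)
nu = lambda M, sigma=+1: sigma * M**2 / (12.0 * math.pi * M_P**2)

print(f"nu_0 = {nu(M_P):.3e}")      # ~2.65e-2, the quoted upper bound
print(f"nu(M = 1e16 GeV) = {nu(1e16):.3e}")   # a GUT-scale mass: much smaller
```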
From these assumptions, it is easy to find the following relation between the effective EOS parameter of the total DE fluid (37) and the EOS parameter of the cosmon, w_X:

    [1 + w_e(a)] ρ_D(a) = (1 + w_X) ρ_X(a) .                           (73)

The normalized density of the cosmon component, Ω_X(a) = ρ_X(a)/ρ_c^0, can be obtained from the previous relations after solving the differential equation (70). In this equation, we have ρ_Λ′(a) = (3ν/8π) M_P² dH²/da from (67), and the derivative dH²/da = 2 H(a) H′(a) can be explicitly computed from (47) upon using (73) and (35). One finally obtains the differential equation (74). With the boundary condition that the current value of ρ_X is ρ_X^0, the solution of (74) can be written in the form (75),(8) in which we use the notations (76) and (77) for the parameters b and ǫ, respectively. As we will discuss in more detail below, the parameter ǫ must remain small (|ǫ| < 0.1) in order to be compatible with primordial nucleosynthesis.

(8) The fact that the evolution of the cosmon X is completely determined by the dynamics of the running ρ_Λ (67), together with the hypothesis of total DE conservation (70), implies that X cannot in general be assimilated to a scalar field, which would have its own dynamics. In fact, as we have already mentioned, X is to be viewed in general as an effective entity within the context of the effective action of QFT in curved space-time.

For ν = 0 the CC density (67) becomes constant. In this case, the two parameters (76) and (77) vanish and Eq. (75) boils down to the simplest possible form, characteristic of a self-conserved monocomponent system,

    Ω_X(a) = Ω_X^0 a^{−3(1+w_X)} .                                     (78)

It is only in this particular situation that the cosmon X could be a self-conserved scalar field with its own dynamics. But in general this is not so because, in QFT in curved space-time, we have good reasons to expect a running ρ_Λ [13,15,16], and hence ν ≠ 0. Therefore, if the total DE is to be conserved, the dynamics of X is no longer free and becomes determined as in (75). The normalized total DE density Ω_D = ρ_D/ρ_c^0 is given by

    Ω_D(a) = Ω_Λ(a) + Ω_X(a) ,                                         (79)

where the various current normalized densities satisfy the relation Ω_M^0 + Ω_D^0 = Ω_M^0 + Ω_Λ^0 + Ω_X^0 = 1, which may be called the "ΛXCDM cosmic sum rule". Using (73), the effective EOS of the DE in the ΛXCDM model can now be obtained explicitly,

    w_e(a) = −1 + (1 + w_X) Ω_X(a)/Ω_D(a) ,                            (80)

with Ω_X(a) and Ω_D(a) given by (75) and (79), respectively. The total DE density (79) varies in such a way that the ratio (1) can remain under control, which is the clue to solving the coincidence problem in a dynamical way [17]. Indeed, the explicit computation of this ratio yields

    r(a) = Ω_D(a)/Ω_M(a) = Ω_D(a) a³/Ω_M^0 ,                           (81)

and it can be bounded due to the existence of a maximum (triggered by the ∼ a^{−3(w_X − ǫ)} term contained in Ω_X(a), cf. (75)). Moreover, r(a) stays relatively constant (typically varying by no more than one order of magnitude) over a large fraction of the history of the Universe and for a significant region of the parameter space [17]. In contrast, in the standard concordance ΛCDM model the CC density remains constant, ρ_Λ = ρ_Λ^0, and the coincidence ratio grows unstoppably with the cubic power of the scale factor, r(a) = Ω_Λ^0 a³/Ω_M^0. In this scenario, it is difficult to explain why the constant ρ_Λ = ρ_Λ^0 is of the same order of magnitude as the matter density right now, ρ_M^0. Let us point out that the standard-model ratio is just the particular case of (81) for which ν = 0 (no running CC) and Ω_X^0 = 0 (no cosmon).
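For contrast with the bounded ΛXCDM ratio, one can check how quickly the standard-model ratio just quoted grows; a minimal sketch (parameter values are the priors used later in the text):

```python
# Coincidence ratio in the standard LCDM model: r(a) = (OL0/Om0) * a^3.
# In LXCDM the full r(a) of Eq. (81) stays bounded; here we only illustrate
# the unbounded LCDM behavior quoted in the text.
Om0, OL0 = 0.3, 0.7
r = lambda a: (OL0 / Om0) * a**3
for a in (0.5, 1.0, 2.0, 10.0):
    print(f"a = {a:5.1f}: r = {r(a):10.2f}")
# r(1) ~ 7/3 today, but r grows like a^3 without bound: this is the essence
# of the cosmic coincidence problem.
```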
Before closing this section, we would like to add a remark and some discussion concerning the quadratic evolution law (67) for the cosmological term. This equation was originally motivated within the framework of the renormalization group (RG) of QFT in curved space-time [12,13,14,18,49] (see also [15] for a short review). We point out a criticism of this approach that recently appeared in the literature [50]. While this criticism was rebutted at length in [16] (see below for a summary), it is fair to say that the question of whether a rigorous RG approach in cosmology is feasible is still open and remains part of the CC problem itself. Although our main aim here is not to focus on this fundamental issue, let us briefly sketch the situation along the lines of Ref. [16], to which we also refer the reader for a summary of the rich literature proposing different RG formulations of cosmology, both in QFT in curved space-time and in Quantum Gravity.

The RG method in cosmology treats the vacuum energy density as a running parameter and aims at finding a fundamental differential relation (renormalization group equation) of the form

    dρ_Λ/d ln µ = β_Λ(P; µ) ,                                          (82)

which is supposed to describe the leading quantum contributions to it, where β_Λ is a function of the parameters P of the effective action (EA) and µ is a dimensional scale. The appearance of this arbitrary mass scale is characteristic of the renormalization procedure in QFT, owing to the intrinsic breaking of scale invariance by quantum effects. The quantity ρ_Λ in (82) is a (µ-dependent) renormalized part of the complete QFT structure of the vacuum energy. Depending on the renormalization scheme, the scale µ can have a more or less transparent physical meaning, but the physics should be completely independent of it. Such (overall) µ-independence of the observable quantities is actually the main message of the RG; but, remarkably enough, the µ-dependence of the individual parts is also the key used by the RG technique to uncover the leading quantum effects. Essential to the RG method in cosmology is the understanding that, in order for the vacuum energy to acquire dynamical properties, we need a non-trivial external metric background. The dynamical properties of this curved background (e.g. the expanding FLRW space-time, characterized by the expansion rate H) are expected to induce a functional dependence ρ_Λ = ρ_Λ(H). The latter should follow from parameterizing the quantum effects with the help of the scale µ and then using some appropriate correspondence of µ with a physical quantity, typically with H in the cosmological context, although there are other possibilities [12,13,14]. In this way, one expects to estimate the subset of quantum effects reflecting the dynamical properties of the non-trivial background. Although µ cancels in the full EA, the RG method enables one to separate the relevant class of quantum effects responsible for the running. The procedure is similar to the RG analysis of a scattering process in QCD: the parameterization of the quantum effects in terms of µ is the crucial strategy to finally link them with the energy of the process, through the correspondence µ → q (where q is a typical momentum of the scattering process) at high energy. One can proceed in the same way in QED and in electroweak theory (although here one can adopt more physical subtraction schemes, if desired). The RG technique can actually be extended to the whole Particle Physics domain.
In cosmology, however, the situation is more complicated, partly because (as remarked above) the physical scale behind the quantum effects is not obvious. Still, one expects that it should be related to the expanding metric background, and hence the expansion rate H can be regarded as a reasonable possibility. On this basis, the heuristic arguments exhibited in [13,14,16,18,19,49], combined with the general covariance of the EA, suggest that the solution of the RG equation (82) should lead to the kind of quadratic law (67) that we have used. According to [16], the point of view of Ref. [50] is incorrect on two main accounts: first, because they try to disprove the running through the overall cancellation of the arbitrary scale µ in the EA; and second, because they neglect the essential role played by the non-trivial metric background. As emphasized in [16], the cancellation of µ in the EA cannot be argued as a valid criticism, because this fact is a built-in feature of the RG and was never questioned. If this were a real criticism, it would also apply to QED, QCD or any other renormalizable QFT, and nevertheless this is no obstacle to using the RG method in these theories as an extremely useful strategy to extract the dependence of the quantum effects on the physical energy scale of the processes, in particular through the so-called running coupling constants g_s = g_s(q) and e = e(q) of the strong and electromagnetic interactions. Moreover, in the absence of a non-trivial metric background there is no physical running of the vacuum energy, even though there is still µ-dependence of the various parts of the EA, and in particular of the CC, see e.g. [51]. Therefore, at the end of the day, such criticism seems to go against the essence of the RG method and its recognized ability to encapsulate the leading quantum effects on the physical observables. In cosmology, the principles of the RG should be the same, but there are two main stumbling blocks that prevent one from straightforwardly extending the method in practice [16], to wit: i) the aforesaid lack of an obvious/unique correspondence of µ with a cosmological scale defining the physical running, and, no less important, ii) the huge technical problems related to the application of the RG within a physical (momentum-dependent) renormalization scheme in a curved background. These difficulties are unavoidable here because we are dealing with QFT in the infrared regime and, moreover, the metric expansions cannot be performed on a flat background; indeed, there cannot be a flat background in the presence of a cosmological term! While these two problems remain without a completely satisfactory resolution, it is legitimate to use the phenomenological approach and educated guesses (e.g. the general covariance of the EA) to hint at the running law. This is the guiding principle followed in the aforementioned references, which led to Eq. (67). Finally, let us emphasize that, irrespective of whether such a law can be substantiated within the strict framework of the RG, the present study remains perfectly useful if one simply treats (67) as an acceptable phenomenological variation law, keeping also in mind that the addition of the cosmon may contribute to the resolution of the pressing cosmic coincidence problem.
VII. DARK ENERGY PERTURBATIONS IN THE ΛXCDM MODEL

In this section, we further elaborate on the conditions needed to bound the ratio r = r(a) and discuss the constraints on the parameter space, in particular the impact of the DE perturbations on these constraints. In section V, we discussed analytically some generic features of the DE perturbations. In principle, those results should apply to any model in which the DE is self-conserved. The ΛXCDM model, given its peculiarities (a composite DE, which results in a complicated evolution of the effective EOS), constitutes a non-trivial example of that kind of model. In this sense, it is interesting to use the ΛXCDM model to put our general predictions to the test. At the same time, this will allow us to impose new constraints on the parameter space of the model, improving its predictivity.

The parameter space of the ΛXCDM model was already tightly constrained in [23]. In that work, the matter density fluctuations were analyzed under the assumption that the DE perturbations could be neglected. As a first approximation this is reasonable since, as we have discussed in section V, the DE is expected to be smooth at scales well below the sound horizon. Thus, we will take the results of [23] as our starting point and check numerically the goodness of that approximation. Finally, we will further constrain the parameter space using the full approach presented in this work. Let us summarize the constraints that were imposed in [23]:

1. Nucleosynthesis bounds: As already mentioned, the ratio (81) between the DE and matter densities should remain relatively small at the nucleosynthesis time, in order not to spoil the Big Bang model predictions for the light-element abundances. Requiring this ratio to be less than 10% roughly translates into the condition |ǫ| < 0.1, where the parameter ǫ was defined in Eq. (77).

2. Solution of the coincidence problem: In Ref. [17], where the ΛXCDM model was originally introduced as a possible solution to the coincidence problem, it was shown that there is a large sub-volume of the total ΛXCDM parameter space for which the ratio r(a) remains bounded and near the current value r_0 (say, |r(a)| ≲ 10 r_0, where r_0 ∼ 7/3) during a large fraction of the history of the Universe. Thus, the fact that the matter and DE densities are comparable right now need no longer be seen as a coincidence. This solution of the coincidence problem is related to the existence of a future stopping (and subsequent reversal) of the expansion of the Universe within the relevant region of the parameter space.

3. Current value of the EOS parameter: Recent studies (see e.g. [2]) suggest that the value of the DE effective EOS should not be very far from −1 at present. Although these results usually rely on the assumption of a constant EOS parameter (and thus are not directly applicable to the ΛXCDM model), we adopted a conservative point of view and stuck to them by enforcing the condition |1 + w_e(a = 1)| ≤ 0.3 on the EOS function (80).

4. Consistency with LSS data: As said before, in [23] we studied the growth of matter density fluctuations under the assumption that the DE is smooth on the scales relevant to the linear part of the matter power spectrum. Given that the standard ΛCDM model provides a good fit to the observational data, we took it as a reference and imposed that the amount of growth (specifically, the matter power spectrum) of our model should not deviate by more than 10% from the ΛCDM value (the "F-test" condition).
This condition can also be justified from the observed galaxy fluctuation power spectrum; see [23] for more details. The upshot of that analysis was that there is still a sizable sub-volume of the three-dimensional ΛXCDM parameter space (ν, w_X, Ω_Λ^0) satisfying all the above conditions simultaneously.(9) The projections of this volume onto the three perpendicular planes (ν, Ω_Λ^0), (ν, w_X) and (Ω_Λ^0, w_X) are displayed in Figs. 1 and 2 (shaded regions). These regions were already determined in Ref. [23]. In the next section, we will discuss how the final set of allowed points becomes further reduced when we take into account the analysis of the DE perturbations.

(9) In Ref. [23], we took a prior for the normalized matter density at present, specifically Ω_M^0 = 0.3. This means that Ω_D^0 = 0.7 for a spatially flat Universe. For better comparison with those results, we keep this prior in the present work.

A. Divergent behavior at the CC boundary

As discussed in section V, if the effective EOS of the model crosses the CC boundary (w_e = −1) at some point in the past, the perturbation equations present a real divergence. Obviously, this circumstance makes the numerical analysis unfeasible at the points of parameter space affected by the singularity. In the absence of an apparent mechanism to get around the singularity, we are forced to restrict the parameter space to the subregion where the solution of the perturbation equations (43)-(46) is regular, namely by removing those points of the parameter space that present such a crossing in the past, because these points cannot belong to a well-defined history of the Universe. In the absence of a more detailed definition of the cosmon entity X, this new constraint is unavoidable. It should not be considered a drawback of the model, for even when one uses a collection of elementary scalar fields to represent the DE, one generally meets the same kind of divergent behavior as soon as the CC boundary is crossed, unless special conditions are arranged. In other words, even if the components of the DE are as simple as, say, elementary scalar fields with smooth behavior and well-defined dynamical properties (including an appropriately chosen potential), there is no a priori guarantee that the CC boundary can be crossed safely [45]. It is possible to concoct ingenious recipes, see e.g. [46], such that the perturbation equations become regular at the CC boundary, but the procedure is artificial in that one must introduce new fields (one quintessence-like and another phantom-like) satisfying special properties such that their respective EOS behaviors match up continuously at the CC-crossing. Apart from the fact that fields with negative kinetic terms are not very welcome in QFT, one cannot just replace the original fields with the new ones without at the same time changing the original DE model! As we will see below, in the ΛXCDM case the absence of CC-crossing projects out a significantly smaller region of the parameter space, and therefore the predictive power of the model becomes substantially enhanced. In section 6 of the first reference in [17], it was shown that the necessary and sufficient condition for having a CC-boundary crossing in the past within the ΛXCDM model is that the parameter b given in (76) be positive. As can readily be seen, this happens whenever ν and Ω_X^0 have the same sign (using the fact that w_X < 0 and |ǫ| ≪ |w_X| in the relevant region of the parameter space).
[Caption of Fig. 1: Shaded: region of the parameter space allowed by the constraints of Ref. [23], see also section VII of the present work. Striped: points that are not affected by the divergence at the CC boundary discussed in Sect. VII A. The final allowed region is the one both shaded and striped. As a result of considering the DE perturbations, the possible values of the parameters become strongly restricted, which implies a substantial improvement in the predictive power of the model.]

The new constraint must be imposed on each plane, the final allowed region being the corresponding intersection of the shaded and striped areas. At the end of the day, it turns out that most of the points in the shaded areas of Figs. 1 and 2 (viz. those allowed by the conditions stated in the previous section and the analysis of [23]) are ruled out by the new constraint emerging from the DE perturbations analysis, and hence we end up with a rather definite prediction for the values of the ΛXCDM parameters. In particular, we find from these figures that only small positive values of ν are allowed, at most of order ν ∼ 10^{−2}. Let us emphasize that this is in very good agreement with the theoretical expectations mentioned in section VI. Recall that, from the point of view of the physical interpretation of ν in Eq. (68), we expected ν in the ballpark of ν_0 ∼ 10^{−2} at most (see Eq. (69)), since the masses of the particles contributing to the running of the CC should naturally lie below the Planck scale.(10) Let us also mention that the interesting bounds on ν obtained in Ref. [52], on the basis of the so-called generalized Second Law of gravitational thermodynamics, would suggest that only effective masses near the Planck mass are allowed. However, that study was performed without including the non-trivial effect of the cosmon.

(10) Let us clarify that the tighter bounds on ν determined in Ref. [41] are possible only because, in the latter work, the DE is not conserved and there is no cosmon. As we have shown in [23], a running cosmological constant model without a self-conserved DE cannot solve the coincidence problem in a natural way because the required values of ν are too large and, hence, incompatible with the physical interpretation of this parameter.

[Caption of Fig. 2: Shaded: region allowed by the constraints of Ref. [23]. Striped: points that, in addition, are not affected by the divergence at the CC boundary discussed in Sect. VII A. The final allowed region is the one both shaded and striped.]

A very important consequence of the DE perturbative constraint is that the effective EOS of the DE can only be quintessence-like, i.e. −1 < w_e < −1/3. To prove this statement, let us start from Eq. (73). For the current values of the parameters, this equation can be rewritten as

    (1 + w_e^0) Ω_D^0 = (1 + w_X) Ω_X^0 ,                              (83)

where w_e^0 ≡ w_e(a = 1) is the value of the effective EOS parameter at the present time. Looking at Figs. 2a and 2b, we notice two relevant features: first, the cosmon component is necessarily phantom-like (w_X < −1) in the region allowed by the DE perturbations; and second, its energy density at present is negative, namely Ω_X^0 = 0.7 − Ω_Λ^0 < 0, because from Fig. 2b we have Ω_Λ^0 > 0.7. Therefore, since the r.h.s. of (83) is constrained to be positive and Ω_D^0 = 0.7 > 0, we are forced to have w_e^0 > −1. Moreover, the fulfillment of this condition at present implies its fulfillment in the past, i.e. w_e(a) > −1 (∀a ≤ 1); otherwise there would have been a crossing of the CC boundary at some earlier time, which is excluded by the analysis of the DE perturbations. The upshot is that the EOS of the DE in the ΛXCDM model can only appear effectively as quintessence (q.e.d.).
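A one-line numerical check of the argument based on (83), using the allowed parameter point quoted later in section VIII (Ω_Λ^0 = 0.8, w_X = −1.6, with the Ω_M^0 = 0.3 prior):

```python
# Check of Eq. (83): (1 + w_e0) * Omega_D0 = (1 + w_X) * Omega_X0.
# Allowed point from the text: Omega_L0 = 0.8, w_X = -1.6, Omega_M0 = 0.3 prior.
OmD0, OmL0, wX = 0.7, 0.8, -1.6
OmX0 = OmD0 - OmL0                          # = -0.1 < 0: "phantom matter"
we0 = -1.0 + (1.0 + wX) * OmX0 / OmD0
print(f"Omega_X0 = {OmX0:+.2f},  w_e0 = {we0:.4f}")   # w_e0 ~ -0.914 > -1
# The r.h.s. of (83) is positive, so w_e0 > -1: effectively quintessence-like.
```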
In reality, it only mimics quintessence, of course, since its ultimate nature is not that of a genuine quintessence field: this DE medium is a mixture of running vacuum energy and a compensating entity that ensures full energy conservation of the compound system. DE components X with negative energy density are peculiar in cosmology since, in contrast to standard DE components, they satisfy the strong energy condition (like ordinary matter), and as a result the gravitational behavior of X is attractive rather than repulsive. Owing to this double resemblance to matter and to phantom DE (although with the distinctive feature ρ_X < 0), such components can be called "phantom matter" [17]. X being in general an effective entity, this "phantom matter" behavior is actually non-fundamental.

B. Adiabatic speed of sound in the ΛXCDM

In the equations (43)-(46), we assumed the most general case, in which the perturbations can be non-adiabatic. Moreover, we have shown that the adiabatic case usually leads to an unphysical exponential growth of the perturbations, as a result of c_a² in (40) being negative. Next we check that, indeed, the most common situation in the ΛXCDM model is to have c_a² < 0. Notwithstanding, adiabatic perturbations are not completely forbidden in the present framework, as there is a small region of the parameter space for which c_a² could be positive. From Eqs. (40) and (73), and making use of the DE conservation law (36), the adiabatic speed of sound of the ΛXCDM model can be cast as Eq. (84); with the help of (75), it can be rewritten as Eq. (85). We want to find the condition for this expression to be positive. Since (w_X − ǫ) < 0 (remember that w_X < 0 and |ǫ| < 0.1, due to the nucleosynthesis constraints), that condition takes the form (86). The cosmon energy density Ω_X(a) cannot vanish because, in that case, the perturbation equations would diverge. Indeed, Ω_X(a) = 0 corresponds to a CC-boundary crossing at some value of the scale factor in the past, cf. Eq. (80). Thus, Ω_X(a) being a continuous function, it must have had the same sign in the past as its present value, i.e. Ω_X^0/Ω_X(a) > 0 (∀a ≤ 1). In short, the final condition ensuring c_a² > 0 is given by (87). If this condition is satisfied, then c_a² > 0 holds for the entire past history of the Universe and, under these circumstances, the adiabatic equations may be used.

It turns out that the relation (87) can be satisfied in the ΛXCDM model, although only in a narrow range of the parameter space. In fact, from the definition of the parameter b in (76), the expectation that |w_X| = O(1), and neglecting ǫ, we see that (87) is approximately equivalent to (88). Given that ν was found to be positive and small (cf. Figs. 1-2) and, at the same time, Ω_X^0 < 0 (see the previous section), this condition does not leave much freedom within the allowed parameter space: roughly, −ν ≲ Ω_X^0 < 0. This narrow strip is, however, not necessarily negligible; e.g. if we take ν of order ν_0 ∼ 10^{−2} (cf. Eq. (69)), this possibility is still permitted in the parameter space, see Figs. 1-2. In such a case, the present cosmon density could still be of the order of, or larger (in absolute value) than, say, the current neutrino contribution to the energy density of the Universe (Ω_ν^0 ∼ 10^{−3}). No matter how tiny (in absolute value) the negative cosmon contribution to the energy density is, it suffices to take care of the cosmic coincidence problem along the lines we have explained.
Therefore, the adiabatic option is perfectly tenable, but the numerical analysis of the subsequent sections remains essentially the same (as we have checked) independently of whether the sound speed of the DE medium is adiabatic or not. For this reason, in what follows we assume the more general situation of non-adiabatic perturbations, with the understanding that adiabatic ones can do a similar job in the corresponding region of the parameter space.

VIII. THE MATTER POWER SPECTRUM

In this section, we compare the matter power spectrum predicted by the ΛXCDM model with the observed galaxy power spectrum measured by the 2dFGRS survey [4]. The ΛXCDM matter power spectrum is found by evolving the perturbation equations (43)-(46) from a = a_i to the present (a_0 = 1), where a_i ≪ 1 is the scale factor at some early time, well after recombination. In these equations, we must of course use the expansion rate (38) with the full DE density (79). In order to set the initial conditions at a = a_i, we use the prediction of the standard ΛCDM model. Indeed, the standard ΛCDM model provides a good analytical fit to the 2dFGRS observed galaxy power spectrum. Taking this fit as our starting point, we compute analytically the values of the ΛCDM perturbations at an arbitrary scale factor. Since the DE does not play an important role until very recently, we may assume that the initial matter and metric perturbations at a = a_i are the same for the ΛXCDM model as for the ΛCDM model. In the ΛCDM model, the growing mode of the matter fluctuations is proportional to the growth factor D(a),

    δ_M(a) ∝ D(a) ∝ H(a) ∫_0^a da′ / [a′ H(a′)]³ ,                     (91)

which reduces to D(a) = a deep in the matter-dominated epoch; for D(a) = a it renders the initial condition for the metric fluctuation. Later on, when the DE (i.e. ρ_Λ^0 > 0 in the ΛCDM) starts to play a role, the matter (and metric) fluctuations become suppressed. The suppression is given by the value of the growth factor D(a), which is no longer proportional to the scale factor.(11) From (91) it is clear that δ_M(a)/D(a) is a constant, which can be written as δ_M(a_i)/a_i at early times (when D(a_i) = a_i) and as δ_M(a_0)/D(a_0) at the present time. Therefore,

    δ_M(a_i) = a_i δ_M(a_0)/D_0 ,                                      (94)

    ĥ(a_i) = 2 H_Λ(a_i) a_i δ_M(a_0)/D_0 ,                             (95)

where D_0 ≡ D(a_0) and the subindex in H_Λ(a_i) has been added to emphasize that the initial value of the Hubble parameter is to be computed within the ΛCDM model. Note that, instead of setting the value of ĥ(a_i), we could have chosen to put initial conditions on the derivative of the density contrast, δ_M′. In that case, as is evident from (94), we would have δ_M′(a_i) = δ_M(a_0)/D_0. Then the initial value of the metric fluctuation is constrained by Eq. (43) to be ĥ(a_i) = 2 H_ΛX(a_i) a_i δ_M(a_0)/D_0, where now H_ΛX(a_i) is the ΛXCDM value of the Hubble function. Note that this value of ĥ(a_i) is not exactly the same as that in (95), since H_Λ(a_i) and H_ΛX(a_i) are not identical. However, the difference being rather small (as we have checked numerically), the behavior of the perturbations does not depend significantly on this choice.

(11) Notice that, if Ω̃_Λ is small and essentially constant, the growth factor takes the approximate form D(a) ∼ a^n, with n = 1 − 6Ω̃_Λ/5 < 1, as follows from (52) for w_e = −1, or from (89). This demonstrates, if only roughly, the suppression behavior in an explicit analytic way. In the general ΛCDM case, however, the solution for the growing mode is given by (91), in which H is the full expansion rate of the standard model.

The equations (94) and (95) give us the initial conditions at a_i = 1/500 for the matter and metric perturbations in terms of the density contrast today, δ_M(a_0).
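A sketch of this initial-condition construction, using the standard ΛCDM growing-mode integral with which we reconstructed (91); the normalization prefactor and the sample value of δ_M(a_0) are illustrative choices.

```python
# LCDM growth factor D(a) ~ H(a) * Integral_0^a da'/(a' H(a'))^3 (cf. (91)),
# normalized so that D(a) -> a deep in matter domination, plus the initial
# condition delta_M(a_i) = a_i * delta_M(a_0)/D_0 of Eq. (94).
import numpy as np
from scipy.integrate import quad

Om0, OL0 = 0.3, 0.7
H = lambda a: np.sqrt(Om0 * a**-3 + OL0)        # H in units of H0

def D(a):
    integral, _ = quad(lambda x: (x * H(x))**-3, 0.0, a)
    return 2.5 * Om0 * H(a) * integral          # prefactor enforces D(a)=a early on

ai, dM0 = 1.0/500.0, 1.0                        # illustrative delta_M today
D0 = D(1.0)
print(f"D(a_i)/a_i = {D(ai)/ai:.4f}  (should be ~1 deep in matter domination)")
print(f"D_0 = {D0:.4f},  delta_M(a_i) = {ai * dM0 / D0:.3e}")
```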
We associate the latter with the 2dFGRS observed galaxy power spectrum as fitted in the ΛCDM model, as detailed below. The matter power spectrum of the ΛCDM model can be approximated as [41]

    P_Λ(k) = A k T²(k) g²(Ω_T^0) ,                                     (96)

where Ω_T^0 = Ω_M^0 + Ω_Λ^0. It assumes a scale-invariant (Harrison-Zeldovich) primordial spectrum, as generically predicted by inflation. This primordial spectrum is modified when the physical properties of the different constituents of the Universe, in particular the interactions between them, are taken into account. All these effects are encoded in the scale-dependent transfer function T(k), which describes the evolution of the perturbations through the epochs of horizon crossing and the radiation/matter transition. The growth at late times, which in the ΛCDM model is independent of the wavenumber, is described by the growth function g(Ω). Finally, A is a normalization factor. The transfer function can be accurately computed by solving the coupled system formed by the Einstein and Boltzmann equations. Although a variety of numerical fits have been proposed in the literature, here we use the so-called BBKS transfer function [54]:

    T(q) = [ln(1 + 2.34q)/(2.34q)] [1 + 3.89q + (16.1q)² + (5.46q)³ + (6.71q)⁴]^{−1/4} ,   (97)

where q ≡ k/(Γ h Mpc^{−1}) and Γ is Sugiyama's shape parameter [35,55],

    Γ = Ω_M^0 h exp(−Ω_B^0 − √(2h) Ω_B^0/Ω_M^0) .

On the other hand, for the growth function we assume the approximation [56]

    g(Ω) = (5/2) Ω_M [ Ω_M^{4/7} − Ω_Λ + (1 + Ω_M/2)(1 + Ω_Λ/70) ]^{−1} .

The normalization A can be expressed, via Eq. (101), in terms of Q_rms−PS, the quadrupole amplitude of the CMB anisotropy (see below for more detailed explanations), the Hubble radius l_H ≡ H_0^{−1} ≃ 3000 h^{−1} Mpc, and the present CMB temperature T_0 ≃ 2.725 K. Therefore, the value of the normalization factor A could in principle be inferred from measurements of the CMB. However, we have obtained it by fitting the power spectrum (96) to the 2dFGRS observed galaxy power spectrum [4], as discussed below. We assume h = 0.7 for the reduced Hubble parameter and a spatially flat Universe with Ω_M^0 = 0.3 (hence Ω_Λ^0 = 0.7 for the flat ΛCDM model), in order to be consistent with our assumptions in previous analyses [17,23]. The fit to the 2dFGRS observed galaxy power spectrum P_2dF(k) is obtained assuming a matter budget composed of a baryonic part Ω_B^0 = 0.04 and a dark matter contribution Ω_DM^0 = 0.26. To calculate the best fit, we use the formula (96) and minimize the χ² statistic with respect to the normalization A. There are 39 values of k in the 2dFGRS data, so the number of degrees of freedom is n_dof = 38. We find the best-fit value (103), with χ² = 0.43. From (101) we see that this value of A implies Q_rms−PS ≃ 20.85 µK.

Let us now clarify that Q_rms−PS is not the observed quadrupole CMB anisotropy (usually denoted Q_rms), but rather the value derived from a fit to the entire CMB power spectrum (PS). For a power-law spectrum with n = 1 (i.e. for a scale-invariant PS), the COBE team obtained Q_rms−PS = 18 ± 1.6 µK [57]; moreover, they found that the observed Q_rms is smaller than the fitted Q_rms−PS. Whether this is a chance result of cosmic variance or reflects the physical cosmology is not known [57]. Our fitted value of Q_rms−PS falls within the 2σ range of the corresponding COBE value (although, when the quadrupole itself is not used in the fit, the COBE uncertainties become larger [57]). As several authors have noted [55,58], such a normalization may be inadequate for models with a cosmological constant, given that the CMB spectra of Λ-dominated models are quite different from a simple power law, especially at large scales (low multipoles).
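The pieces just described assemble into a few lines of code. The BBKS and Sugiyama formulas are as reconstructed above; the mock data points stand in for the 39 2dFGRS measurements, which we do not reproduce here, so the fitted numbers are purely illustrative.

```python
# LCDM spectrum shape P(k) ~ A k T^2(k), cf. Eq. (96), with the BBKS transfer
# function and Sugiyama's shape parameter, plus the exact chi^2 minimization
# over the normalization A (the statistic is quadratic in A).
import numpy as np

h, OmM, OmB = 0.7, 0.3, 0.04
Gamma = OmM * h * np.exp(-OmB - np.sqrt(2*h) * OmB / OmM)   # Sugiyama

def T_BBKS(k):                          # k in h/Mpc
    q = k / Gamma
    return (np.log(1 + 2.34*q) / (2.34*q)
            * (1 + 3.89*q + (16.1*q)**2 + (5.46*q)**3 + (6.71*q)**4)**-0.25)

P_shape = lambda k: k * T_BBKS(k)**2    # spectrum up to the normalization A

k = np.geomspace(0.01, 0.2, 39)         # 39 k-values spanning the 2dFGRS range
P_obs = 2.0e4 * P_shape(k) * (1 + 0.05*np.sin(8*k))   # mock "observed" points
sigma = 0.1 * P_obs                                    # mock 10% errors

A = np.sum(P_obs*P_shape(k)/sigma**2) / np.sum(P_shape(k)**2/sigma**2)
chi2 = np.sum(((P_obs - A*P_shape(k)) / sigma)**2)
print(f"best-fit A = {A:.4g}, chi^2 = {chi2:.2f} (n_dof = {len(k)-1})")
```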
For instance, in [58] an alternative normalization is proposed for the ΛCDM model, which for h = 0.8 and Ω_Λ^0 = 0.7 yields Q_rms−PS = 22.04 µK, with an error of the order of 11%; this is in agreement with our result. In general, one can find a number of different values of Q_rms−PS in the literature, depending on the kind of analysis performed or the data set used, and this is why we preferred to compute the normalization directly from a fit to the matter power spectrum. Finally, let us emphasize that our value of Q_rms−PS lies within the 95% confidence interval for the observed quadrupole anisotropy (Q_rms) of both COBE [57] and WMAP [59]. Therefore, we will adopt (94) and (95) as the initial conditions for the matter and metric perturbations, identifying δ_M(a_0) with δ_M(k) from the formula (96) with the fitted coefficient (103).

B. The ΛXCDM matter power spectrum

The procedure discussed above sets the initial conditions for the matter and metric perturbations in the ΛXCDM model. However, since the ΛCDM model does not include DE perturbations, we should set independent initial conditions on δ_D(a_i) and θ_D(a_i). As already discussed, the scales relevant to the matter power spectrum always remain well below the sound horizon (65), and we expect negligible DE perturbations at any time. Thus, the most natural choice for the initial values of the DE perturbations is

    δ_D(a_i) = 0 ,   θ_D(a_i) = 0 .                                    (104)

Admittedly, this is not the only reasonable choice. For instance, we could also have assumed the adiabatic initial condition (54) for the DE density contrast, i.e. δ_D(a_i) = [1 + w_e(a_i)] δ_M(a_i), with δ_M(a_i) given by Eq. (94). Again, we have checked that the evolution of the perturbations does not depend significantly on the particular initial condition used.

Assuming the initial conditions (94), (95) and (104) for the matter, metric and DE perturbations at a_i, respectively, we can solve the perturbed equations (43)-(46). Equivalently, we can solve (44), (45) to obtain (δ_D, θ_D) and then (50) to get the matter density fluctuations today, δ_M(k, a = 1), for any dark energy model. In particular, we can (as a consistency check) solve the perturbation equations for the ΛCDM model (in that case δ_D(a) = θ_D(a) = 0, so the only equations needed are (43) and (46)). In doing so, we recover exactly the spectrum P_Λ defined in (96).

Now we proceed to compute the spectrum of the ΛXCDM model, P_ΛX(k). In order to better compare the shapes of the different spectra and the goodness of their fit to the 2dFGRS observed galaxy power spectrum, we normalize them at the smallest length scale considered, ℓ ∼ k^{−1} with k = 0.2, taking the ΛCDM spectrum (96) as the reference. To this purpose, we introduce a normalization factor A_ΛX in the matter power spectrum, defined such that P_ΛX(k = 0.2) = P_Λ(k = 0.2). Notice that the factor A_ΛX quantifies the difference between the matter power spectrum of the model and that of the ΛCDM at the specific scale k = 0.2. Let us clarify that the reason for choosing this scale for the normalization is that, as discussed in section V, the smaller the length scale (i.e. the higher the value of k), the less important the DE perturbations are.

[Caption of Fig. 3: The 2dFGRS observed galaxy power spectrum compared with P_Λ(k) and P_ΛX(k): (a) for a set of allowed parameters (cf. Figs. 1-2), Ω_Λ^0 = 0.8, ν = ν_0 ≡ 2.6 × 10^{−2} and w_X = −1.6; the corresponding curves P_Λ(k) and P_ΛX(k) coincide in this case. (b) For a set of parameters not allowed by the F-test [23] (points in the striped, but non-shaded, region of our Fig. 1), Ω_Λ^0 = +0.35, ν = −0.2 and w_X = −0.6. In this case, P_ΛX(k) presents a slight deviation compared to P_Λ(k) at large scales (i.e. at small k). The lower set of curves in (b) displays the real (unnormalized) growth; see the text.]

Therefore, at k = 0.2, the matter power spectrum of the model
should not depend significantly on whether we consider the effect of the DE perturbations; in particular, it should be independent of the speed of sound c_s. At larger scales, however, the DE perturbations can be more significant and, as we shall see below, they may introduce some differences in the shape of the power spectrum, which are nevertheless small in the linear regime.

The ΛXCDM power spectrum was calculated for two fiducial values of the DE speed of sound, c_s² = 1 and c_s² = 0.1, and for several combinations of the parameters ν, w_X and Ω_Λ^0. For values of the parameters allowed in Figs. 1-2 (shaded and striped region) we find A_ΛX ≈ 1 (to within ∼10%). In Fig. 3a, we put together the 2dFGRS observed galaxy power spectrum, the ΛCDM spectrum, and the normalized and unnormalized ΛXCDM one, for the parameter set Ω_Λ^0 = 0.8, ν = ν_0 ≡ 2.6 × 10^{−2} and w_X = −1.6, which is allowed in Figs. 1-2. For these values, we obtain a normalization factor A_ΛX ≅ 1.1 and accurate agreement between the ΛXCDM power spectrum and P_Λ(k). This was expected, since we are assuming allowed values of the parameters, i.e. values already consistent with the LSS data according to the 'effective' approach used in [23] (cf. the discussion in section VII). Therefore, the power spectrum predicted by the ΛXCDM ought to be very close to the ΛCDM one, which is in fact what we have now substantiated by explicit numerical check.

However, for values of the parameters outside the allowed region of Fig. 1, the predicted matter power spectrum can differ significantly from the ΛCDM one, P_Λ(k). This occurs mainly for points that do not satisfy the "F-test" condition [23], even if the other observational constraints (namely the ones related to nucleosynthesis and the present value of the EOS, cf. section VII) are fulfilled. Recall that the F-test consists in requiring that the matter power spectrum of the model under consideration (in this case, the ΛXCDM model) differ from that of the ΛCDM by less than 10%, under the assumption that the DE perturbations can be neglected. Given the fact (explicitly analyzed here) that the DE perturbations should not play a very important role, it is reasonable to expect the F-test to remain approximately valid even when we do not neglect the DE perturbations. Thus, the ΛXCDM model should exhibit a large deviation in the amount of growth with respect to the ΛCDM precisely for those points failing the F-test. Points of this sort are the ones located in the striped region, but outside the shaded one, in Fig. 1. For these points, we should expect an anomalously large normalization factor A_ΛX (namely, the factor that controls the matching of the two overall shapes) and, at the same time, we may also observe an evident scale dependence of the power spectrum, i.e. a significant difference in the predicted shape as compared to the ΛCDM one. Such a potentially relevant scale dependence (or k-dependence) is introduced by the DE perturbations themselves through the last term on the r.h.s. of (39) and is eventually fed into equations (44)-(46). As a concrete example, let us consider Fig.
3b, where we compare the 2dFGRS observed galaxy power spectrum and P_Λ(k) with the ΛXCDM matter power spectrum P_ΛX(k) for the following set of parameters: Ω_Λ^0 = +0.35, ν = −0.2 and w_X = −0.6. These values fulfill the nucleosynthesis bound (constraint No. 1 in section VII); specifically, we have |ǫ| = 0.08 for these parameters (meaning that the DE density at the nucleosynthesis time represents roughly only 8% of the total energy density). They also satisfy the current EOS constraint (No. 3 in section VII): w_e^0 = −0.8. However, this choice of parameters largely fails to satisfy constraint No. 4, the F-test: indeed, we find F = 2.06, which implies that the discrepancy in the amount of growth with respect to the ΛCDM, when we neglect the DE perturbations, is more than 200%! As expected, for such a set of parameters we encounter a large normalization factor for the two fiducial DE sound speeds c_s² = 1 and c_s² = 0.1 used in our analysis (on average, A_ΛX ≃ 2.7). This is reflected in the evident gap between the upper and lower sets of curves in Fig. 3b. The lower set shows the real growth |δ_M(k)|² of the matter perturbations before applying the normalization factor. The normalization consists in the following: at the smallest scale available in the data, the ΛXCDM curves are shifted upwards until they match the standard ΛCDM prediction. Apart from the overall gap between the two sets of curves, we also find a significant shape deviation with respect to the standard ΛCDM model at large scales, as is evident in Fig. 3b. This feature is more clearly seen at small sound speeds; see the next section.

IX. MATTER AND DARK ENERGY DENSITY FLUCTUATIONS

As we have discussed in section V, the DE fluctuations δ_D should oscillate and eventually become negligible compared to the matter fluctuations δ_M, especially at small scales (inside the sound horizon). However, as also noted above, for values that significantly violate the F-test [23], the power spectrum and its shape can be noticeably different from those of the ΛCDM model (cf. Fig. 3b). This suggests that, under such circumstances, the DE density perturbations are not completely negligible, owing to the fact that the term that depends on k in the perturbation equations is also proportional to δ_D; see Eq. (45). In addition, there appears a suppression of the growth of matter fluctuations in comparison with the growth predicted by the ΛCDM model. This inhibition of matter growth is characteristic of cosmologies where the DE behaves quintessence-like, i.e. when the DE density decreases with the expansion, whereas phantom-like DE (increasing with the expansion) would cause the opposite effect (an enhancement of the power). A similar situation was also observed in Ref. [41] for models with a pure running Λ, where for ν > 0 (in which case Λ decreases with the expansion) there is an inhibition of growth, while for ν < 0 (when Λ increases with the expansion) there is an enhancement; see also [60,61] and [62] for other studies. Let us clarify that these differences in the amount of growth are present even if we neglect the DE perturbations, see [23]. In fact, the effect of the latter is very small, especially for allowed values of the parameters, and becomes noticeable only at large scales. At these scales, we find that the DE perturbations tend to compensate the suppression produced at the background level. This slight enhancement is greater the smaller the DE sound speed is. This feature can be appreciated in Fig.
4, where we compare the growth of the matter fluctuations at a large scale ℓ ∼ k^{−1} (with k = 0.01), as predicted by both the ΛCDM and ΛXCDM models, for the two fiducial DE sound speeds considered before and for the same values of the parameters as in Fig. 3. The growth of the matter density fluctuations in the ΛXCDM model agrees with the one predicted by the ΛCDM model (the dot-dashed, black line) in Fig. 4a, for the set of parameters in the allowed region, whereas in Fig. 4b we see the previously mentioned suppression for the set of parameters not satisfying the F-test. The former case can be compared with Fig. 5a, in which we have assumed the same set of allowed parameters; as expected, we find completely negligible DE fluctuations today and in the recent past, in agreement with the F-test assumption [23], which amounts to completely negligible DE fluctuations at large scales and a maximum 10% deviation from the ΛCDM growth of matter density fluctuations. On the other hand, values of the parameters not satisfying the F-test lead not only to a suppression of the growth of the matter density fluctuations, as shown in Fig. 4b, but also to larger DE fluctuations today and in the recent past, as shown in Fig. 5b.

Furthermore, as discussed in section V, the DE fluctuations are expected to oscillate at small scales and decay rapidly, which legitimates our assumptions for the initial conditions of the DE perturbations. We show these oscillations for an allowed set of parameters in Fig. 6. A similar behavior is obtained for values of the parameters not allowed by the F-test, and for both DE sound speeds c_s² = 1 and c_s² = 0.1. The amplitude of the DE density contrast starts negligible (∼10^{−3}) and rapidly decays to zero, as shown in Fig. 6.

In Fig. 7, we plot the present value of the DE perturbations as a function of the wavenumber for the two sets of allowed (Fig. 7a) and non-allowed (Fig. 7b) parameters used in the previous plots. We see that the DE perturbations are negligible at small scales (large k), whereas they become larger at larger scales. This is because, by increasing the scale, we get closer to the sound horizon, as discussed in section V. We also see that the DE perturbations are larger for the parameters not allowed by the F-test, which explains why the shape of the matter power spectrum differs from that of the ΛCDM in this case (cf. Fig. 3b). However, comparing with Fig. 4, we see that even at the largest explored scale (k = 0.01) the ratio δ_D/δ_M remains rather small, staying at the level of 10^{−3}. Finally, let us note that the DE density contrast can become negative with the evolution, as happens here for c_s² = 1.

As discussed in section V, the decay of the DE perturbations takes place once the term proportional to k² in (45) becomes dominant. This same term is also responsible for the exponential growth of the (DE) perturbations in the adiabatic case or, more generally, whenever c_s² is negative. In order to better appreciate its influence, it is useful to compare the evolution of the DE perturbations in the adiabatic case (for the most common situation, where c_a² < 0) and the non-adiabatic one (c_s² > 0) with the scenario in which c_s² = 0, since in the latter the term proportional to k² disappears from the equations. This is precisely what has been done in Fig. 8a for the allowed set of parameters used throughout this section.
That figure shows the evolution of the DE density contrast at a sufficiently large scale $\ell \sim k^{-1}$ for which the DE perturbations can be sizeable (namely at $k = 0.01$). We illustrate the effect for three different regimes of the speed of sound: $c_s^2 = c_a^2$ (with $c_a^2 < 0$), $c_s^2 = 0$ and $c_s^2 = 0.1$. The gray lines represent the evolution of $\delta_D$ when the last term in Eq. (44) is neglected, showing that the qualitative behavior of the perturbations in the adiabatic and $c_s^2 = 0.1$ cases does not stem from that term. For $c_s^2 = 0.1$, the scale considered is initially (i.e. at $a_i = 1/500$) larger than the sound horizon, and thus the term proportional to $k^2$ is negligible at the beginning of the evolution. The same is true for the adiabatic case because, in the asymptotic past, the effective EOS of the ΛXCDM model resembles that of matter-radiation ($w_e(a) \to w_m$ for $a \to 0$) [17] and, thus, we have $c_s^2 = c_a^2 \simeq w_e \simeq 0$ in the matter-dominated epoch. (We recall that throughout our discussion we remain in the matter epoch, matter-radiation equality occurring at $a \sim 10^{-4}$.) Therefore, the term proportional to $k^2$ is initially unimportant in all three cases, and this makes the perturbations evolve in a nearly identical fashion during these first stages, as can be clearly seen in Fig. 8a. As the evolution continues, the curve corresponding to $c_s^2 = 0.1$ begins to depart from the others. This occurs mainly from the instant when the sound horizon is crossed, i.e. when the wavelength of the $k$-mode becomes comparable to the sound horizon; this instant can be defined through the condition $k\lambda_s = \pi$, similarly to [43]. Then the term proportional to $k^2$ begins to dominate, which in turn makes the DE velocity gradient $\theta_D$ increase rapidly and the DE perturbations decay. Later on, the term proportional to $k^2$ becomes important also in the adiabatic case. Due to the different sign ($c_a^2 < 0$), the effect it triggers is now opposite to the one observed in the $c_s^2 = 0.1$ case: instead of becoming stabilized, the DE perturbations initiate an exponential growth. The previous features can be further assessed quantitatively by comparing the numerical importance (in absolute value) of the two terms inside the curly brackets in (44). In Fig. 8b, we plot the ratio between these two terms for both the adiabatic case (with $c_a^2 < 0$) and the non-adiabatic situation ($c_s^2 = 0.1$); note that the term proportional to $H^2/k^2$ may be neglected for the sub-Hubble perturbations we are dealing with. [Figure 8 caption: (a) Evolution of $\delta_D$ for the $c_s^2 = 0.1$ and adiabatic cases. For the former, the DE perturbations begin to decay after the sound horizon crossing (characterized by the condition $k\lambda_s = \pi$), whereas in the adiabatic case $\delta_D$ starts to grow exponentially at $a \simeq 0.2$. The last term in Eq. (44) may be neglected (gray lines) without altering the qualitative behavior, which is triggered by the $\theta_D$-term and, ultimately, by the one proportional to $k^2$ in (45). (b) Comparison of the two terms inside the curly brackets in (44) for the $c_s^2 = 0.1$ and adiabatic cases; when the $\theta_D$-term becomes important, the stabilization (resp. unbounded growth) of the non-adiabatic (resp. adiabatic) perturbations becomes manifest.] Comparison with Fig. 8a reveals that it is precisely when the term proportional to $\theta_D$ stops being negligible that the evolution of the perturbations begins to depart from the $c_s^2 = 0$ case; a toy numerical illustration of this sign dependence is sketched below.
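The role of the sign of $c_s^2$ can be illustrated with a deliberately simplified model. The sketch below is not the paper's full system (44)-(45); it integrates only the schematic sub-horizon mode equation $\ddot{\delta} + c_s^2 k^2 \delta = 0$, which captures how a positive sound speed squared provides pressure support (oscillation) while a negative one drives exponential instability. The numerical values of $k$, the time span, and the initial amplitude are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def toy_mode(t, y, cs2, k):
    # Schematic mode equation: delta'' + cs2 * k^2 * delta = 0.
    # cs2 > 0 -> oscillation (stabilization); cs2 < 0 -> exponential growth,
    # mimicking the adiabatic case with ca^2 < 0 discussed in the text.
    delta, ddelta = y
    return [ddelta, -cs2 * k**2 * delta]

k = 0.01  # wave number, in illustrative units
for cs2 in (0.1, 0.0, -0.01):
    sol = solve_ivp(toy_mode, (0.0, 2000.0), [1e-3, 0.0], args=(cs2, k))
    print(f"cs2 = {cs2:+.2f}: |delta(final)| = {abs(sol.y[0, -1]):.3e}")
```

Running this toy model shows a bounded oscillating amplitude for $c_s^2 = 0.1$, a frozen mode for $c_s^2 = 0$, and a rapidly growing one for $c_s^2 < 0$, the same qualitative trichotomy exhibited in Fig. 8a.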
The absolute value of the adiabatic speed of sound is also shown, in order to illustrate the ultimate reason for the onset of the exponential growth in the adiabatic mode: it is only when $c_a^2$ begins to depart significantly from 0 that the term proportional to $k^2$ becomes important, which in turn triggers a steep increase of the velocity gradient $\theta_D$. As we have discussed in connection with Fig. 8a, the initial evolution of the perturbations in the ΛXCDM model is nearly the same for any of the three values of the speed of sound. In fact, in the adiabatic case, and given the behavior of the effective EOS in the asymptotic past, the conditions that lead to the simplified setup (53) hold. Therefore, that is the equation initially controlling the evolution of $\delta_D$. For $c_s^2 = 0$ we arrive at exactly the same equation, whereas for positive $c_s^2$ the resulting equation differs only by the last term in (44), which, as previously discussed, happens to be negligible at least during the first stages of the evolution. Notice that for $w_e \simeq$ const., Eq. (53) integrates to $\delta_D(a) = (1 + w_e)\,\delta_M + C$, where $C$ is a constant determined by the initial conditions. In section IV, we pointed out that $C = 0$ corresponds to the adiabatic initial condition (54). However, for the alternate initial condition (104), and taking into account that $w_e(a_i) \simeq w_M = 0$ for the ΛXCDM model in the early matter-dominated epoch [17], we have $C = -\delta_M(a_i)$ and thus $\delta_D(a) = \delta_M(a) - \delta_M(a_i)$. From here we find that the ratio between DE and matter perturbations in the early times of the evolution reads

$$\frac{\delta_D(a)}{\delta_M(a)} = 1 - \frac{\delta_M(a_i)}{\delta_M(a)}\,. \qquad (106)$$

This simple predicted behavior is confirmed by the numerical analysis in Fig. 9, where again the allowed set of parameters has been used. [Figure 9 caption: the simplified regime (53) is approximately realized during the first stages of the evolution for any of the three considered sound speeds; since $w_e \to 0$ for small $a$ in the ΛXCDM model, we expect $\delta_D/\delta_M \simeq 1$ (cf. Eq. (106)) until the conditions leading to (53) no longer hold, as the figure clearly confirms.] We see that the ratio $\delta_D/\delta_M$ starts at 0 and subsequently, as the matter perturbations grow, the last term in (106) diminishes, until the asymptotic value $\delta_D/\delta_M = 1$ is reached. This value is maintained until the conditions leading to (53) cease to be valid. In the adiabatic and $c_s^2 > 0$ cases, this happens when the term proportional to $k^2$ on the r.h.s. of Eq. (45) can no longer be neglected. On the other hand, when $c_s^2 = 0$, the $\delta_D/\delta_M \simeq 1$ regime is abandoned at the point when the effective EOS starts acquiring sizable negative values. Moreover, since the term proportional to $k^2$ is absent, it is now the last term on the r.h.s. of Eq. (44), which was irrelevant for the other two cases, that tends to stabilize the DE perturbations (see also Fig. 8a). From the detailed analysis presented here, we conclude that the approximation of neglecting the DE perturbations can be justified [23]. But this does not mean that the computation of these perturbations is useless. Indeed, the issue at stake is not so much the quantitative impact of the DE fluctuations upon the matter power spectrum, which is actually negligible, as we have seen, but rather the fact that the DE perturbations may be consistently defined only in a certain subregion of the parameter space. Of course, this subregion cannot be detected within the context of the effective approach [23].
Therefore, in general, the computation of the DE perturbations may have a final quantitative bearing on this kind of analysis, since it may further restrict the physical region of the parameter space in a very significant way. In short, even though the simultaneous account of the DE perturbations has a small numerical effect on the matter power spectrum within the domain where the full system of cosmological perturbations is well-defined, it may nevertheless prove to be a highly efficient method for excluding large regions of parameter space where that system is ill-defined. The upshot is that the combined analysis of the DE and matter perturbations may significantly enhance the predictivity of the model, as we have indeed illustrated in detail for the non-trivial case of the ΛXCDM model of the cosmic evolution.

X. DISCUSSION AND CONCLUSIONS

In this paper, we have addressed the impact of the cosmological perturbations on the coincidence problem. In contrast to the previous study [23], where this problem was examined in a simplified "effective approach" in which the dark energy (DE) perturbations were neglected, in the present work we have taken them into account in a full-fledged manner. We find that the results of the previous analysis were reasonable, because the DE perturbations generally tend to be smoothed out at scales below the sound horizon. However, the inclusion of the DE perturbations proved extremely useful for pinning down the physical region of the parameter space, for putting the effective approach within a much larger perspective, and for setting out its limitations. First of all, we have performed a thorough discussion of the coupled set of matter and DE perturbations for a general multicomponent fluid. This has prepared the ground to treat models in which the DE is a composite medium with a variable equation of state (EOS). We have concentrated on those cases in which the DE, despite its composite nature, is described by a self-conserved density $\rho_D$. Notice that if matter is covariantly conserved, the covariant conservation of the DE is mandatory. In particular, this is the situation for the standard ΛCDM model, although in this case the self-conservation of the DE appears through a trivial cosmological constant term, $\rho_\Lambda = \rho_\Lambda^0$, which remains unperturbed throughout the entire history of the Universe. One may nevertheless entertain generalized frameworks where the DE is not only self-conserved, but also non-trivial and dynamical. This is not a mere academic exercise; for instance, in quantum field theory in curved space-time we generally expect that the vacuum energy should be a running quantity [13,15,16]. Therefore, in such cases, the CC density becomes an effective parameter that may evolve typically with the expansion rate, $\rho_\Lambda = \rho_\Lambda(H)$, and constitutes a part of the full (dynamical) DE of the composite cosmological system with variable EOS. In these circumstances, if the gravitational coupling $G$ is constant, the running CC density $\rho_\Lambda = \rho_\Lambda(H)$ cannot be covariantly conserved unless other terms in the effective action of this system compensate for the CC variation. We have called the effective entity that produces such compensation "X" or "cosmon", and denoted its energy density by $\rho_X$. Therefore, $\rho_D = \rho_\Lambda + \rho_X$ is the self-conserved total DE density in this context, which must be dealt with together with the ordinary matter density $\rho_M$. A generic model of this kind is what we have called the ΛXCDM model [17,23].
Furthermore, from general considerations based on the covariance of the effective action of QFT in curved space-time [13,15,16], we expect that the running CC density $\rho_\Lambda = \rho_\Lambda(H)$ should be an affine quadratic law of the expansion rate $H$, see Eq. (67). Using this guiding principle and the ansatz of self-conservation of the DE, we find that the evolution of $\rho_X$, and hence of $\rho_D$, becomes completely determined, even though its ultimate nature remains unknown. In particular, X is not a scalar field in general. The ΛXCDM model was first studied in [17] as a promising solution to the cosmic coincidence problem, in the sense that the coincidence ratio $r = \rho_D/\rho_M$ can stay relatively constant, meaning that it does not vary by more than one order of magnitude over many Hubble times. The main aim of the present paper was to take a further step toward consolidating this possible solution of the coincidence problem, specifically through the analysis of the coupled system of matter and DE perturbations. Let us remark that this has been a rather non-trivial test for the ΛXCDM model. Indeed, after intersecting the region where the DE perturbations of this model can be consistently defined with the region where the coincidence problem can be solved [17,23], we end up with a significantly reduced domain of parameter space where the model can exist in full compatibility with all known cosmological data. The main conclusion of this study is that the predictivity of the model has substantially increased. Therefore, it can be better put to the test by the next generation of precision cosmological observations, which include the promising DES, SNAP and PLANCK projects [63]. Interestingly enough, we have found that the final region of the parameter space is a naturalness region which is more accessible to the aforementioned precision experiments. For example, we have obtained the bound $0 \lesssim \nu \lesssim \nu_0 \sim 10^{-2}$ for the parameter that determines the running of the cosmological term. This bound is perfectly compatible with the physical interpretation of $\nu$ from its definition (68). Moreover, our analysis indicates that the cosmon entity X behaves as "phantom matter" [17], i.e. it satisfies $w_X < -1$ with negative energy density. This result is a clear symptom (indeed an expected one) of its effective nature. It is also a welcome feature; let us recall [17] that "phantom matter", in contrast to "standard" phantom energy, prevents the Universe from reaching the Big Rip singularity. Finally, perhaps the most noticeable (and experimentally accessible) feature that we have uncovered from the analysis of the DE perturbations in the ΛXCDM model is that the overall EOS parameter $w_e$ associated with the total DE density $\rho_D$ behaves effectively as quintessence ($w_e \gtrsim -1$) in precisely the region of parameter space where the cosmic coincidence problem can be solved. In other words, quintessence is mimicked by the ΛXCDM model in that relevant region, despite there being no fundamental quintessence field in the present framework. A detailed confrontation of the various predictions of the ΛXCDM model (in particular, the kind of dependence $w_e = w_e(z)$) with future accurate experimental data [63] may eventually reveal these features and even allow one to distinguish this model from alternative DE proposals based on fundamental quintessence fields.
To summarize: we have demonstrated that the set of cosmological models characterized by a composite, covariantly conserved DE density $\rho_D$, in which the vacuum energy $\rho_\Lambda$ is a dynamical component (specifically, one that evolves quadratically with the expansion rate, see Eq. (67)), proves to be a distinguished class of models that may provide a consistent explanation of why $\rho_D$ is near $\rho_M$, in full compatibility with the theory of cosmological perturbations and the rest of the cosmological data. Remarkably, such a class of models is suggested by the aforementioned renormalization group approach to cosmology. We conclude that the ΛXCDM model can be looked upon as a rather predictive framework that may offer a robust, theoretically motivated, dynamical solution to the cosmic coincidence problem.
A Numerically Robust Sequential Linear Programming Algorithm for Reactive Power Optimization

A robust sequential primal-dual linear programming formulation for reactive power optimization is developed and discussed in this paper. The algorithm has the characteristic that no approximations or complicated control logic are required in the basic Sequential Linear Programming (SLP) formulation, as used by other SLP algorithms reported in the literature. Transmission loss minimization is used as the primary objective. A secondary feasibility improvement objective is used, which results in a better feasible solution than the loss minimization objective alone, especially when the initial base case has overvoltages. A modification of the proposed method that limits the number and movement of controllers for real-time application is also presented. The algorithm has been tested on the Ward and Hale 6-bus system.

Introduction

Proper reactive power dispatch is required for maintaining acceptable bus voltage levels, reducing transmission losses, and increasing the static voltage stability margin. It is essential that the existing reactive power controls, viz. generator excitations, transformer taps, and switchable shunt reactive power compensation, are judiciously used to achieve these objectives. A solution based on successive linear approximation of the power flow equations has been proposed, although the quality of its initial points regarding voltage magnitude is relatively low in the first few iterations [1]. An optimization method using Dynamic Thermal Rating (DTR) and linear programming (LP) has been used to minimize generation costs or transmission losses, based on a spatially resolved thermal model of the transmission system driven by actual weather conditions along the line [2]. Transmission loss minimization with a linear power flow model involving tap changers and phase shifters is one of the common objectives used in LP formulations, and expert systems have been implemented for solving voltage stability problems with tap changers and generation controls [3,4]. The following difficulties are encountered in the LP formulation with this minimization objective: (i) zig-zagging in the convergence characteristic of the sequential LP formulation, and (ii) inability to remove overvoltages with the loss minimization objective. To overcome these difficulties and to avoid zig-zagging of the convergence characteristic, the authors in [3,4] restricted the controller movements by using progressively smaller controller ranges in each power flow-LP optimization cycle. An efficient approach has been reported for solving the optimal reactive power dispatch problem as a nonlinear constrained optimization that finds the control variable settings minimizing transmission active power losses and load bus voltage deviations [5]. Reference [6] presents novel methods to approximate the nonlinear AC optimal power flow (OPF) by tractable linear/quadratic programming (LP/QP) based OPF problems that can be used for power system planning and operation. A linear programming approach has also been developed into a truly general-purpose optimal power flow computation, and LP models that incorporate reactive power and voltage magnitudes in a linear power flow approximation have been presented [7]. In this paper, a numerically robust sequential primal-dual linear programming formulation for reactive power optimization is developed. The algorithm has the following features.
i) The algorithm does not require modified controller limits to control zig-zagging of the solution. Actual controller limits are used without any modification. ii) The solution for the control variables is always within the specified limits and may be implemented directly in the power flow solution without any approximation. iii) The number of power flow-optimization cycles is very small. Usually, an accurate minimum-loss solution is obtained in two to three cycles. iv) Since modified controller ranges are not used, and controllers are allowed to move within their entire specified range, the number of controllers shifted from their initial position is small. When restricted control ranges are used, as reported in other work [3,4], the loss minimization is limited by insufficient control ranges. This results in activating a larger number of controllers as well as more power flow-optimization cycles to achieve minimum transmission losses. v) A secondary voltage feasibility improvement objective allows the algorithm to correct over/under voltages efficiently. The transmission loss minimization objective is inefficient at correcting overvoltages in the standard LP formulation. The algorithm, as implemented in a production-grade program, does not place any restriction on the magnitude of the floating point variables; even the smallest possible pivots and the various floating point ratios computed in the primal-dual sequential linear programming algorithm are considered.

The algorithm

A standard LP formulation solves an optimization problem either as a maximization or as a minimization problem. The minimization problem is a dual of the maximization problem and essentially gives the same optimum results. The LP algorithm for the maximization problem is the primal (simplex) algorithm, and that for the minimization problem is the dual (dual simplex) algorithm. A primal algorithm requires a sub-optimal but feasible tableau; a dual algorithm requires an optimal tableau with infeasibilities [8]. A primal pivot improves optimality while maintaining feasibility, whereas a dual pivot improves feasibility while attempting to maintain optimality. When the initial tableau is neither optimal nor feasible, a straightforward implementation of the primal or the dual algorithm is not possible. Under these conditions a primal-dual algorithm may be used: it evaluates primal and dual pivots in terms of their influence on the objective and accordingly selects either the primal or the dual pivot. The following basic differences between the primal and the dual algorithm are of importance to the transmission loss minimization problem. i) When the initial tableau is upper-bound feasible, a primal algorithm will always provide feasible controller solutions that may be implemented directly in the subsequent power flow solution. Further, lower-bound infeasibilities, such as low voltages, will improve along with the improvement in the objective. Hence, there is no need to use restricted or modified controller ranges as in other reported works. ii) There is no guarantee that a straightforward implementation of the dual algorithm will result in feasible controller solutions. This appears to be the main reason for the approximations on controller limits reported in the earlier literature. In the primal-dual algorithm presented in this paper, a check is introduced to see whether a given pivot will result in an infeasible controller solution. If so, that pivot is discarded and the next possible pivot is considered. This check ensures that the final solution for the controller variables will always be within the specified range. The overall power flow-LP cycle is sketched below.
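As context for the sections that follow, here is a minimal sketch of the sequential power flow-LP cycle just described. The function names (run_power_flow, build_sensitivities, solve_lp_primal_dual) are hypothetical placeholders, not the authors' code; the structure simply mirrors the described cycle of solving an AC power flow, linearizing around its solution, solving the LP within the actual controller limits, and repeating until the losses stop improving.

```python
import numpy as np

# Hypothetical sketch of the sequential LP (SLP) cycle described above.
# run_power_flow, build_sensitivities and solve_lp_primal_dual are
# placeholder callables, not part of any published library.

def sequential_lp(u0, run_power_flow, build_sensitivities,
                  solve_lp_primal_dual, tol=1e-4, max_cycles=5):
    u = np.asarray(u0, dtype=float)   # controls: excitations, taps, shunts
    state = run_power_flow(u)         # full AC power flow at current controls
    for _ in range(max_cycles):
        # Linearize losses and dependent variables around the solved state.
        A, r, c = build_sensitivities(state, u)
        # LP over controller moves; actual (unmodified) limits are used and
        # the primal-dual pivoting returns an implementable solution.
        du = solve_lp_primal_dual(A, r, c)
        new_state = run_power_flow(u + du)
        if state.losses - new_state.losses < tol:
            break                      # losses no longer improve: converged
        u, state = u + du, new_state
    return u, state
```

Because the LP always returns controller moves inside their true limits, each trial point can be handed directly to the power flow, which is what keeps the cycle count at the two to three iterations reported later.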
Problem statement

The transmission loss minimization problem may be stated as in Equations (1) to (5). The power flow equality constraints, defined in Equation (2), must be satisfied at any operating point. The vectors u and x represent the sets of control variables (generator excitations, transformer taps, etc.) and dependent variables (bus voltage magnitudes), respectively. The constraints in Equations (3) and (4) are the permissible limits on the dependent and control variables. The constraints in Equation (5) are security constraints limiting the MVAr loading of generators and the MVA loading of transmission lines in the system. The algorithm presented in this paper minimizes the active power of the slack generator. This is equivalent to transmission loss minimization when the active power generations of the remaining generators are determined from economic dispatch.

Reduced formulation

Linearizing the power flow equations around the power flow solution [9] yields Equations (6) to (8). Equation (8) gives the sensitivity of the dependent bus voltage magnitudes and phase angles as a function of the specified control variables, and Equation (9) gives the sensitivity of the slack generation as a function of the specified variables. Equations (10) and (11) give the sensitivities of the system security monitoring variables, such as generator reactive power and line loading, as functions of the specified control variables.

Simplex tableau formulation

The transmission loss minimization LP problem [10] can be stated as in Equations (12) to (16), where Equations (13) and (14) include the linearized sensitivity relations of Equations (7) and (11). In the actual implementation, the negative of the objective function (12) is maximized and the sign of the inequalities in Equations (13) to (16) is reversed. With these modifications a condensed simplex tableau can readily be formed, as shown in Equation (17), where r is a column vector representing the negative of the right-hand sides of the inequalities (13) to (16) and A is the coefficient matrix of the control variables, representing the negative of their left-hand sides. The generic structure of this LP is sketched below.
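Since the numbered equations themselves are not reproduced in this text, the following is a schematic LaTeX rendering of the LP structure that the description above implies. The symbols ($S_P$, $S_x$, $S_g$, $\Delta u$) are illustrative assumptions, not necessarily the paper's notation:

```latex
% Schematic of the LP in Eqs. (12)-(16): minimize the linearized slack-bus
% power over controller moves \Delta u, subject to linearized limits.
\begin{align*}
  \min_{\Delta u}\ & \Delta P_{\mathrm{slack}} = S_P^{\top}\,\Delta u \\
  \text{s.t.}\quad
    & x_{\min} \le x_0 + S_x\,\Delta u \le x_{\max}
        && \text{(dependent bus voltage limits)}\\
    & u_{\min} \le u_0 + \Delta u \le u_{\max}
        && \text{(actual, unmodified controller limits)}\\
    & g_{\min} \le g_0 + S_g\,\Delta u \le g_{\max}
        && \text{(MVAr/MVA security limits)}
\end{align*}
```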
Sensitivities with respect to dependent bus voltage magnitudes and angles

To obtain the sensitivities of the injection buses with respect to the voltage magnitudes and angles of the dependent buses and the voltage magnitudes of the independent buses (generator excitations), the partial derivatives from the Jacobian formulation of the power flow are used [11]. The terms in Equations (18) to (20) appear in these partial derivatives. The partial derivatives for k ≠ m are given by Equations (21) and (22), and those for k = m by Equations (23) to (26). The sensitivities with respect to the slack bus are ignored.

Sensitivities with respect to shunt reactive power compensation

If Bsh and Vsh are defined as the reactive power compensation susceptance and the voltage at a bus, respectively, the reactive power absorbed by the shunt component, and hence its sensitivity, follows directly from these two quantities.

Sensitivities with respect to transformer tap

Let p and q be the transformer terminal buses with off-nominal turns ratio T:1. Writing the tap in reciprocal form, the sensitivities of the transformer power flows with respect to the transformer tap are given by Equations (29) to (32), where $y_{pq} = G_{pq} + jB_{pq}$ is the series admittance between buses p and q and $\theta_{pq}$ is the phase angle of $y_{pq}$.

Primal-dual algorithm

The pivot selection in the primal-dual algorithm is explained with reference to Equation (17). Let A(p,q) represent the pivot, r(p) the corresponding entry in the vector r, and C(q) the corresponding entry in the objective row. A primal pivot must satisfy the following conditions: i) C(q) is the most negative entry in the objective row; ii) the ratio r(p)/A(p,q) is the smallest positive ratio among all possible pivots in column q. A dual pivot must satisfy the following conditions: i) r(p) is the most violated basic variable; ii) the ratio -C(q)/A(p,q) is the smallest positive ratio among all possible pivots in row p. Once a primal and a dual pivot are found, whichever pivot influences the objective most is chosen as the pivot.

Implementation

In the actual implementation, the following two restrictions are added: i) the pivot should not result in any controller infeasibility; ii) after pivoting, the new tableau must be more feasible than the previous one. This requires simulating the effect of the pivot on the vector r. The first condition is always satisfied by a primal pivot, provided that the initial state has feasible controller positions. A dual pivot does not necessarily satisfy the two restrictions stated above; there is no guarantee that it will result in a feasible controller solution, hence a check is required to ensure this. Although the dual pivot forces the most violated variable to its limit, there is no guarantee that the overall feasibility of the tableau improves. When the two restrictions stated above are implemented, it is guaranteed that the algorithm will result in an implementable solution for the control variables with improved optimality and feasibility. For practical large systems, the final tableau will usually be optimal with some residual infeasibility. A minimal sketch of this pivot selection logic is given below.
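The sketch below is an illustrative reading of the pivot rules and implementation restrictions just described, operating on a condensed tableau (constraint matrix A, right-hand-side vector r, objective row c). It is not the authors' production code: the feasibility callback and the objective-gain estimates are assumptions made for the sake of a concrete, self-contained example.

```python
import numpy as np

def select_pivot(A, r, c, controls_feasible_after):
    """Primal-dual pivot choice on a condensed tableau (sketch).

    A: coefficient matrix; r: column vector with r[p] < 0 marking a
    violation; c: objective row. controls_feasible_after(p, q) is a
    hypothetical callback that simulates the pivot at (p, q) and reports
    whether the controller solution stays within its limits.
    Returns (row, col, kind) or None if no admissible pivot exists.
    """
    candidates = []

    # Primal pivot: most negative objective entry, then the smallest
    # positive ratio r[p]/A[p,q] down that column. Given a feasible start,
    # a primal pivot keeps the controller solution feasible by construction.
    q = int(np.argmin(c))
    if c[q] < 0:
        rows = [p for p in range(len(r)) if A[p, q] > 0 and r[p] > 0]
        if rows:
            p = min(rows, key=lambda p: r[p] / A[p, q])
            gain = -c[q] * r[p] / A[p, q]      # estimated objective improvement
            candidates.append((gain, p, q, "primal"))

    # Dual pivot: most violated variable, then the smallest positive ratio
    # -c[q]/A[p,q] along that row; pivots that would drive a controller out
    # of range are discarded and the next candidate is tried.
    p = int(np.argmin(r))
    if r[p] < 0:
        cols = sorted((j for j in range(len(c))
                       if A[p, j] != 0 and -c[j] / A[p, j] > 0),
                      key=lambda j: -c[j] / A[p, j])
        for j in cols:
            if controls_feasible_after(p, j):
                gain = r[p] * c[j] / A[p, j]   # estimated objective change
                candidates.append((gain, p, j, "dual"))
                break

    if not candidates:
        return None                             # optimal/feasible, or stalled
    gain, p, q, kind = max(candidates)          # pivot influencing objective most
    return p, q, kind
```

The design point worth noting is the callback in the dual branch: it is exactly the feasibility check described above, which is what lets the method use unmodified controller limits where earlier SLP formulations had to shrink them.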
Test case and results

The proposed algorithm is tested on the Ward and Hale 6-bus system, taken from reference [12]. The sensitivity matrix Sx is shown in Table 1; the sensitivity information is obtained from the coupled load flow Jacobian formulation. The first two columns correspond to the generator excitation controls |V1| and |V2|, the next two columns to the shunt reactive power controls at buses 4 and 6, and the last two columns to the transformer tap controls. The first two rows of Table 1 give the generator reactive power sensitivities with respect to the specified controllers, and the last four rows give the sensitivities of the dependent bus voltage magnitudes of buses 3 to 6 with respect to the specified controllers. The convergence characteristic of the algorithm is listed in Table 2, whose last column (Sv) represents the absolute sum of the voltage infeasibilities. Accurate convergence is obtained in two load flow-optimization cycles. Further improvement in the loss reduction was not possible, since two of the bus voltages reached their upper-bound limits. The algorithm does not experience any oscillations with further power flow-optimization cycles.

Conclusion

A numerically robust primal-dual sequential LP algorithm for transmission loss minimization is presented in this paper. The algorithm does not use any approximations on the controller limits or intricate control logic, as suggested by previous algorithms in the literature, and has excellent convergence characteristics toward minimum losses with improved feasibility. Accurate minimum-loss solutions were obtained within two to three LP-load flow cycles. While minimizing losses, overvoltages are seldom introduced; for practical large-scale systems, only marginal overvoltages were present at the point of convergence. The algorithm has the basic characteristic of curtailing a significant amount of controller movement, and modifications to the basic algorithm to reduce the number of controllers moved are easy and straightforward to implement. The algorithm strictly respects the specified ranges of control variable movement, which can be restricted to any desired degree by specifying appropriate controller ranges. Operator control priorities may also be incorporated very easily into the algorithm when arriving at the effective subset of controllers.
Hot Deformation Behavior of Cu–Sn–La Polycrystalline Alloy Prepared by Upcasting

In this study, the hot deformation of a Cu–0.55Sn–0.08La (wt.%) alloy was studied using a Gleeble-3180 testing machine at deformation temperatures of 400–700 °C and various strain rates. The stress–strain curves showed that the hot deformation behavior of the Cu–0.55Sn–0.08La (wt.%) alloy was significantly affected by work hardening, dynamic recovery, and dynamic recrystallization. The activation energy Q was 261.649 kJ·mol⁻¹ and the hot compression constitutive equation was determined as $\dot{\varepsilon} = [\sinh(0.00651\,\sigma)]^{10.2378}\cdot\exp\big(33.6656 - 261.649\times10^{3}/(RT)\big)$. The microstructural evolution of the alloy during deformation at 400 °C revealed the presence of both slip and shear bands in the grains. At 700 °C, dynamically recrystallized grains were observed, but recrystallization was incomplete. In summary, these results provide a theoretical basis for the continuous extrusion process of alloys with promising application prospects.

Introduction

Copper alloys are structural and functional materials with excellent electrical and mechanical properties. These features make them suitable for applications such as frames of large-scale integrated circuits, contact wires of electrified railways [1], mold linings, and conductors for high-pulse magnetic fields and traction motor rotors [2], among others. Many studies have been published on the thermal deformation behavior of copper alloys, including Cu-Fe [3,4], Cu-Ni-Si [5,6], Cu-Ag [7], Cu-Al₂O₃ composites [8], Cu-Cr-Zr [9,10], Cu-Cr-Zr-(Ce, Nd, Y) [11-13], Cu-Mg [14,15], and Cu-Al alloys [16,17]. However, only a few studies have been published on the Cu-Sn alloys used in the contact wires of electrified railways. The continuous extrusion process is advantageous in terms of low energy consumption and high yield, and is therefore widely used in the production of contact wires for electrified railways. However, contact wires based on copper alloys still suffer from limitations such as high deformation temperatures, large deformation resistances, and complex thermal deformation behaviors. As a result, the optimization of continuous extrusion remains extremely complex, and studies of the hot deformation behavior of the Cu-0.55Sn alloy should help optimize the deformation behavior of Cu-Sn alloys in the continuous extrusion process. Furthermore, the addition of small amounts of rare earth elements to copper alloys can purify the matrix and grain boundaries, improve the conductivity, and raise the softening temperature and strength of the alloy [18]; for instance, the performance of Cu-0.55Sn alloys can be improved by adding 0.08% La. In this study, the hot deformation behavior of a Cu-0.55Sn-0.08La alloy was studied in an effort to provide a theoretical basis for optimizing the continuous extrusion process. The results indicated that the hot deformation behavior of the Cu-0.55Sn-0.08La (wt.%) alloy was significantly affected by work hardening, dynamic recovery, and dynamic recrystallization. The activation energy Q and the constitutive equation of hot deformation were determined by examining the relations among the hot compression flow stress and the strain, strain rate, and deformation temperature of the alloy.

Materials and Methods

First, electrolytic copper (purity 99.99%), Sn (purity 99.95%), and pure block La (purity 99.5%) were melted together in a power frequency induction furnace. The molten copper was then continuously cast into a Cu-Sn-La alloy rod billet (diameter 20 mm) using an up-casting machine. The mass fractions of the alloy elements were determined as 0.55% Sn, 0.08% La, and Cu balance. Next, the continuously cast rod (diameter 20 mm) was cut by lathe and Wire Electrical Discharge Machining (WEDM, Taizhou, China) into samples of size Φ8 mm × 12 mm. The properties of the Cu-Sn-La alloy are specified in Table 1. The isothermal compression tests were conducted using a Gleeble-3180 simulator at deformation temperatures ranging from 400 to 700 °C (400, 500, 600, and 700 °C) and strain rates from 0.01 to 10 s⁻¹ (0.01, 0.05, 0.1, 1, and 10 s⁻¹). Under each condition, the compression test was carried out once. Before isothermal compression, all specimens were heated to the deformation temperature at a heating rate of 5 K·s⁻¹ and held at that temperature for 180 s. Each specimen was then compressed to 40% of its original height. Before testing, the two ends of each specimen were lubricated to prevent uneven deformation during hot compression. After compression testing, the specimens were immediately quenched in water to preserve the deformed microstructure. The undeformed and deformed specimens were then sectioned parallel to the compression axis (Figure 1), mechanically polished, and etched in a solution containing FeCl₃ (3 g), HCl (2 mL), and C₂H₆O (96 mL). The microstructures were examined by optical microscopy (OM, LEICA DM2500M, Wetzlar, Germany) and scanning electron microscopy (SEM, EDAX-TSL, Burgen, KS, USA).

Stress-Strain Behaviors

The stress-strain behaviors of the Cu-0.55Sn-0.08La alloy at various strain rates and deformation temperatures are displayed in Figure 2. The mechanical energy of the specimen is converted into heat during compression, so the temperature rise of the sample is large at a high strain rate; it is therefore necessary to correct the experimental stress-strain data at the strain rate of 10 s⁻¹ for this temperature rise [19], and Figure 2e shows the corrected curve.
At fixed deformation temperature, both the flow stress and the peak stress increased with strain rate, indicating the positive strain rate sensitivity of the alloy; at fixed strain rate, both declined with temperature, indicating the heat-sensitive nature of the alloy [20]. The shape of the flow curves depended on the initial grain size and the steady-state DRX (dynamic recrystallization) grain size [21]: Figure 3 shows the initial grains of an uncompressed specimen, while Figure 7 shows the partially recrystallized grains of the compressed alloy. The initial grain size of the test sample (1.40 mm) was significantly larger than the recrystallized grain size after dynamic compression (0.06 mm). As a result, no peak or only one peak appeared in the stress-strain curves. Moreover, the shape of each flow curve depended strongly on the solute concentration [22]. The flow stress of the alloy was always higher than that of pure copper at the same temperature and strain rate. The mass fractions of Sn and La in the specimens were 0.55 and 0.08%, respectively; both the Cu-La intermetallic compounds and the Sn solute atoms made dislocation movement in the copper matrix more difficult, thereby increasing the flow stress of the alloy.

At strain rates of 0.01-1 s⁻¹ and 400 °C, the flow stress first increased rapidly with strain and then tended to increase slowly, showing typical work hardening features. At the strain rate of 10 s⁻¹, the flow stress increased faster than at the lower strain rates due to the pronounced work hardening effect, and the final stage of the curve still displayed an upward trend, indicating the dominance of work hardening. As the strain rate increased from 0.01 to 1 s⁻¹, the peak flow stress increased slowly; for strain rates exceeding the critical value (1 s⁻¹), the peak flow stress increased significantly. When the strain rate increased rapidly, the plastic deformation occurred in a short time, the deformed grains could not recover or recrystallize in time, the work hardening effect was significant, the dislocation density in the alloy increased, and the peak flow stress rose markedly. At 500-600 °C, the flow stress first increased rapidly with strain and then tended to stabilize, without obvious flow stress peaks or a sharp softening trend. With the increase of strain rate from 0.01 to 0.05 s⁻¹ at 500 °C, the peak flow stress increased rapidly; it then remained nearly stable from 0.1 to 1 s⁻¹ and increased significantly once the strain rate exceeded 1 s⁻¹. At 600 °C, the peak flow stress increased as the strain rate rose from 0.01 to 1 s⁻¹ and hardly increased once the strain rate reached 1 s⁻¹. At 700 °C, the flow stress increased rapidly with strain and then tended to stabilize, indicating the important role of dynamic softening; as the strain rate increased, the peak flow stress increased gradually. At 0.01-0.05 s⁻¹ the flow stress stabilized with increasing strain, while from 0.1 to 10 s⁻¹ it tended to increase slightly with further strain.

Overall, in the range of 400-700 °C and 0.01-10 s⁻¹, strain hardening and dynamic recovery occurred simultaneously at lower temperatures or higher strain rates during the compression experiments. With dislocation proliferation, accumulation, recombination, and annihilation, the dislocation distribution was first uneven and then gradually evolved into independent cellular structures in the different dislocation-tangled areas [21]. This led to the formation of dislocation cells and a reduction in dislocation density; consequently, the stress-strain curve increased slowly at 400 °C (Figure 2a,b). At high deformation temperatures or low strain rates, the deformation process was accompanied by the formation and growth of recrystallized nuclei, and the softening rate of the alloy was greater than or equal to the work hardening rate [23]; thus, the stress-strain curve tended to stabilize (Figure 2a,b at 700 °C). When the work hardening rate was equivalent to the dynamic recovery and dynamic recrystallization rate, the stress-strain curve was stable (Figure 2a,b at 600 °C). These different regimes can be attributed to the competition between dynamic hardening and dynamic softening. On the other hand, since the alloy has a low stacking fault energy, its extended dislocations are very wide, and dislocations could hardly extricate themselves from the dislocation network or annihilate each other through cross slip and climb. At the beginning of deformation, the recovery of the substructure was therefore very slow, which led to a very high dislocation density in the substructure, very small subgrains, and many dislocation tangles in the cell walls.

Microstructure

The macrostructure of an uncompressed specimen is presented in Figure 3. The morphology was generated by the different orientations of the as-cast grains, whose single-phase microstructure was etched into different colors. At the edge of the ingot, oblique columnar crystals were formed, attributed to the horizontal cooling direction and the upward vertical movement. The cooling rate at the ingot center decreased and a few grains with smaller sizes appeared. The average length of the grains on the right side was estimated to be about 3 mm and their width around 0.5 mm; the average length of the grains on the left side was about 10 mm and their width around 1 mm. Furthermore, the grains on the left and right sides showed obvious boundaries, caused by the different cooling rates on the two sides of the ingot.

The microstructures of the deformed Cu-Sn-La alloy at 400 °C under different strain rates are illustrated in Figure 4. Due to the large initial grain size, macroscopically coordinated deformation is difficult, and the deformation of each grain appeared extremely uneven. Many slip bands and adiabatic shear bands were present in some grains [24], terminating at the grain boundaries [25,26].
Under compression deformation, the grains rotated to become gradually perpendicular to the compression direction. Compared to Figure 4a, the slip bands in the grains became denser in Figure 4b as the strain rate increased.

The microstructures of the alloy deformed at 500 °C and different strain rates are presented in Figure 5. At the low strain rate ($\dot{\varepsilon}$ = 0.01 s⁻¹), shear bands still existed and dynamically recrystallized grains appeared within them (Figure 5a). Dynamic recrystallization thus occurred locally, with large numbers of fine dynamically recrystallized grains forming at the grain boundaries, leading to the formation of numerous "necklace structures". At the higher strain rate ($\dot{\varepsilon}$ = 10 s⁻¹), numerous fine recrystallized grains and annealing twins appeared (Figure 5b). The wave-like grain boundaries in Figure 5b are typically observed under DRX conditions; notably, annealing twins evolved in the dynamically recrystallized grains, although their density was lower than in statically annealed grains [22].

The microstructures of the alloy deformed at 600 °C and different strain rates are provided in Figure 6. At both low and high strain rates ($\dot{\varepsilon}$ = 0.01 s⁻¹ and $\dot{\varepsilon}$ = 10 s⁻¹), dynamically recrystallized grains were observed and became more evident as the strain rate increased (Figure 6b). Nucleation occurred preferentially at the grain boundaries (Figure 6), and the dynamically recrystallized grains gradually expanded and grew by devouring the surrounding deformed matrix. This is because the grain boundary, a large-angle interface with high-density defects and superior deformation energy, possesses the basic conditions for recrystallization nucleation; recrystallization therefore nucleated and grew there preferentially, forming fine and equiaxed recrystallized structures.

The microstructures of the alloy deformed at 700 °C at different strain rates are displayed in Figure 7. Fine recrystallization was observed in the center of Figure 7b, A and B being coarse original grains, and the boundary between the recrystallized grains and the original grains appeared clear. During dynamic recrystallization, the La-rich phase prevented the grain boundaries from migrating, thereby reducing the dynamically recrystallized grain size. On the other hand, the recrystallized grain sizes at high strain rates were larger at the same temperature, since higher deformation temperatures led to higher thermal activation energy; furthermore, a more complete thermal activation process left less stored energy after deformation, thus delaying recrystallization and producing smaller recrystallized grain sizes at low strain rates [27].

Figure 8a shows the grain boundary map of the alloy deformed at 700 °C and 10 s⁻¹, where the green frames mark the recrystallized structures and the blue frames the substructures and deformed structures.
Figure 8b shows the misorientation distribution of the alloy deformed at 700 °C and 10 s⁻¹: the fraction of misorientations below 3° was 85%, the fraction between 3° and 15° was 4%, and the fraction over 15° was 10%. In the microstructure, grains with misorientations of less than 3° are considered deformed structure; grains in the misorientation range between 3° and 15° are regarded as substructure; and grains with misorientations exceeding 15° are considered recrystallized structure [28]. This result explains the shape change of the stress-strain curves (Figure 2).
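The classification rule above is easy to state programmatically. The following small sketch is not from the paper; it simply bins a set of misorientation angles (in degrees) into the three categories used in the text and reports their fractions, with synthetic angles chosen only to mimic the reported 85/4/10 split.

```python
import numpy as np

def classify_misorientation(angles_deg):
    """Bin misorientation angles (degrees) using the thresholds in the text:
    < 3: deformed structure; 3-15: substructure; > 15: recrystallized."""
    angles = np.asarray(angles_deg, dtype=float)
    return {
        "deformed": float(np.mean(angles < 3.0)),
        "substructure": float(np.mean((angles >= 3.0) & (angles <= 15.0))),
        "recrystallized": float(np.mean(angles > 15.0)),
    }

# Synthetic illustration mimicking the 700 C, 10 s^-1 specimen (85%, 4%, 10%).
rng = np.random.default_rng(0)
angles = np.concatenate([rng.uniform(0, 3, 850),
                         rng.uniform(3, 15, 40),
                         rng.uniform(15, 62.8, 100)])
print(classify_misorientation(angles))
```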
Constitutive Equations

Under hot processing conditions, constitutive equations are often used to calculate the forces during processing at given set rates. The modeling of the processing stage must consider the uneven distribution of strain, strain rate, and temperature, as well as their variations with time. The model may require several constitutive functions depending on the complexity of the flow curves [29]. The Arrhenius equation can be used to describe the constitutive relation of flow stress behavior during hot deformation. This type of constitutive relation is justified when strain hardening can be ignored. At 400 °C, the strain hardening effect is significant, in particular at the high strain rate (10 s⁻¹). At 500–600 °C, the strain hardening effect is slight. Therefore, the calculation of the constitutive equation was mainly carried out at 500, 600, and 700 °C. Through the Zener–Hollomon parameter Z, that is, the temperature-compensated strain rate, the relationship between the temperature and the strain rate corresponding to plastic deformation can be analyzed. According to Equation (1) [30], the respective constitutive equation can be expressed as follows:

Z = ε̇·exp(Q/RT) (1)

In Equations (1) and (2), the parameters A, α, and n are constants independent of temperature; σ represents the true stress in MPa; ε̇ represents the strain rate in s⁻¹; Q is the apparent activation energy of deformation in J·mol⁻¹; R is the gas constant, 8.314 J·mol⁻¹·K⁻¹; and T stands for the thermodynamic temperature in K. Equation (2) [31] can adequately express the influence of temperature and strain rate on flow stress, as follows:

ε̇ = A·[sinh(ασ)]ⁿ·exp(−Q/RT) (2)

Notably, according to the stress level, Equation (2) can be transformed into three different forms: when ασ < 0.8, it reduces to the power function, Equation (3); when ασ > 1.2, it reduces to the exponential function, Equation (4) [32]; and the hyperbolic sine form, Equation (2), is suitable for any stress:

ε̇ = A₁·σ^(n₁) (3)
ε̇ = A₂·exp(βσ) (4)

where A₁ and A₂ are material constants, and n₁ and β are related to the strain rate sensitivity index. Taking logarithms, Equations (2)–(4) can be written as:

ln ε̇ = ln A₁ + n₁·ln σ (5)
ln ε̇ = ln A₂ + β·σ (6)
ln ε̇ = ln A + n·ln[sinh(ασ)] − Q/(RT) (7)

The parameter n₁ was calculated by plotting ln ε̇ versus ln σ at the lower peak stresses.
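The fitting procedure described above and continued below (slopes of ln ε̇ versus ln σ, versus σ, and versus ln[sinh(ασ)], then Q from the temperature dependence) reduces to a handful of linear regressions. The following Python sketch illustrates the pipeline; the peak-stress table is hypothetical placeholder data, not the measured values of Figure 2, so the fitted numbers will not reproduce the paper's constants (n₁ = 13.7935, etc.).

```python
import numpy as np

R = 8.314  # J/(mol K)

# Hypothetical peak-stress table: rows = temperatures, cols = strain rates.
rates = np.array([0.01, 0.1, 1.0, 10.0])          # s^-1
temps = np.array([773.0, 873.0, 973.0])           # 500, 600, 700 C in K
sigma = np.array([[120., 150., 185., 225.],       # MPa (illustrative only)
                  [ 80., 100., 125., 155.],
                  [ 50.,  65.,  82., 105.]])

ln_rate = np.log(rates)

# Eq (5): n1 = mean slope of ln(rate) vs ln(sigma) at each temperature.
n1 = np.mean([np.polyfit(np.log(s), ln_rate, 1)[0] for s in sigma])
# Eq (6): beta = mean slope of ln(rate) vs sigma.
beta = np.mean([np.polyfit(s, ln_rate, 1)[0] for s in sigma])
alpha = beta / n1
# Eq (7) at fixed T: n = mean slope of ln(rate) vs ln(sinh(alpha*sigma)).
n = np.mean([np.polyfit(np.log(np.sinh(alpha * s)), ln_rate, 1)[0]
             for s in sigma])
# Eq (8): K = mean slope of ln(sinh(alpha*sigma)) vs 1000/T at fixed rate,
# so that Q = R * n * K comes out in kJ/mol.
K = np.mean([np.polyfit(1000.0 / temps,
                        np.log(np.sinh(alpha * sigma[:, j])), 1)[0]
             for j in range(len(rates))])
Q = R * n * K  # kJ/mol
print(f"n1={n1:.3f} beta={beta:.4f} alpha={alpha:.5f} n={n:.3f} Q={Q:.1f} kJ/mol")
```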
β was determined as the average of the three slopes of the linear fits of ln ε̇ versus σ at the higher peak stresses, and α was taken equal to β divided by n₁. The slope of the linear fit between ln ε̇ and ln[sinh(ασ)] gives the constant n. Accordingly, the value of n₁ was estimated to be 13.7935, β was 0.0898, α was 0.00651, and n was 10.2378.

Equation (8) gives the apparent activation energy Q of plastic deformation:

Q = R·n·K (8)

where K is the slope of the linear fit of ln[sinh(ασ)] versus 1000/T in Figure 9d (so that, with R in J·mol⁻¹·K⁻¹, Q is obtained directly in kJ·mol⁻¹). Accordingly, K was calculated as 3.073796 and Q was 261.649 kJ·mol⁻¹. However, the apparent activation energy of the thermal deformation of pure copper with different impurity contents is around 208–245 kJ·mol⁻¹ [22,33], and higher impurity contents should yield greater activation energies of thermal deformation. The activation energy of the thermal deformation of the Cu-0.55Sn-0.08La alloy was estimated to be 261.649 kJ·mol⁻¹, indicating that the addition of Sn and La to the copper matrix increased the flow stress of the alloy. This result may be attributed to the interaction of the solute Sn atoms with dislocations and grain boundaries, which hindered dislocation sliding and climbing and grain boundary migration. These features were unfavorable to the nucleation and growth of recrystallization, thereby limiting the recrystallization process. On the other hand, rare earth La and impurity atoms in the liquid copper formed high melting point compounds, which dispersed on the grain boundaries. During compression deformation, this dispersed phase was pinned at the grain boundaries of the copper alloy, thus hindering grain boundary migration. Consequently, the activation energy of the Cu-0.55Sn-0.08La alloy was found to be higher than that of pure copper.

In order to calculate n and lnA, ln[sinh(ασ)] was fitted linearly as a function of lnZ, and the results are provided in Figure 10. The value of n was estimated to be 9.76008 and lnA was 33.66562.

Based on the above analyses, the constitutive equation of Cu-0.55Sn-0.08La at high temperatures was determined as ε̇ = A·[sinh(ασ)]ⁿ·exp(−Q/RT). As a result, Equation (9) can be deduced (with Q expressed in kJ·mol⁻¹) as:

ε̇ = [sinh(0.00651σ)]^10.2378 · exp(33.6656 − 261.649/RT) (9)

Conclusions

Hot deformation of the Cu-0.55Sn-0.08La (wt.%) alloy was studied using a Gleeble-3180 testing machine at deformation temperatures of 400–700 °C and various strain rates. The following conclusions were drawn:

1. The flow stress of the Cu-0.55Sn-0.08La alloy decreased with increasing deformation temperature and increased with strain rate.
At low temperature (400 °C) or high strain rates (1 and 10 s⁻¹), the stress–strain curve increased with deformation. At high deformation temperature (700 °C) or low strain rates (0.01 and 0.05 s⁻¹), the deformation process was accompanied by the formation and growth of recrystallized nuclei; the softening rate of the alloy then balanced the deformation hardening rate, and the stress–strain curve tended to stabilize.

2. At 500–700 °C and 0.01–10 s⁻¹, the relationship between the peak flow stress of the Cu-0.55Sn-0.08La alloy and the strain rate was determined as ε̇ = A·[sinh(ασ)]ⁿ·exp(−Q/RT), with a thermal activation energy Q of 261.649 kJ·mol⁻¹. Thus, the constitutive equation can be expressed as:

ε̇ = [sinh(0.00651σ)]^10.2378 · exp(33.6656 − 261.649/RT)

3. The microstructure of the Cu-0.55Sn-0.08La alloy showed slip bands and shear bands in the grains at a deformation temperature of 400 °C. Recrystallized grains were noticed near the shear bands at the grain boundaries as the deformation temperature increased. At 700 °C, dynamic recrystallization appeared to be relatively complete, with growth of the recrystallized grains.
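Once the constants are fixed, the constitutive law can be inverted to predict the peak flow stress at any (T, ε̇): from Z = ε̇·exp(Q/RT) and Z = A·[sinh(ασ)]ⁿ, one gets σ = arcsinh[(Z/A)^(1/n)]/α. A minimal sketch using the fitted constants, assuming Q is converted to J·mol⁻¹ so that R = 8.314 J·mol⁻¹·K⁻¹ can be used consistently:

```python
import numpy as np

# Constants from the fitted constitutive equation (Q in J/mol here).
R, Q = 8.314, 261_649.0
alpha, n, lnA = 0.00651, 10.2378, 33.6656

def peak_stress(strain_rate, T_kelvin):
    """Predicted peak flow stress (MPa) from the sinh Arrhenius law:
    Z = rate * exp(Q / (R T)),  sigma = arcsinh((Z/A)**(1/n)) / alpha."""
    Z = strain_rate * np.exp(Q / (R * T_kelvin))
    return np.arcsinh((Z / np.exp(lnA)) ** (1.0 / n)) / alpha

for T_c in (500, 600, 700):
    print(T_c, "C:", [round(float(peak_stress(r, T_c + 273.15)), 1)
                      for r in (0.01, 0.1, 1.0, 10.0)], "MPa")
```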
ISL1 Protein Transduction Promotes Cardiomyocyte Differentiation from Human Embryonic Stem Cells

Background. Human embryonic stem cells (hESCs) have the potential to provide an unlimited source of cardiomyocytes, which are invaluable resources for drug or toxicology screening, medical research, and cell therapy. Currently, a number of obstacles exist, such as the insufficient efficiency of differentiation protocols, which should be overcome before hESC-derived cardiomyocytes can be used for clinical applications. Although the differentiation efficiency can be improved by the genetic manipulation of hESCs to over-express cardiac-specific transcription factors, cells differentiated in this way are not safe enough to be applied in cell therapy. Protein transduction has been demonstrated as an alternative approach for increasing the efficiency of hESC differentiation toward cardiomyocytes.

Methods. We present an efficient protocol for the differentiation of hESCs in suspension by direct introduction of a recombinant LIM homeodomain transcription factor, Islet1 (ISL1), into the cells.

Results. We found that the highest number of beating clusters was derived by continuous treatment of hESCs with 40 µg/ml recombinant ISL1 protein during days 1–8 after the initiation of differentiation. The treatment resulted in up to a 3-fold increase in the number of beating areas. In addition, the number of cells that expressed cardiac-specific markers (cTnT, CONNEXIN 43, ACTININ, and GATA4) doubled. This protocol was also reproducible for another hESC line.

Conclusions. This study has presented a new, efficient, and reproducible procedure for cardiomyocyte differentiation. Our results will pave the way for the scaled up and controlled differentiation of hESCs for biomedical applications in a bioreactor culture system.

Introduction

Cardiomyocytes derived from human embryonic stem cells (hESCs) potentially offer large numbers of cells for biomedical and industrial applications. Current protocols for the differentiation of cardiomyocytes from hESCs are time consuming, have low yield, and lack reproducibility (for review see ref. [1]). However, for the applicability of these cells in biomedicine it is necessary to produce sufficient numbers of functional cardiomyocytes or their progenitors. This requires the development of large-scale expansion of hESCs and controlled differentiation protocols. In recent years, technologies for the suspension expansion of hESCs and the application of bioreactors have been introduced [2,3,4,5]. For example, we recently expanded hESCs as carrier-free suspension aggregates for an extended period of time [6]. On the other hand, the differentiation of cardiomyocytes from hESCs has progressed rapidly through growth factor-mediated approaches. Although the efficiency of differentiation protocols has increased over time, a desirable efficiency has not been attained by these methods. It has been shown that the forced expression of instructive transcription factors such as Tbx5 and Nkx2.5 successfully increased the differentiation efficiency toward cardiomyocytes [7,8]. There is strong evidence that cardiomyocyte specification and differentiation are controlled by transcription factors such as the LIM-homeodomain transcription factor Islet 1 (ISL1). ISL1 is a marker of the myocardial lineage during mammalian cardiogenesis and marks a common population of progenitors in the heart that can differentiate into cardiomyocytes, smooth muscle, and endothelial cells [9,10].
It has been demonstrated that approximately 97% of cells within the outflow tract, 92% of cells within the right ventricle, 65% of cells within the left atrium, 70% of cells within the right atrium, and approximately 20% of cells within the left ventricle of a normal heart are ISL1-positive. Thus, two-thirds of the cells within the entire heart originate from ISL1-positive progenitor cells [11]. It has also been shown that ISL1 is required for the survival, proliferation, and migration of progenitor cells into the cardiac tube [12]. Cells differentiated from Isl1 knockdown ESCs have shown severely reduced beating frequencies and compromised expression of cardiac sarcomeric genes (Myh6, Myh7, Mlc2a, and Mlc2v). On the other hand, over-expression of Isl1 during the spontaneous differentiation of mouse ESCs into EBs resulted in a higher expression level of cardiac muscle genes compared with the control. A 2-fold over-expression of Isl1 led to a 25% increase in the number of cardiac cells [13], and the expression level of Nkx2.5 (a cardiovascular progenitor marker) increased after over-expression of Isl1 in hESCs [14]. These and other data have proven that ISL1 acts at the top of a cascade of cardiac transcription factors in the myocardial lineage [12]. Although these reports represent a critical step forward in determining the potential of ISL1 in cardiac differentiation, the genetic alteration of cells continues to raise safety concerns due to transgene reactivation and insertional mutagenesis [15]. Ultimately, the derivation of cardiomyocytes without viral integration is essential for the generation of safe cells for therapeutic applications. Protein transduction has been shown to be an alternative approach for the over-expression of a desired gene product in the absence of genetic manipulation [16]. However, because of the structure of the eukaryotic cell membrane, the directed intracellular delivery of proteins is inefficient. A significant exception to this rule is the application of protein transduction domains (PTDs), also known as cell-penetrating peptides (CPPs), which are capable of transporting cargo across the membrane and delivering biologically active proteins inside the cell. The initial discovery of CPPs originated from the observation that the HIV TAT transactivator can translocate across the plasma membrane through its 11 basic amino acids (residues 47-57), the TAT PTD. It has been shown that TAT has a higher efficiency for protein delivery into cells when compared with other PTD signals [16,17,18]. The positive charges allow the protein to interact with lipid rafts in the negatively charged membrane and to overcome the cell membrane barrier by different mechanisms, including macropinocytosis [19,20,21]. Recent studies have demonstrated that the transduction of transcription factors can stimulate over-expression of their target genes and initiate the specific pathway needed for differentiation toward a particular cell fate. By transduction of TAT-PDX-1 protein into hESCs, insulin protein production was induced [22]. In another experiment, Stock et al. succeeded in doubling the efficiency of oligodendroglial differentiation of mouse ESC-derived neural stem cells by NKX2.2 protein transduction [23]. In this study, we successfully applied a TAT-based protein transduction system to deliver the TAT-ISL1 protein into hESCs with the intent to improve the cardiomyocyte differentiation rate under a suspension culture condition.
We have demonstrated that the application of TAT-ISL1 increased the differentiation of cardiomyocytes (2- to 3-fold) without genetic modification.

Cloning of ISL1 cDNA

Total RNA from hESC-derived cardiac precursor cells was extracted using TRIzol reagent (Invitrogen, CA) and treated with RNase-free DNase (Invitrogen, Carlsbad, CA, USA). Reverse transcription was performed under the conditions recommended by the manufacturer using SuperScript III reverse transcriptase (Invitrogen, Carlsbad, CA, USA) and an Oligo dT primer. Next, the Isl1 fragment was amplified by PCR using Pfx DNA polymerase (Invitrogen, Carlsbad, CA, USA) and cloned into the pENTR-D/TOPO Gateway entry vector according to the supplier's instructions (Invitrogen, Carlsbad, CA, USA). The PCR forward and reverse primers were 5'-CACC TGC GGA CCG GGC AGG GG-3' and 5'-TTA GCC TCC CGA TTT GGC-3', respectively. The forward primer included the 4-base-pair sequence (CACC) necessary for directional cloning on its 5' end.

Construction of the pDest17/ISL1 expression vector

cDNA from the pENTR-D/TOPO ISL1 entry clone was transferred into the pDest17 Gateway expression vector using an LR clonase recombination according to the manufacturer's instructions (Invitrogen, Carlsbad, CA, USA). The expression vector was transformed into E. coli strain BL21 (DE3; Novagen, Madison, WI, USA) by the heat shock method according to the supplier's manual (User Protocol TB009 Rev. F 0104). The sequence of Isl1 was verified by DNA sequencing.

Recombinant fusion protein expression and purification

For recombinant fusion protein expression, the selected clones were grown until the OD600 reached 0.8. Recombinant fusion protein expression was then induced by the addition of isopropyl β-D-thiogalactopyranoside (IPTG). The expressed His6-TAT-ISL1 fusion proteins (rISL1) were purified by immobilized metal affinity chromatography (IMAC), eluted with 8 M urea (pH 3.5), then desalted by Tris (5 mM) containing 50% glycerol and maintained at −20 °C until use. Identical volumes of the elution fractions were mixed with 1/5 volume of 5x loading buffer [1 M Tris-HCl (pH 6.8), 10% w/v SDS, 0.05% w/v bromophenol blue, 50% glycerol, and 200 mM β-mercaptoethanol], heated at 95 °C for 5 min, and then analyzed by SDS-PAGE on a 12% (w/v) separating gel. This was followed by staining with 0.1% Coomassie brilliant blue (CBB) R-250. CBB-stained protein bands of interest were excised from the SDS-PAGE gel, and the samples were analyzed by matrix-assisted laser desorption/ionization tandem time-of-flight mass spectrometry (MALDI-TOF/TOF MS).

Gel shift assay

To investigate the interaction of rISL1 with DNA, we applied a gel shift assay with some modifications. Genomic DNA was extracted from the hESCs. A total of 50 µg of extracted DNA was digested by the BamHI enzyme for 2 h at 37 °C. We incubated 5 µg of native purified rISL1 with 2.5, 5, and 10 µg of digested DNA for 1 h at 37 °C. Formaldehyde was added to the reaction and allowed to incubate for 5 min in order to cross-link the reacted DNA and proteins. To terminate the cross-linking reaction, 1 M glycine was used. In order to check for any nonspecific interactions between the DNA and protein, some of the bacterial proteins were allowed to remain in the final elution. Samples were then analyzed by SDS-PAGE.

Suspension cell culture and differentiation protocol

The Royan H5 and Royan H6 hESC lines [24] were used in this study. Suspension culture of hESCs was performed according to a recently published protocol [6].
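As a small illustration of the primer bookkeeping behind the directional TOPO cloning described above, the snippet below checks the CACC tag and computes GC content and a rough Wallace-rule melting temperature for the two primers quoted in the text (written without spaces). The Wallace rule is only a screening heuristic, an assumption of this sketch, not necessarily the method the authors used.

```python
# Sanity checks on the cloning primers quoted above.
FWD = "CACCTGCGGACCGGGCAGGGG"   # includes the CACC directional-cloning tag
REV = "TTAGCCTCCCGATTTGGC"

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    # Wallace rule: Tm ~ 2*(A+T) + 4*(G+C); rough heuristic only.
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

assert FWD.startswith("CACC"), "forward primer must carry the TOPO tag"
for name, seq in (("forward", FWD), ("reverse", REV)):
    print(f"{name}: len={len(seq)} GC={gc_content(seq):.0%} Tm~{wallace_tm(seq)} C")
```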
Briefly, cells were treated with 10 µM of the ROCK inhibitor Y-27632 (Sigma-Aldrich, Y0503) 1 h prior to dissociation from Matrigel. Cells were then washed with Ca²⁺- and Mg²⁺-free phosphate buffered saline (PBS; Gibco, 21600-051) and incubated with 0.05% trypsin at 37 °C for 4-5 min. Dissociated cells were transferred into non-adhesive bacterial plates (60 mm; Greiner, 628102) at 15×10⁴ viable cells/ml in hESC medium that had been conditioned on mouse embryonic fibroblasts (MEFs) [25] and contained 10 µM ROCK inhibitor. After 2 days, half of the medium was replaced by hESC medium conditioned on MEFs. The medium was changed every other day. Differentiation of the cells into cardiomyocytes in suspension was performed according to the Laflamme et al. protocol [26] with some modifications. Briefly, 6-day-old spheres were treated with 100 ng/ml Activin A for 1 day in RPMI medium (Gibco, 51800-035) supplemented with 2% B27 without vitamin A, followed by 4 days of 10 ng/ml BMP4. At day 5, the spheres were plated on gelatin-coated plates in RPMI/B27 medium without cytokines. Beating clusters were observed 5 days post-plating. In the rISL1-treated group, the recombinant protein was added from days 1-8 after the initiation of differentiation induction. All experiments with hESCs were performed under the supervision of the Institutional Review Board and Institutional Ethical Committee of Royan Institute.

Stability and penetration of the rISL1 protein

To analyze the stability and penetration of the recombinant protein, 40 µg/ml rISL1 or elution buffer (as control) was added to the aggregated hESC differentiation media one day after differentiation initiation. Cells and culture media were collected after 2, 6, 12, 24, 36, and 48 h. The quantity of rISL1 in the cell extracts and media was analyzed by Western blot and qRT-PCR as described below. Penetration was further confirmed by immunostaining analysis of adherent and aggregated hESC colonies treated with 40 µg/ml rISL1 protein or elution buffer (as control) for 2 h. Cells were washed 3 times with PBS/tween to ensure the removal of all rISL1 proteins loosely bound to the cell surfaces. The penetration of rISL1 was then investigated using anti-ISL1 and anti-TAT antibodies.

ISL1-GFP reporter assay

A 4-kb fragment containing the Isl1 promoter was isolated from human genomic DNA extracted from Royan H5 using the Expand Long Template PCR System (Roche, 10201179). The PCR forward primer was 5'-CATGCAAGATCTAATCGTCTGTTCCTGGTAC-3' and the reverse was 5'-CTCGATCTTAAGGGGCTGTTCTGGCTCTGG-3'. The isolated fragment was then cloned into the pIRES2-EGFP vector. For transfection of hESCs, cells were plated at 200,000 cells per 6-cm dish and transfected 24 h later: 3 µg of plasmid DNA was mixed with 4 µl of X-tremeGENE 9 DNA Transfection Reagent (Roche, 06 365 787 001) and 300 µl of DMEM/F12 for 15-30 minutes and then applied to the cells in a total volume of 1 ml of hESC culture medium. After 48 h, the medium was replaced by fresh medium containing 100 µg/ml G418 (active concentration). After 1-2 weeks, transfected colonies were picked up and cultured. In order to examine the ability of rISL1 to induce its own gene expression, we treated undifferentiated aggregated ISL1-GFP cells with 40 µg/ml rISL1 or elution buffer (as a control); cells were examined after 5 days for GFP expression by flow cytometry.
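The differentiation timeline above involves two overlapping counters (days after differentiation initiation and days post-plating). Purely as bookkeeping, a sketch like the following can encode the schedule described in the text; the representation is illustrative and not part of the original protocol.

```python
# Compact encoding of the differentiation timeline described above.
# Day numbers follow "days after differentiation initiation".
SCHEDULE = [
    # (first_day, last_day, treatment)
    (1, 1, "Activin A 100 ng/ml in RPMI + 2% B27 (no vitamin A)"),
    (2, 5, "BMP4 10 ng/ml"),
    (5, 5, "plate spheres on gelatin; RPMI/B27, no cytokines"),
    (1, 8, "rISL1 40 ug/ml (treated group only)"),
]

def treatments_on(day):
    return [t for d0, d1, t in SCHEDULE if d0 <= day <= d1]

for day in range(1, 9):
    print(day, treatments_on(day))
```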
Western blotting

Proteins were separated by 12% SDS-PAGE at 100 V for 2 h using a Mini-PROTEAN 3 electrophoresis cell (Bio-Rad, Hercules, CA, USA) and then transferred to a PVDF membrane by wet blotting (Bio-Rad, Hercules, CA, USA). Membranes were blocked for 1 h with 5% BSA and incubated for 1.5 h at room temperature (RT) with the respective primary antibodies [anti-ISL1 (Abcam, ab86472, 1:5000) and anti-β-TUBULIN (Millipore, P07437, 1:5000)]. At the end of the incubation period, membranes were rinsed 3 times (15 min each) with PBS-Tween-20 (0.05%) and incubated with the peroxidase-conjugated secondary antibody [anti-mouse (Millipore, 1:6000)], as appropriate, for 1 h at RT. The blots were visualized with Sigma detection reagents (Sigma, C9107), and the films were scanned with a densitometer (GS-800, Bio-Rad, Hercules, CA, USA). Quantification of the immunological signals was performed by Image Master software. The volume of each band was analyzed by dividing the volume percent of ISL1 by that of the housekeeping control (ISL1/β-TUBULIN) in order to assure uniformity of the protein amounts loaded on the gels.

Flow cytometric analysis

In order to quantify cardiac protein expression, cells were dissociated by trypsinization and centrifuged for 5 min at 1500 rpm to remove cell debris. After removal of the supernatant, cell viability was determined by trypan blue exclusion. Cells were then washed twice in PBS and fixed in 4% paraformaldehyde for 30 min at 4 °C. For permeabilization, 0.1% (v/v) Triton X-100 was used for 10 min. Nonspecific antibody binding was blocked for 30 min at RT with 10% heat-inactivated serum. The cells were incubated overnight at 4 °C with the appropriate primary antibodies, followed by 45 min at 37 °C with the secondary antibody. The antibodies used were the same as those for immunofluorescence staining, in addition to CONNEXIN43.

Quantitative reverse transcriptase PCR (qRT-PCR)

Gene expression was assessed by qRT-PCR for the genes of interest. The PCR mix in each well included 10 µl of SYBR Premix Ex Taq II (RR081Q, Takara Bio, Inc.), 6 µl dH₂O, 1 µl each of the forward and reverse primers (5 pmol/µl), and 2 µl of single-strand cDNA (16 ng/µl) in a final reaction volume of 20 µl. Primer sequences are given in Table S1. PCR was performed on a Rotor-Gene 6000 Real-Time PCR System (Corbett Life Science) using the following program: 95 °C for 10 min (stage 1), then 95 °C for 10 s, 60 °C for 20 s, and 72 °C for 20 s (stage 2), for 40 cycles. qRT-PCR was conducted using 3 biologically independent replicates. Thermal conditions were the same for all genes; the annealing temperature was 60 °C. Amplification specificity was verified by the melting curve method. Relative gene expression was calculated by the ΔΔCT method [27]. Target genes were normalized to the reference gene Gapdh, whose CT did not vary under the different experimental conditions when equal amounts of RNA were used. PCR efficiencies for the different primer pairs were close to 1, as determined by the standard curve method on serially diluted templates. All data are represented as log2-linear plots.

Statistical analysis

All quantitative experiments, including Western blot, qRT-PCR, and flow cytometry, were performed using 3 biologically independent replicates. Significant differences between groups were examined by the Student's t-test. P < 0.05 was considered statistically significant. Data are presented as mean ± SD.
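The ΔΔCT calculation referenced above ([27]) is compactly expressed in code. The sketch below uses invented CT values for illustration; only the formula, relative expression = 2^(−ΔΔCT) against the day 0 calibrator with the Gapdh reference, follows the text.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression by the delta-delta-CT method:
    2**-((CT_target - CT_ref)_sample - (CT_target - CT_ref)_calibrator).
    The calibrator here is the undifferentiated state (day 0)."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# Illustrative CT values (not measured data): Isl1 vs Gapdh, three
# biological replicates, day 8 sample against the day 0 calibrator.
ct_isl1 = np.array([24.1, 24.4, 23.9])
ct_gapdh = np.array([18.0, 18.2, 17.9])
fold = relative_expression(ct_isl1, ct_gapdh, ct_target_cal=29.5, ct_ref_cal=18.1)
print(f"log2 fold change: {np.log2(fold).mean():.2f} "
      f"+/- {np.log2(fold).std(ddof=1):.2f}")
```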
Results and Discussion

Direct protein transduction into cells, both in vitro and in vivo, is an efficient alternative to genetic manipulation and leads to the production of the safe cells required for cell therapy. Using this method, the concentration and duration of the proteins in the cells can be easily controlled [28,29]. We have demonstrated that the transduction of rISL1 protein enhanced hESC differentiation into the beating cardiomyocyte phenotype.

Establishment of a cardiac differentiation protocol

hESCs were expanded in feeder- and serum-free conditions on Matrigel-coated plates. hESCs (Royan H5) were induced to differentiate under adherent conditions and as aggregates in the absence of bFGF, with the addition of Activin A for one day and BMP4 for four days, and subsequently in growth factor-free medium in the presence of 2% B27, as described previously [26]. In adherent conditions, none of the colonies were able to beat during 40 days of culture (Fig. S1A). RT-PCR analysis of differentiated cells during days 0-14 showed the onset of expression of the cardiac markers Brachyury, Isl1, Mef2c, Tbx5, Gata4, Nkx2.5, αMHC, βMHC, and MLC2v (Fig. S1B). This showed that, despite morphological changes and the expression of some cardiac genes, the differentiated cells were not functional and therefore unable to beat.

In aggregate conditions, hESCs were initially expanded as suspension aggregates as previously described [6]. Next, 6-day aggregates were induced to differentiate as described above and plated on gelatin-coated plates. In this way, 20 ± 2.5% of the aggregates showed evidence of beating 5 days post-plating. qRT-PCR analysis of the aggregate differentiation of hESCs showed the maximum expression of the mesoendodermal marker, Brachyury, one day after Activin A treatment (day 2 after differentiation initiation) (Fig. 1). On continuing differentiation with BMP4 for the next 4 days, Isl1, a marker of precardiac mesoderm, and Actinin reached their highest expression levels.
Isl1 expression remained at a high level for the next 3 days; as its expression decreased, Mef2c, a cardiac progenitor marker, showed its maximum expression level. After that, the other cardiac progenitor genes Gata4, Nkx2.5, and Tbx5 reached their highest expression levels, respectively. Finally, the expression of MHC and cTnT, which are structural cardiomyocyte markers, reached its maximum level (Fig. 1). These data showed that in aggregate differentiation, the 3-D structure of the cells enhanced cardiac differentiation and functionality.

Expression pattern of ISL1 during differentiation

ISL1 is necessary for the proliferation and survival of ISL1⁺ progenitor cells and the inhibition of further cardiac differentiation; therefore, a decrease in its expression level is necessary before ISL1⁺ progenitor cells can differentiate into cardiac cells [11,12]. It is important to define the optimum times for the addition and removal of the rISL1 protein in order to achieve the highest numbers of ISL1⁺ progenitor and cardiac cells, respectively. To address this, we analyzed the expression pattern of Isl1 during the first 15 days of differentiation using qRT-PCR. Isl1 expression was detected after Activin A treatment and reached a maximum level at days 7-8 (Fig. 1). Therefore, we decided to add rISL1 protein to the differentiation medium from days 1-8 after the initiation of differentiation. In this condition, rISL1 treatment started from the first day of Isl1 gene expression, and none of the Isl1-expressing cells were missed. We did not continue rISL1 treatment after day 8, because it inhibited further differentiation.

Generation of cell-permeable rISL1 protein

A 522-bp fragment of the native Isl1 sequence was amplified from human cardiac precursor cell mRNA with specific primers, and its DNA sequence was confirmed by sequencing. The correct orientation of the primary cloning of Isl1 was demonstrated by a PCR that utilized the T7 forward and Isl1 reverse primers on the pENTR-D/TOPO ISL1 entry clone. A fusion protein (rISL1) consisting of the TAT transduction domain for protein transduction, an N-terminal histidine tag for protein purification, and the ISL1 protein was then generated using the bacterial pDest17/ISL1 expression vector (Fig. 2A). Since we used the native Isl1 sequence for protein expression, no exogenous nuclear localization signal was included. To produce rISL1 protein in the bacterial host, we used the pDest17 expression vector system, which is one of the most frequently employed ways to efficiently and effectively synthesize heterologous proteins in prokaryotic cells. This system possesses an exceptionally strong promoter allowing high-level production of recombinant proteins. The rISL1 protein (42 kDa) was successfully purified from the bacterial expression system, as demonstrated by SDS-PAGE and Western blot analysis using an anti-ISL1 antibody (Figs. 2B and C). The identity of the expressed protein was also confirmed by mass spectrometry (data not shown). We confirmed that the rISL1 protein bound to digested genomic DNA using a gel shift assay (Fig. 2D), which showed not only that the purified protein was rISL1, but also that it was functional and had the ability to bind DNA in vitro.

Penetration and stability of the rISL1 protein

Cellular uptake and stability of the rISL1 protein were confirmed by Western blot analysis of cell lysates from control or rISL1-treated hESCs (Fig. 2E).
Our results showed a higher abundance of ISL1 in the rISL1-treated group compared with the control, which suggested efficient penetration of the rISL1 protein into the cells (Fig. 2F). Temporal analysis showed that the rISL1 protein was detectable in the medium for up to 48 h in the presence or absence of cells; no significant decrease in the amount of protein was observed (Fig. 2E). rISL1 protein stability was further confirmed by monitoring Isl1 gene expression at different time points after the addition of the protein. Quantitative RT-PCR results indicated that during the 48 h after addition of the protein, expression of the Isl1 gene was 16- to 32-fold higher than in the control group (Fig. 2G). To examine the ability of rISL1 to regulate its own expression, we used an ISL1 reporter assay and added rISL1 protein to the ISL1-GFP reporter cell line. Our data showed that after rISL1 treatment, undifferentiated cells expressed 20.59 ± 3.67% GFP vs. 4.82 ± 1.25% GFP in the control group (Fig. 2H). This result suggested that rISL1 acts in a positive feedback loop, enhancing its own gene expression. Based on these results, we decided to add rISL1 protein to the differentiation media every other day during medium replacement. Cell penetration of the rISL1 protein was further confirmed by immunostaining analysis of adherent and aggregated cells using both anti-TAT and anti-ISL1 antibodies. Two hours following transduction of hESCs, most cells were positive for the labeled TAT or ISL1 protein (Fig. 2I). rISL1 proteins were detected around the nucleus in adherent and aggregated cells, which suggested the ability of the recombinant protein to penetrate deep inside the aggregates (Fig. 2I). This finding was consistent with the prevailing view that TAT can promote cellular uptake via endocytosis [30,31,32].

Defining rISL1 treatment conditions

Previous studies have demonstrated that the discontinuous or continuous addition of a recombinant protein to the cell culture media is also an important factor that should be considered [23,33]. In order to optimize protein transduction, the cells were treated with rISL1 either discontinuously (2 h/day) or continuously (from days 1-8). qRT-PCR analysis of differentiated cells at day 8 showed higher endogenous Isl1 expression in hESCs with the continuous protocol (P < 0.05, Fig. 3A). The following experiments were performed with continuous protein treatment. To study the dose dependency of rISL1 transduction, hESCs were exposed to different concentrations of the purified protein (10, 20, 30, and 40 µg/ml) in the continuous treatment during days 1-8 after differentiation initiation. We observed that concentrations greater than 40 µg/ml were lethal (data not shown). The differentiating cells at concentrations of 10 and 20 µg/ml of the rISL1 protein were morphologically similar to hematopoietic and endothelial progenitors, while at 30 and 40 µg/ml of rISL1 protein they showed cardiomyocyte and muscular appearances (Fig. 3B). According to qRT-PCR analysis, 40 µg/ml rISL1 protein induced more endogenous Isl1 and less expression of Mef2c and Nkx2.5 (Fig. 3C). These data were consistent with previous reports showing that ISL1 marks a common population of progenitors in the heart that can differentiate into cardiomyocytes, smooth muscle, and endothelial cells [9,10]. It seems that different levels of ISL1 protein direct cells towards specific lineages. However, more experiments are needed to find the exact amount of ISL1 expression required for each lineage differentiation.
Increasing cardiac differentiation using rISL1 protein

Based on the above-mentioned experiments, we continuously added 40 µg/ml of rISL1 protein to the differentiation medium from days 1-8 after differentiation initiation. The effect of rISL1 protein on the expression of endogenous Isl1 was analyzed using qRT-PCR at 1, 2, 3, 5, and 8 days after differentiation initiation. Our results showed that treated cells expressed more endogenous Isl1 than the untreated control (P < 0.05, Fig. 3D). We further continued differentiation to obtain beating clusters. The beating areas appeared at day 5 post-plating of the aggregates, and the percentage of beating areas was significantly higher in rISL1-treated cells than in the control (Fig. 3E). The difference was more pronounced at 14 days after plating, when the percentage of beating areas reached 75 ± 10% in rISL1-treated cells compared to 20 ± 2.5% in the control, with more than 1000 embryoid bodies assessed in each group (Fig. 3E). Therefore, rISL1 treatment resulted in a 3.2 ± 0.05-fold increase in the number of beating areas (Fig. 3F). In order to check the reproducibility of this protocol, the same experiments were performed using another hESC line, Royan H6. Our data indicated that rISL1 treatment also caused a 2.2 ± 0.4-fold increase in the number of beating areas in Royan H6. Temporal expression of cardiac genes showed the highest levels of Isl1, Mef2c, Hand1, Nkx2.5, Actinin, MHC, cTnT, Mlc2a, and MLC2v at day 14 in both hESC lines, Royan H5 and Royan H6 (Fig. 3G). Our data showed that MLC2v expression increased in the rISL1-treated groups while MLC2a decreased. These results suggested that the differentiated cells were directed toward ventricular cardiomyocytes. This observation was consistent with previous reports in which approximately 92% of cells within the right ventricle and about 20% of cells within the left ventricle of a normal heart were ISL1-positive [11]. Based on the Isl1 gene expression profile, rISL1 was added to the cell culture media from days 1-8 of differentiation, when the expression level of endogenous Isl1 was first detected (day 1) and reached its maximum level (day 8). The addition of rISL1 with BMP4 to the cell culture media may also enhance its effect.

Figure 3. (B) Morphology of differentiating cells at different rISL1 concentrations; cells at 30 and 40 µg/ml showed cardiomyocyte and muscular appearances, suggesting these concentrations are better suited for cardiac differentiation (*: P < 0.05). (C) qRT-PCR analysis of differentiated cells at day 8 with different concentrations of rISL1 showed that 40 µg/ml of the rISL1 protein induced more endogenous Isl1, but less Mef2c and Nkx2.5 expression (*: P < 0.05). (D) Schematic diagram of the differentiation protocol with the addition of rISL1 protein (40 µg/ml) after induction with Activin A (days 1-8); qRT-PCR analysis of endogenous Isl1 expression in hESCs demonstrated that treated cells expressed significantly more endogenous Isl1 than the untreated control (*: P < 0.05). (E) The percentage of beating clusters after continuous treatment of hESCs with 40 µg/ml rISL1 protein during days 1-8 after differentiation initiation, compared with the control (vehicle-treated) group; the percentage in the rISL1-treated group was significantly higher than in the untreated group at day 14 after plating (75 ± 10% vs. 20 ± 2.5%) (*: P < 0.05). (F) rISL1 treatment resulted in a 3.2 ± 0.5-fold increase in the number of beating areas compared with the untreated control group; rISL1 also caused a 2.2 ± 0.4-fold increase in the other hESC line, Royan H6, showing the reproducibility of this protocol for another hESC line (*: P < 0.05).
(G) In order to assess the expression of cardiac-specific genes, samples were collected at 3 stages, day 3 after plating (the day of rISL1 removal), day 14 after plating (the day of maximum beating), and day 20 after plating (the day beating decreased and cells were mature), and analyzed by qRT-PCR in the two hESC lines. Target genes were normalized to the reference gene Gapdh. The relative expression was calculated by dividing the normalized target gene expression of hESCs treated with rISL1 protein or elution buffer (as control) by that of the undifferentiated state (day 0). All data are statistically significant in comparison with the undifferentiated state (day 0) unless marked "ns" (ns: P > 0.05); a: P < 0.05 in comparison with the control (elution buffer-treated) group. All data are represented as log2-linear plots. doi:10.1371/journal.pone.0055577.g003

It has been demonstrated that ISL1 promoted BMP expression (such as BMP4 and BMP7), and the expression level of BMP4 was reduced in ISL1 mutant cells [11]. It is likely that rISL1 increases the number of beating cells by enhancing the expression of the gap junction protein Connexin40 (a major protein of the conduction system) through BMP signaling. It has been shown that BMP signaling is necessary for the expression of T-box transcription factors [34]. Connexin40 is one of the direct downstream targets of the T-box transcription factors, which play an important role in the conduction system [35]. Taken together, these data indicate that direct delivery of the transcription factor ISL1 by protein transduction enhanced cardiomyocyte differentiation of hESCs in vitro.

Conclusions

In this study we showed that, under cell culture conditions, the purified rISL1 protein was stable for at least 48 h. When the protein was added to hESC cultures, it efficiently penetrated the cells and enhanced the differentiation of two hESC lines into cardiac cells by up to 3-fold. This approach may pave the way for the scaled-up expansion of hESCs as carrier-free suspension aggregates for an extended period of time. It provides a controlled environment for a homogeneous culture and simplifies the handling and control of hESC differentiation, which are required for applications in bioreactor culture systems and cell therapy. Another advantage of this method is the absence of genetic manipulation of the hESCs, which may decrease the risk of their application in cell therapy. In conclusion, our data indicate that by the addition of rISL1 protein to the differentiation medium we successfully produced large numbers of functional cardiomyocytes that can be applied in drug discovery or cell therapy. However, further research is necessary to further increase the efficiency of differentiation using this method.
Effects of spatial structure and diffusion on the performances of the chemostat

Given the hydric capacity and nutrient flow of a chemostat-like system, we analyse the influence of a spatial structure on the output concentrations at steady state. Three configurations are compared: perfectly mixed, serial, and parallel with a diffusion rate. We show the existence of a threshold on the input concentration of nutrient at which the benefits of the serial and parallel configurations over the perfectly mixed one are reversed. In addition, we show that the dependency of the output concentrations on the diffusion rate can be non-monotonic, and give precise conditions for the diffusion effect to be advantageous. The study encompasses dead-zone models.

Introduction

The chemostat is a popular apparatus, invented simultaneously by Monod [20] and Novick & Szilard [23], for the so-called continuous culture of micro-organisms. It has the advantage of allowing the study of bacterial growth at steady state, in contrast to batch cultivation. In the classical experiments, the medium is assumed to be perfectly mixed, which justifies mathematical models described by systems of ordinary differential equations [29]. The chemostat model is also used in ecology for studying populations of micro-organisms, such as lake plankton or wetland ecosystems. In natural ecosystems, or in industrial applications that use large bioreactors, the assumption of a perfectly mixed medium is questionable. This is why spatial considerations have been introduced in the classical model of the chemostat, such as the gradostat model [16], which is a series of interconnected chemostats (of identical volumes). Segregated habitats are also considered in lakes, where the bottom can be modeled as a dead zone, with nutrient mixing between the two zones achieved by a diffusion rate [21]. The consideration of dead zones is also often used in bioprocess modelling [15,14,6,26,25,31,27].

Series of chemostats, instead of a single chemostat, have been shown to potentially improve the performances of bioprocesses, reducing the total residence time [13,17,10,11,12] or allowing species persistence [30,24]. These properties have, of course, economic impacts for the biotechnological industry, and there is a significant literature on the design of series of reactors and their comparison with plug-flow reactors (which can be seen as the limiting case of an arbitrarily large number of tanks of arbitrarily small volumes) [32,1,22,2,3,4,5]. Sometimes a radial diffusion is also considered in plug-flow reactors [7], but, surprisingly, configurations of tanks in parallel have been much less investigated [15]. One can argue that, knowing the input rates and volumes of tanks in parallel, their dynamical characteristics can be studied separately, so there is no need to devote a specific study to these configurations. This is no longer the case if one considers a passive communication between the tanks, through a membrane for instance. In saturated soils or wetlands, a spatial structure could be simply represented by separated domains with diffusive communication. This consideration is similar to the patch or island models commonly used in ecology [18,9], or to lattice differential equations [28]. For instance, a recent investigation studies the influence of such structures on a consumer/resource model [8]. Consumer/resource models in ecology are similar to chemostat models, apart from the source terms, which are modeled as constant intakes of nutrient instead of the dilution rates that one rather meets in liquid media.
In this paper, we propose to bring new insight on parallel configurations of chemostats with communication, in a spirit different from the one usually taken in bioprocess design. One usually chooses a target for the output concentration of substrate and looks for minimizing the total volume, or equivalently the residence time, among all the configurations that provide the same desired output at steady state. Here, we fix both the total hydric volume and the input flow and study the input-output map at steady state, investigating the role of the spatial structure on the performances of the system. The performance is here measured by the level of substrate that is degraded by the system and collected at the output. We draw precise comparisons between the three configurations: perfectly mixed, serial, and parallel (with diffusion rate), all with the same total hydric volume and flow rate. This set of configurations is far from exhaustive, being limited to two compartments only, but it is a first attempt to grasp the input-output map of a structured chemostat, to study how a spatial structure can modify this map, and to identify the key parameters. We believe that this study is of interest for the modelling of ecosystems such as saturated soils, for which it is not easy to know the spatial structure, and where one has only access to input-output observations of the substrate degradation.

The paper is organized as follows. In Section 2, we present the three configurations under investigation and give the equations of the models. The main part of the paper is devoted to the analysis of the steady states, given in Section 3. The proofs of the global stability of the equilibria are postponed to the Appendix, to lighten the presentation. Finally, discussion and numerical simulations are given in Section 4.

2 The models

The flow rate is labeled Q and V is the total capacity of the system. The three simple patterns we analyze are depicted on Figure 1. We recall the dynamical equations of the resource (nutrient) and biomass concentrations, respectively denoted by S_i and X_i, in a compartment i of volume V_i fed from a compartment i⁻ with a flow rate Q_i and connected by a diffusion rate d to a compartment i_d (see Figure 2). For the sake of simplicity of the analytical analysis, we assume that the growth function µ(·) is a linear function of the resource concentration: µ(S) = mS, with m > 0. In Section 4, we shall consider a Monod growth function and show that the qualitative results of our study are not changed. The yield coefficient y of the bio-conversion is kept equal to one (this is always possible by choosing the unit measuring the biomass). It is convenient to write dimensionless concentrations: for each concentration C_i in the compartment i (C_i can denote S_i or X_i), we define c_i = mC_iV/Q, so that the non-trivial equilibrium of a single perfectly mixed tank is normalized to s = 1. We shall also consider that the time t is measured in units such that Q = V. Finally, we assume that the input concentration S_in is large enough to avoid the (trivial) wash-out equilibrium being the only steady state in each compartment.

3 Steady-state analysis of the three configurations

Configuration with one compartment

The dynamical equations of the configuration with a single compartment are

ṡ = s_in − s − s x,
ẋ = −x + s x.

The non-trivial equilibrium is (1, s_in − 1) under the condition s_in > 1. Then, one has s*_out = 1.

Remark. It is a well known property from the theory of the chemostat that the output concentration at steady state is independent of the input concentration, provided the latter is large enough (i.e. s_in ≥ 1).
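This independence of s*_out from s_in is easy to check numerically. The sketch below integrates the dimensionless single-compartment model with µ(s) = s (units Q = V) and shows the state settling on (1, s_in − 1) for several inputs; the time horizon and initial condition are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless single-compartment chemostat with linear growth mu(s) = s.
def chemostat(t, y, s_in):
    s, x = y
    return [s_in - s - s * x, -x + s * x]

for s_in in (1.5, 3.0, 10.0):
    sol = solve_ivp(chemostat, (0, 200), [s_in, 0.1], args=(s_in,), rtol=1e-8)
    print(f"s_in={s_in:5}: s* = {sol.y[0, -1]:.4f}, x* = {sol.y[1, -1]:.4f}")
```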
Serial connection of two compartments

The dynamical equations of the model with two compartments in series (see Figure 1), of volumes rV and (1 − r)V with r different from 0 and 1, are

ṡ₁ = (1/r)(s_in − s₁) − s₁x₁,
ẋ₁ = −(1/r)x₁ + s₁x₁,
ṡ₂ = (1/(1 − r))(s₁ − s₂) − s₂x₂,
ẋ₂ = (1/(1 − r))(x₁ − x₂) + s₂x₂.    (1)

Proposition 1. When s_in > 1/r, there exists a unique equilibrium (s*₁, x*₁, s*₂, x*₂) of (1) on the positive orthant. One has necessarily s*₁ = 1/r and s*₂ < min(1/r, 1/(1 − r)). Furthermore, one has s*_out = s*₂ < 1 exactly when s_in > 1 + 1/r.

Proof. One can readily check that there exists a non-trivial equilibrium (1/r, s_in − 1/r) for the first compartment exactly when s_in > 1/r. Furthermore, this equilibrium is unique. Then, any equilibrium of the overall system (1) has to be of the form (s*₂, s_in − s*₂) for the second compartment, with s*₂ solution of the equation

(1/(1 − r))(1/r − s) = s(s_in − s)    (2)

with s*₂ < 1/r. One can easily verify that there exists a unique s*₂ solution of (2) on (0, 1/r). Graphically, s*₂ is the abscissa of the intersection of the graphs (see Figure 3) of the polynomial function φ₂(s) = s(s_in − s) and the affine function ψ(s) = (1/(1 − r))(1/r − s). Remark that φ₂(1) = s_in − 1 exceeds ψ(1) = 1/r exactly when s_in > 1 + 1/r, which gives the comparison with the value obtained in the configuration of one compartment: s*₂ < 1 if and only if s_in > 1 + 1/r.

Parallel interconnection of two compartments

The dynamical equations of the model with two compartments in parallel and diffusion (see Figure 1), of volumes rV and (1 − r)V receiving the flow rates αQ and (1 − α)Q, with r different from 0 and 1, are the following:

ṡ₁ = (α/r)(s_in − s₁) + (d/r)(s₂ − s₁) − s₁x₁,
ẋ₁ = −(α/r)x₁ + (d/r)(x₂ − x₁) + s₁x₁,
ṡ₂ = ((1 − α)/(1 − r))(s_in − s₂) + (d/(1 − r))(s₁ − s₂) − s₂x₂,
ẋ₂ = −((1 − α)/(1 − r))x₂ + (d/(1 − r))(x₁ − x₂) + s₂x₂,    (3)

where the output concentration s_out is given by s_out = αs₁ + (1 − α)s₂. The wash-out in both compartments corresponds to the trivial equilibrium (s_in, 0, s_in, 0), which leads to the trivial steady state s*_out = s_in. For convenience, we posit

α₁ = α/r,  α₂ = (1 − α)/(1 − r),

and assume, without any loss of generality, that one has α₂ ≥ α₁ (if it is not the case, one can just exchange the indexes 1 and 2).

When d = 0 (no diffusion), the equilibrium of the system can be determined independently in the two compartments, as simple chemostats. In this case, there is a unique globally stable equilibrium (s*₁, x*₁, s*₂, x*₂) in the non-negative orthant, where s*ᵢ = min(αᵢ, s_in) (i = 1, 2). When d > 0, we define the functions

φ₁(s) = s + (s_in − s)(rs − α)/d,
φ₂(s) = s + (s_in − s)((1 − r)s − (1 − α))/d.

Proposition 2. When s_in > 1 and d > 0, there exists a unique non-trivial equilibrium (s*₁, x*₁, s*₂, x*₂), where x*ᵢ = s_in − s*ᵢ and (s*₁, s*₂) is the unique solution of the system

s₂ = φ₁(s₁),  s₁ = φ₂(s₂)    (4)

on the domain (0, s_in) × (0, s_in), with

α₁ < s*₁ < min(α₂, s_in)    (5)

when α₂ > α₁.

Proof. At equilibrium, summing the substrate and biomass equations in (3), one deduces (see the Appendix) the property x*ᵢ = s_in − s*ᵢ. Consequently, an equilibrium in the positive orthant has to fulfill s*ᵢ < s_in. Replacing xᵢ by s_in − sᵢ in (3) at equilibrium, one obtains the equations

α₁(s_in − s₁) + (d/r)(s₂ − s₁) = s₁(s_in − s₁),
α₂(s_in − s₂) + (d/(1 − r))(s₁ − s₂) = s₂(s_in − s₂),    (6)

which amounts to writing that (s*₁, s*₂) is a solution of the system (4), or equivalently that s*₁ is a zero of the function g(s) = φ₂(φ₁(s)) − s. When α₂ > α₁, one has necessarily α₁ < 1, and the condition s_in > 1 implies g(α₁) < 0. We distinguish two cases, according to whether α₂ < s_in or α₂ ≥ s_in. In both cases, the Rolle and Mean Value Theorems allow us to conclude the existence of s*₁ ∈ (α₁, min(α₂, s_in)) such that g(s*₁) = 0. In any case, we obtain the existence of (s*₁, s*₂) solution of (4), and the inequalities (5) are fulfilled.

Finally, notice that the functions φ₁(·), φ₂(·) are both strictly concave, and the steady states (s*₁, s*₂) are intersections of G₁, the graph of the function φ₁(·), and G₂, the symmetric of the graph of φ₂(·) with respect to the first diagonal. Consequently, if (s*₁, s*₂) is a steady state different from (s_in, s_in), G₁ and G₂ are respectively above and below the line segment joining (s*₁, s*₂) to (s_in, s_in). We conclude that there exists at most one non-trivial equilibrium.
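Both structured steady states reduce to elementary root-finding: Equation (2) is scalar, and system (4)/(6) is planar. A numerical sketch under the linear growth µ(s) = s, with illustrative parameter values (s_in, r, α, d chosen here for demonstration only):

```python
import numpy as np
from scipy.optimize import brentq, fsolve

s_in, r, alpha, d = 1.8, 0.7, 0.3, 0.5
a1, a2 = alpha / r, (1 - alpha) / (1 - r)   # alpha_1, alpha_2

# Serial: s1* = 1/r, and s2* solves Equation (2) on (0, 1/r).
g2 = lambda s: s * (s_in - s) - (1 / (1 - r)) * (1 / r - s)
s2_serial = brentq(g2, 1e-12, 1 / r - 1e-12) if s_in > 1 / r else s_in

# Parallel: (s1*, s2*) solves system (6) with x_i = s_in - s_i.
def F(s):
    s1, s2 = s
    return [a1 * (s_in - s1) + (d / r) * (s2 - s1) - s1 * (s_in - s1),
            a2 * (s_in - s2) + (d / (1 - r)) * (s1 - s2) - s2 * (s_in - s2)]

s1p, s2p = fsolve(F, x0=[min(a1, 1.0), 1.0])
print("serial   s_out* =", round(s2_serial, 4))
print("parallel s_out* =", round(alpha * s1p + (1 - alpha) * s2p, 4),
      "| alpha_1 < s1* < min(alpha_2, s_in):", bool(a1 < s1p < min(a2, s_in)))
```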
Corollary 2. At the non-trivial equilibrium, one has g′(s*₁) > 0.

Proof. When α₁ = α₂, one has s*₁ = s*₂ = 1 and one can easily check the property directly. When α₂ > α₁, one has g(α₁) < 0, and we recall from the proof of the former Proposition that s*₁ is the unique zero of g(·) on (α₁, min(α₂, s_in)). We conclude that g is non-decreasing at s*₁. Notice that φ₁ and φ₂ are concave functions, so that g′(s*₁) cannot be equal to zero; consequently, one has g′(s*₁) > 0. The global stability of the non-trivial equilibrium is proved in the Appendix.

Proposition 2 properly defines the map d ↦ s*_out = αs*₁ + (1 − α)s*₂ at the unique non-trivial steady state, which we aim at studying as a function of d. According to Proposition 2, s*_out is equal to one for any value of the parameter d in the non-generic case α₂ = α₁. We shall focus on the case α₂ ≠ α₁ (and, without loss of generality, we shall consider α₂ > α₁). We start with the two extreme situations: no diffusion and infinite diffusion.

Lemma 1. For the non-trivial equilibrium at d = 0, one has s*_out ≥ 1 exactly when s_in ≥ s⁰_in := (1 − αα₁)/(1 − α), a number that belongs to (1, 2).

Proof. Under the assumptions s_in > 1 and α₂ ≥ α₁, one has s*_out(0) = αα₁ + (1 − α) min(α₂, s_in) (recall that α₁ ≤ 1 < s_in), and we distinguish two cases. When α₂ ≥ s_in, one can write s*_out(0) ≥ 1 exactly when s_in ≥ s⁰_in (recall that assuming α₂ ≥ α₁ imposes α < 1, so that s⁰_in is well defined). Notice that the number s⁰_in is necessarily larger than one because α₁ ≤ 1. When α₂ < s_in, the identity rα₁ + (1 − r)α₂ = 1 gives αα₁ + (1 − α)α₂ = rα₁² + (1 − r)α₂² ≥ 1, so that s*_out(0) ≥ 1, and one has also s_in > α₂ ≥ s⁰_in. Consequently, one concludes that s*_out(0) ≥ 1 exactly when s_in ≥ s⁰_in. Finally, remark that one has s⁰_in < 2.

Lemma 2. For s_in > 1, the non-trivial equilibrium fulfills lim s*₁ = lim s*₂ = 1 when d → +∞, and consequently lim s*_out = 1.

Proof. For any d > 0, the Proposition guarantees the existence of a unique non-trivial equilibrium (s*₁, s*₂) ∈ (0, s_in) × (0, s_in) that is a solution of (6). When d is arbitrarily large, one obtains from (6) that s*₂ − s*₁ has to tend to zero, the other terms remaining bounded. From equations (6), one also deduces the following equality, valid for any d:

s_in − (αs*₁ + (1 − α)s*₂) = r s*₁(s_in − s*₁) + (1 − r) s*₂(s_in − s*₂),

obtained by taking the combination r × (first equation) + (1 − r) × (second equation), taking into account the equalities rα₁ = α and (1 − r)α₂ = 1 − α. Consequently, any common limit ℓ of s*₁ and s*₂ satisfies s_in − ℓ = ℓ(s_in − ℓ), i.e. ℓ = 1 or ℓ = s_in. If α₂ < s_in, the property s*₁ < α₂, valid for any d > 0, implies that s*₁ cannot converge to s_in; one checks similarly that the case α₂ ≥ s_in with lim s*₁ = lim s*₂ = s_in is impossible. Finally, one has lim s*₁ = lim s*₂ = 1 and consequently lim s*_out = 1.

We now present our main result concerning the properties of the map d ↦ s*_out(d) defined at the non-trivial steady state.

Proposition 3.
- When s_in ≥ 2, the map d ↦ s*_out(d) (for the non-trivial equilibrium) is decreasing and s*_out(d) > 1 for any d ≥ 0.
- When s_in < 2, the map d ↦ s*_out(d) (for the non-trivial equilibrium) admits a minimum at some d* < +∞, whose value is strictly less than one. Furthermore, d* is strictly positive as soon as s_in > s̲_in := 2α₁α₂/(α₁ + α₂).

Proof. Differentiating the equations (6) with respect to d at steady state yields a linear system

Γ (∂_d s*₁, ∂_d s*₂)ᵀ = b,    (7)

where Γ is a 2 × 2 matrix and the components of the right-hand side b, denoted A and B, depend on (s*₁, s*₂, d). From Corollary 2, one has det(Γ) < 0, and one deduces the signs of the derivatives from those of A and B. From the inequalities (5), one obtains B > 0 and deduces ∂_d s*₁ > 0 for any d. With Lemma 2, we conclude that s*₁(d) < 1 for any d. From equations (7), when s_in ≥ 2 one has A > 0 and then ∂_d s*₂ < 0; with Lemma 2, we conclude that s*₂(d) > 1 for any d. Then, denoting by σ a quantity with the sign of ∂_d s*_out, one obtains the inequality σ < (s_in − 2)(α₁ − α₂) ≤ 0, which proves with Lemma 2 that s*_out is a decreasing function of d that converges to one.

When s_in < 2, since s*₁ and s*₂ tend to one when d takes arbitrarily large values, we conclude that there exists d̄ < +∞ such that σ > 0 for any d > d̄; consequently, s*_out is smaller than one and increasing for d > d̄. We conclude that the map d ↦ s*_out(d) admits a minimum, say at d* < +∞, whose value is strictly less than one.
Remark that the case d* > 0 is feasible because of the inequality 2α₁α₂ < min(2, α₂)(α₁ + α₂), which expresses that s̲_in < min(2, α₂). We conclude that for s_in larger than this last value s̲_in, d* is necessarily strictly positive.

Remark. The particular case α = 0 corresponds to a configuration of a perfectly mixed tank of volume (1 − r)V connected to a dead zone of volume rV. This is a way to approximate a non-well-mixed tank or segregated bioreactors of total volume V, estimating the fraction of the volume occupied by the highly agitated area.

4 Numerical computation and discussion

Propositions 1 and 3 reveal the existence of a threshold on the value of the input concentration s_in (equal to 2 for our choice of the parameter units) that reverses the performances of the serial and parallel configurations in terms of s*_out, compared to the single-tank case (for which s*_out = 1):

- for s_in > 2, there exist serial configurations such that s*_out < 1 for r large enough (i.e. the first tank has to be large enough), but any parallel configuration produces s*_out > 1;
- for s_in < 2, there exist parallel configurations such that s*_out < 1, while any serial configuration has s*_out > 1.

There exists another threshold s⁰_in ∈ (1, 2) such that configurations with s*_out < 1 require d to be large enough when s_in > s⁰_in (cf. Lemma 1). Furthermore, the best performance of the parallel configuration is obtained:

- for arbitrarily large values of d when s_in > 2,
- for a finite positive d* when s_in ∈ (s̲_in, 2) (where the expression of s̲_in is given in Proposition 3).

For the parallel interconnection, we depict on Figure 6 the two kinds of configurations that occur, depending on whether the number s_in is larger than one or not. The analytic analysis of Section 3 has been conducted under the assumption of linearity of the function µ(·). It is frequent in microbiology that the growth rate µ(·) presents a concavity, as described by the usual Monod (or Michaelis-Menten) function. We have computed numerically the same curves s*_out(·) as in Figures 5 and 6, considering the Monod function

µ(S) = 6S/(5 + S)

instead of the linear function (see Figure 7). This function has been chosen to fulfill s*_out = 1 for the single-tank configuration, guaranteeing the same steady state as the linear growth for this configuration. The values of the parameters are given in the table below.

On Figures 8 and 9, we observe that the concavity of the growth function does not change the theoretical results qualitatively, including the existence of a threshold on s_in that favors one of the configurations. We notice on all the figures that the yield is better with the Monod function for the parallel configuration and worse for the serial one. This implies that the threshold on s_in, determined to be equal to 2 in the linear case, is higher when the growth function is concave.

Remarks. The serial configuration for the limiting value r = 1 is equivalent to a single tank. This explains why all the curves on Figures 5 and 8 pass through the same point at r = 1. For the parallel configuration with α = 0.1 and r = 0.9, one has α₂ = 9. This implies that for the limiting value d = 0 the only equilibrium in the second tank is the wash-out when s_in < 9. This is not the case for the first tank, but the flow rate αQ being small, the output s*_out remains close to s_in in any case, as one can see on Figures 6 and 9 for small values of the parameter d.
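The curves of Figures 5-9 can be reproduced, at least qualitatively, from the steady-state balances. The sketch below solves the parallel system (6) over a grid of diffusion rates and locates the minimum of d ↦ s*_out(d), for the linear growth and for the Monod function above; the choices r = 0.5 and α = 0.3 are illustrative, and the initial guess for the root finder targets the non-trivial equilibrium.

```python
import numpy as np
from scipy.optimize import fsolve

# Steady-state balance of the parallel configuration with x_i = s_in - s_i.
def equations(s, s_in, r, alpha, d, mu):
    s1, s2 = s
    f1 = (alpha / r) * (s_in - s1) + (d / r) * (s2 - s1) \
         - mu(s1) * (s_in - s1)
    f2 = ((1 - alpha) / (1 - r)) * (s_in - s2) \
         + (d / (1 - r)) * (s1 - s2) - mu(s2) * (s_in - s2)
    return [f1, f2]

def s_out(d, s_in, r=0.5, alpha=0.3, mu=lambda s: s):
    s1, s2 = fsolve(equations, x0=[0.5, 0.5], args=(s_in, r, alpha, d, mu))
    return alpha * s1 + (1 - alpha) * s2

linear = lambda s: s
monod = lambda s: 6 * s / (5 + s)      # the Monod function used above
for s_in in (1.5, 3.0):                # below and above the threshold 2
    d_grid = np.linspace(0.01, 20, 200)
    for name, mu in (("linear", linear), ("Monod", monod)):
        vals = [s_out(d, s_in, mu=mu) for d in d_grid]
        i = int(np.argmin(vals))
        print(f"s_in={s_in} ({name}): min s_out*={vals[i]:.4f} "
              f"at d={d_grid[i]:.2f}")
```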
Conclusion

Given a flow rate and the total volume of a chemostat system, this study shows the existence of a threshold on the value of the input concentration s_in such that, above and below this threshold, the serial and the parallel configurations are respectively the best ones with respect to the criterion of minimizing the output concentration s⋆_out at steady state. For the parallel scheme, the best performance is obtained for a precise value of the diffusion parameter, which is proved to be positive when s_in is not too small. This study also concerns dead-zone configurations, as particular cases of the parallel configuration.

Whatever the data of the problem, there always exists a configuration that is better than a single perfectly mixed tank. We have shown that the non-trivial steady states are unique and globally exponentially stable under the assumption of a linear increasing growth rate.

Finally, this study reveals the role of spatial structure on the performance of simple ecosystems or bioprocesses. The possibly non-monotonic influence of the diffusion parameter on the output steady state is not intuitive and leaves further investigations open for understanding, or taking advantage of, this property for natural ecosystems (such as saturated soils or wetlands) as well as for bioprocesses (such as waste-water treatment). This result can also be of interest for reverse engineering, when deciding which of the serial or parallel configurations is better suited to the modeling of chemostat-like ecosystems, provided that one has an estimation of the hydric capacity of the system.

Acknowledgments. This work has been achieved within the VITELBIO (VIRtual TELluric BIOreactors) program, sponsored by INRA and INRIA. The authors are grateful for this support. The work is also part of the PhD thesis of the first author.

6 Appendix: global exponential stability of the non-trivial equilibrium

First, one can easily check that the domain D = ℝ⁴₊ is invariant by the dynamics (1) and (3). We consider the two-dimensional vector z of variables z_i = s_in − x_i − s_i (i = 1, 2), whose dynamics, for the serial and parallel configurations respectively, are linear: dz/dt = A_s z and dz/dt = A_p z. Notice that the matrices A_s and A_p are Hurwitz, so z converges exponentially toward 0 for both systems, which implies that dynamics (1) and (3) are dissipative, in the sense that any solution of (1) or (3) in D converges exponentially to the compact set K = {(s_1, x_1, s_2, x_2) ∈ D s.t. x_1 + s_1 = s_in and x_2 + s_2 = s_in}.

We recall a result from [19, Theorem 1.8] that shall be useful in the following.

The serial configuration

Proposition 5. Under the condition s_in > 1/r, any trajectory of (1) with initial condition in D such that (s_1(0), x_1(0)) ≠ (s_in, 0) converges exponentially to the unique non-trivial steady state (s⋆_1, x⋆_1, s⋆_2, x⋆_2) given by Proposition 1.
Proof. Dynamics (1) has a cascade structure. It is straightforward to check that the solutions of the (s_1, x_1) sub-system converge asymptotically towards the non-trivial equilibrium (1/r, s_in − 1/r) from any initial condition away from the wash-out equilibrium (s_in, 0). From the convergence of z_2 toward 0, we deduce that the s_2 variable has to converge to the bounded interval [0, s_in] and that its dynamics can be written as a scalar non-autonomous differential equation (8). This last dynamics has the property of being asymptotically autonomous, with a limiting differential equation (9) of the form ds_2/dt = f(s_2). The statement of Proposition 1 implies that this last scalar dynamics has a unique equilibrium s⋆_2 that belongs to [0, s_in]. Furthermore, one has f(0) > 0 and f(s_in) < 0. Consequently, any solution of (9) in [0, s_in] converges asymptotically to s⋆_2. Then, applying Theorem 4, we conclude that any bounded solution of (8) converges to s⋆_2. Finally, any solution of the (s_2, x_2) sub-system converges asymptotically to (s⋆_2, s_in − s⋆_2).

The Jacobian matrix of dynamics (1) at the non-trivial equilibrium (s⋆_1, x⋆_1, s⋆_2, x⋆_2) has eigenvalues that are both negative numbers, according to Proposition 1. The exponential stability of the non-trivial equilibrium is thus proved.

The parallel configuration

Proposition 6. When s_in > 1 and d > 0, any trajectory of (3) with initial condition in D such that x_1(0) > 0 and x_2(0) > 0 converges exponentially to the unique non-trivial steady state (s⋆_1, x⋆_1, s⋆_2, x⋆_2) given by Proposition 2.

Proof. Considering the time vector z(·), the (s_1, s_2) sub-system of dynamics (3) can be written as the solution of a non-autonomous planar dynamics (10). We know that z converges to 0, and consequently the vector S of the variables (s_1, s_2) asymptotically approaches the solutions of the limiting autonomous planar dynamics (11)-(12). From the Poincaré-Bendixson theorem and the Dulac criterion, we conclude that bounded trajectories of (12) cannot have limit cycles or closed paths and necessarily converge to an equilibrium point. Consequently, any trajectory of (11) in S either converges to the rest point S⋆ = (s⋆_1, s⋆_2) or approaches the boundary B. Notice that one has s_i = s_in, s_j < s_in ⇒ ds_i/dt < 0 (i ≠ j), so the only possibility for approaching B is to converge to the other rest point S⁰ = (s_in, s_in). This shows that the only non-empty, closed, connected, invariant and chain-recurrent subsets of S are the singletons {S⋆} and {S⁰}.

Applying Theorem 4, we conclude that any trajectory of (10), issued from an initial condition of dynamics (3) in D, converges asymptotically to S⋆ or S⁰. Consider now any initial condition with x_1(0) > 0 and x_2(0) > 0. We show that the solution (s_1(·), s_2(·)) of (3) cannot converge to S⁰. If it were the case, there would exist T < +∞ such that s_1(t) > α_1 and r s_1(t) + (1 − r) s_2(t) > 1 for any t ≥ T, under the assumption s_in > 1. Let us consider the function V(x_1, x_2) = min(r x_1 + (1 − r) x_2, x_1) (see Figure 10) and v(t) = V(x_1(t), x_2(t)), which is positive and tends to 0 when t tends to +∞. A direct computation shows that dv/dt ≥ 0 for t ≥ T; we conclude that the function t → v(t) is non-decreasing for t ≥ T and consequently cannot converge to zero, thus a contradiction.

The Jacobian matrix of dynamics (3) at the non-trivial equilibrium (s⋆_1, x⋆_1, s⋆_2, x⋆_2) is block-triangular in the (z_1, z_2, s_1, s_2) coordinates. Recall that A_p is Hurwitz; one then checks that the remaining diagonal block is also Hurwitz, which proves the exponential stability.

Figure 1: The set of configurations under investigation.

For the serial configuration, the graph of the function s⋆_out is plotted as a function of r ∈ [1/s_in, 1] in Figure 5, for different values of the input concentration s_in.
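The curves of Figure 5 can be approximated with a few lines of code. The sketch below (again not the authors' code) uses an assumed nondimensional serial model with linear growth μ(s) = s: tank 1 settles at s⋆_1 = 1/r, and s⋆_2 is the root of the scalar function f from the proof of Proposition 5, which satisfies f(0) > 0 and f(s_in) < 0; the explicit form of f used here is a reconstruction.

```python
# Sketch (assumed model, not the authors' code): outlet concentration of the
# serial two-tank chemostat with linear growth, as a function of r.
import numpy as np
from scipy.optimize import brentq

def s_out_serial(s_in, r):
    """Outlet concentration s_out of the serial configuration (needs s_in > 1/r)."""
    s1 = 1.0 / r                                   # non-trivial equilibrium of tank 1
    # Steady state of tank 2, using x2 = s_in - s2 (cf. proof of Proposition 5):
    f = lambda s2: (s1 - s2) / (1.0 - r) - s2 * (s_in - s2)
    return brentq(f, 0.0, s_in)                    # f(0) > 0 and f(s_in) < 0

for s_in in (1.5, 3.0, 5.0):
    for r in np.linspace(1.0 / s_in + 0.02, 0.98, 5):
        print(f"s_in={s_in}  r={r:.2f}  s_out={s_out_serial(s_in, r):.3f}")
```

The printed values should show s⋆_out < 1 only when s_in > 2 and r is large enough, consistent with the discussion above, and s⋆_out → 1 as r → 1, where the serial scheme degenerates to a single tank.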
Figure 5: Comparison of s⋆_out for the serial configuration.

Figure 6: Comparison of s⋆_out for the parallel configuration (s_in > 1 on the left and s_in < 1 on the right).

Figure 9: Comparison of s⋆_out for the parallel configuration (s_in > 1 on the left and s_in < 1 on the right).

Figure 10: Iso-values of the V function.
6,358.8
2010-11-16T00:00:00.000
[ "Mathematics" ]
Identification of novel TGF-β/Smad gene targets in dermal fibroblasts using a combined cDNA microarray/promoter transactivation approach.

Despite major advances in the understanding of the intimate mechanisms of transforming growth factor-β (TGF-β) signaling through the Smad pathway, little progress has been made in the identification of direct target genes. In this report, using cDNA microarrays, we have focussed our attention on the characterization of extracellular matrix-related genes rapidly induced by TGF-β in human dermal fibroblasts and attempted to identify the ones whose up-regulation by TGF-β is Smad-mediated. For a gene to qualify as a direct Smad target, we postulated that it had to meet the following criteria: (1) rapid (30 min) and significant (at least 2-fold) elevation of steady-state mRNA levels upon TGF-β stimulation, (2) activation of the promoter by both exogenous TGF-β and co-transfected Smad3 expression vector, (3) up-regulation of promoter activity by TGF-β blocked by both dominant-negative Smad3 and inhibitory Smad7 expression vectors, and (4) promoter transactivation by TGF-β not possible in Smad3−/− mouse embryo fibroblasts. Using this stringent approach, we have identified COL1A2, COL3A1, COL6A1, COL6A3, and tissue inhibitor of metalloproteases-1 (TIMP-1) as definite TGF-β/Smad3 targets. Extrapolation of this approach to other extracellular matrix-related gene promoters also identified COL1A1 and COL5A2, but not COL6A2, as novel Smad targets. Together, these results represent a significant step toward the identification of novel, early-induced Smad-dependent TGF-β target genes in fibroblasts.

Members of the TGF-β superfamily (activin, bone morphogenic proteins, TGF-βs, and decapentaplegic) are multifunctional cytokines that control various aspects of cell growth and differentiation and play an essential role in embryonic development, tissue repair, or immune homeostasis (1,2).
In addition, TGF-β is the prototypic fibrogenic cytokine, enhancing extracellular matrix (ECM) gene expression and down-regulating that of matrix-degrading enzymes. Increased expression of TGF-β is often associated with fibrotic states and abnormal accumulation of ECM proteins in affected tissues (3-6).

The TGF-βs signal via serine/threonine kinase transmembrane receptors, which phosphorylate cytoplasmic mediators of the Smad family (7-9). The ligand-specific Smad1, Smad2, Smad3, and Smad5 interact directly with, and are phosphorylated by, activated type I TGF-β receptors. Smad1 and Smad5 are specific for bone morphogenic proteins, whereas Smad2 and Smad3 can be activated by both TGF-β and activin receptors. Receptor-activated Smads are kept in the cytoplasm in the basal state, bound to the protein SARA (Smad anchor for receptor activation) (10). Upon phosphorylation at their SSXS carboxyl-terminal motif, they are released from SARA and form heteromeric complexes with Smad4, a common mediator for all Smad pathways. The resulting Smad heterocomplexes are then translocated into the nucleus, where they activate target genes, binding DNA either directly or in association with other transcription factors. Members of the third group of Smads, the inhibitory Smads Smad6 and Smad7, prevent phosphorylation and/or nuclear translocation of receptor-associated Smads (7-9).

TGF-β also initiates other signaling pathways, such as the stress-activated protein kinase/c-Jun amino-terminal kinase (JNK) pathway (11). This intracellular signaling proceeds through sequential activation of a mitogen-activated protein kinase/extracellular signal-regulated kinase kinase kinase (MEKK1), a mitogen-activated protein kinase kinase (MKK4 or MKK7), and a mitogen-activated protein kinase, JNK. JNK then translocates into the nucleus, where it phosphorylates several transcription factors including c-Jun, ATF-2, and Elk-1 (12), leading to specific transcriptional responses.

Despite the fundamental role played by TGF-β in ECM remodeling and as a fibrogenic factor, little is known about the role of Smad signaling in ECM gene expression. In this report, we have used complementary techniques, differential hybridization of cDNA microarrays together with precise gene promoter analyses, to search for novel fibroblast Smad targets. This approach allowed us to characterize six novel Smad targets, COL1A1, COL3A1, COL5A2, COL6A1, COL6A3, and TIMP-1, and to propose an additional list of 49 immediate-early TGF-β target genes whose activation by TGF-β is rapid and does not require either protein neo-synthesis or JNK activity, therefore representing potential novel Smad targets.

Differential Hybridization of Atlas™ Human cDNA Expression Arrays

Total RNA from control and TGF-β-treated fibroblasts was obtained using an RNeasy kit (Qiagen) and treated with DNase I to avoid genomic DNA contamination of reverse transcription reactions. Radioactive cDNA synthesis was carried out as described in the Atlas™ cDNA expression arrays user manual (CLONTECH, San Diego, CA). Equal amounts of ³³P-radiolabeled cDNAs (10⁷ cpm) from control and TGF-β-treated fibroblast RNA samples were hybridized in parallel to Atlas™ human cell interaction cDNA expression arrays (catalog number 7746-1; CLONTECH) for 18 h at 68 °C. The filters were then washed four times in 2× SSC and 1% SDS for 30 min at 68 °C and twice in 0.1× SSC and 0.5% SDS at 68 °C, according to the manufacturer's protocol. Membranes were then exposed to Eastman Kodak Co.
phosphor screens for 3 days. Hybridization signals were quantified with a Storm 840 phosphorimager using ImageQuant software (Amersham Pharmacia Biotech) and normalized against glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels in the same samples. Significant modulation of gene expression was set arbitrarily to 2-fold.

Transient Cell Transfection and CAT Reporter Assays

Transient cell transfections were performed with the calcium phosphate/DNA co-precipitation procedure. CAT activity was measured using [¹⁴C]chloramphenicol as substrate (40), followed by thin layer chromatography and quantitation with a phosphorimager.

Effects of TGF-β on Fibroblast ECM-related Gene Expression Profiles as Measured by cDNA Microarray Analysis

The technique of differential hybridization of cDNA expression arrays was used to identify differences in the expression pattern of 265 known ECM-related genes between control and TGF-β-treated fibroblasts. Because Smad activation and nuclear translocation occur within minutes and Smad-DNA complexes can be observed as early as 10 min after TGF-β addition into fibroblast culture medium (13), we focused our attention on early time points, to determine which genes are activated rapidly by TGF-β, as opposed to secondary gene activation that may involve protein/transcription factor neo-synthesis. At each of the time points tested (30, 60, 120, and 240 min), RNA was extracted from both control and TGF-β-treated fibroblast cultures, and differential hybridization of cDNA arrays was performed.

Among the 265 genes whose probe sets are represented on the Atlas™ cell interaction cDNA arrays used in these experiments, 77 showed no significant hybridization signal in either control or TGF-β-treated cultures at any of the time points tested (not shown). Among the 188 genes detected, 90 showed no or little alteration in their expression levels upon TGF-β treatment. The remaining 98 genes, modulated by TGF-β, were classified into clusters, based upon the temporal profile of their activation (Fig. 1). Clusters 1-3 contain 58 genes whose expression is strongly up-regulated 30 min after TGF-β addition and keeps increasing with time (cluster 1), reaches a plateau (cluster 2), or returns rapidly to basal level (cluster 3). Clusters 4-6 comprise genes whose expression is delayed; their expression is not noticeably up-regulated by TGF-β at the 30-min time point and then follows various patterns of temporal regulation. The complete list of genes contained within these clusters is provided in Table I.

Table I. List of genes classified in clusters according to their induction kinetics by TGF-β. 98/188 expressed genes derived from cDNA microarray (Atlas™ cell interaction array, CLONTECH catalog number 7746-1) analysis were significantly (over 2-fold) up-regulated by TGF-β in two independent experiments. They are classified in six clusters according to the kinetics of their induction (see Fig. 1). GenBank™ accession numbers, gene names, and categories for classification are provided, as supplied by CLONTECH. Note that clusters 1-3 contain genes whose expression is significantly up-regulated by TGF-β after 30 min.

Because our main aim was to identify immediate-early targets of the Smad pathway, genes induced as early as 30 min post-TGF-β addition to the cultures were further studied in experiments in which on-going protein synthesis is blocked by cycloheximide. Analysis of gene expression by differential hybridization of cDNA arrays indicated that, at the 30-min time point, a similar set of 58 genes was induced by TGF-β in the presence or absence of cycloheximide, consistent with a transcriptional response not requiring on-going protein synthesis, such as expected from direct Smad targets. It should be noted that a broad increase in gene expression induced by cycloheximide alone was also observed (not shown), a phenomenon that has been described previously (41).

In the presence of curcumin, an inhibitor of JNK activity (42), only three genes among the 58 identified above and belonging to clusters 1-3 were not stimulated by TGF-β after 30 min: fibronectin, perlecan, and closely related low density lipoprotein receptor (43). Interestingly, it has been shown previously that fibronectin gene activation by TGF-β is a JNK-
In the presence of curcumin, an inhibitor of JNK activity (42), only three genes among the 58 identified above and belonging to clusters 1-3 were not stimulated by TGF-␤ after 30 min, fibronectin, perlecan, and closely related low density lipoprotein receptor (43). Interestingly, it has have been shown previously that fibronectin gene activation by TGF-␤ is a JNK- List of genes classified in clusters according to their induction kinetics by TGF-␤ 90/198 expressed genes derived from cDNA microarray (Atlas™ cell interaction array, Clontech catalog number 7746 -1) analysis were significantly (over 2-fold) upregulated by TGF-␤ in two independent experiments. They are classified in six clusters according to the kinetics of their induction (see Fig. 1). GenBank™ accession numbers, gene names, and categories for classification are provided, as supplied by Clontech. Note that clusters 1-3 contain genes whose expression is significantly upregulated by TGF-␤ after 30 min. Novel TGF-␤/Smad Targets dependent mechanism that does not require the Smad pathway (44), consistent with the inhibitory effect of curcumin observed in our experiments. Regarding perlecan, we have shown previously that its up-regulation by TGF-␤ is mediated by transcription factor NF-1 and not by Smads (45). Inversely, we found that genes previously identified as Smad targets, plasminogen activator inhibitor-1 (14), COL1A2 (17), and ␤5 integrin (23) (see Table I and the Introduction), as well as p21 (15), the latter detected in another set of experiments using different cDNA microarrays (catalog number 7741-1; CLONTECH, not shown), also belong to these early-induced gene clusters. Together, these data provide a strong argument for the specificity of our experimental approach and its appropriateness for the characterization of early-induced TGF-␤/Smad targets. Effects of TGF-␤ and Smad3 Overexpression on ECM Promoter/CAT Reporter Gene Constructs-We next tried to determine whether the rapid elevation of steady-state mRNA levels observed for several ECM-related genes upon TGF-␤ stimulation, as observed using differential cDNA array hybridization, resulted from transcriptional activation at the level of their promoter regions. We focused our attention on the 5Ј regulatory regions of COL1A2, COL3A1, COL6A1, COL6A3, and TIMP-1 genes, which all belong to clusters 1 and 2, corresponding to genes whose expression is enhanced at least two times by TGF-␤ within 30 min. In a first set of experiments, TGF-␤ responsiveness was examined. All promoter constructs tested responded to exogenous addition of TGF-␤ by a 3-5-fold elevation of their activity ( Fig. 2A). As a first approach to determine whether these promoters were sensitive to Smad activation downstream of TGF-␤, co-transfection experiments were performed in which each ECM promoter/CAT reporter construct was co-transfected with a Smad3 expression vector. As shown in Fig. 2B, Smad3 overexpression led to significant up-regulation of each of the promoters tested, indicating that the Smad pathway may be involved in the TGF-␤ effect. Dominant-Negative Smad3 and Inhibitory Smad7 Expression Block TGF-␤-induced ECM Promoter Activation-If the Smad cascade is responsible for the up-regulation of a given gene by TGF-␤, then the expression of either a dominantnegative Smad3 or inhibitory Smad7 should block its activation by TGF-␤. To test this hypothesis, all promoter constructs shown above to respond to both TGF-␤ and Smad3 overexpression (see Fig. 
2) were co-transfected with either Smad3ΔC or Smad7 expression vector. In both cases, up-regulation of promoter activity by TGF-β was blocked (Fig. 3), suggesting that the COL1A2, COL3A1, COL6A1, COL6A3, and TIMP-1 genes are immediate-early targets of the TGF-β/Smad3/4 signaling cascade. It may be extrapolated, although it remains to be investigated precisely, that most genes identified in clusters 1-3 whose mRNA steady-state levels were elevated at least two times by TGF-β after 30 min, but whose 5′-end regulatory regions were not analyzed functionally for Smad responsiveness, also represent direct Smad targets.

COL1A1 and COL5A2, but Not COL6A2, Are Direct Smad Targets

Because both COL6A1 and COL6A3 were characterized as direct Smad targets (see above), we wanted to determine whether COL6A2, which encodes the α(2) chain of heterotrimeric type VI collagen, was similarly up-regulated by TGF-β. Interestingly, when a 2.5-kilobase pair COL6A2 promoter fragment (24) was tested in an identical experimental system, we observed that it did not confer either TGF-β or Smad3 responsiveness (not shown), suggesting a differential regulation of the three genes encoding type VI collagen by TGF-β, where both COL6A1 and COL6A3 are coordinately regulated and are direct Smad targets, whereas COL6A2 is not. These data differ slightly from previous observations indicating specific up-regulation of COL6A3 but not COL6A1 or COL6A2 by TGF-β, when mRNA steady-state levels were detected after 48 h of stimulation (46).

mRNA steady-state levels of COL1A1, which encodes the α(1) chain of type I collagen, have been shown previously to be elevated by TGF-β (47). The corresponding promoter was found to be up-regulated by both exogenous addition of TGF-β and co-transfection of a Smad3 expression vector (not shown). In addition, its activation by TGF-β was blocked by both dominant-negative Smad3 and Smad7 overexpression (not shown), indicating that the COL1A1 promoter is also a Smad target. Interestingly, these data corroborate the previously described coordinate regulation of COL1A1 and COL1A2 (48-50) and indicate transcriptional coordination by TGF-β, orchestrated by Smad-dependent mechanisms. Identical results were obtained with the α(2) type V collagen gene (COL5A2) promoter, indicating that the latter is also a direct Smad target (not shown).

Absence of COL1A1, COL1A2, COL3A1, COL5A2, COL6A1, COL6A3, and TIMP-1 Promoter Transactivation by TGF-β in Smad3−/− Mouse Embryo Fibroblasts

To ascertain the role played by Smad3 in the transactivation of the ECM-related promoters identified above, transient cell transfection experiments were carried out using either wild-type or Smad3−/− mouse embryo fibroblasts (26,27). As expected from the data presented above, in which either dominant-negative Smad3 or inhibitory Smad7 expression vectors blocked the TGF-β effect, no TGF-β-driven transactivation of the COL1A1, COL1A2, COL3A1, COL5A2, COL6A1, COL6A3, and TIMP-1 promoters was observed in Smad3−/− mouse embryo fibroblasts. On the other hand, all promoters had their activity significantly increased by TGF-β in the corresponding wild-type mouse embryo fibroblasts (not shown). These results confirm the role played by Smad3 in mediating TGF-β transactivation of these ECM promoters.

The implication of TGF-β in fibrotic processes has long been suspected (3-6).
The demonstration that several genes encoding fibrillar collagens, COL1A1, COL1A2, COL3A1, and COL5A2, are up-regulated by TGF-β acting directly through the Smad pathway indicates that the latter is likely to play a key role in the development of tissue fibrosis. It also suggests that therapeutic approaches directed toward the Smad cascade may prove useful in the treatment of fibrotic disorders.

In conclusion, using cell matrix interaction-specific commercial cDNA microarrays, we have identified 58 immediate-early targets for TGF-β. Only three of these 58 genes had their activation blocked by curcumin, a selective inhibitor of JNK, a signaling pathway alternative to the Smad cascade downstream of the TGF-β receptors. These data suggest that the JNK pathway downstream of the TGF-β receptors likely affects very few early ECM-related target genes as compared with the Smad pathway. Using promoter/reporter gene constructs to analyze the transcriptional responsiveness to the TGF-β/Smad pathway of several genes identified by differential hybridization of cDNA arrays, we have formally identified six novel TGF-β/Smad immediate-early gene targets, namely COL1A1, COL3A1, COL5A2, COL6A1, COL6A3, and TIMP-1. Together with the identification of 49 other immediate-early TGF-β gene targets, this study represents a major leap forward in the identification of TGF-β/Smad targets.
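As a concrete illustration of the quantitation rule applied throughout this study (normalization against GAPDH followed by the arbitrary 2-fold cutoff at each time point), here is a minimal sketch. It is not the authors' pipeline, and all signal values and gene names in it are invented for illustration.

```python
# Sketch (not the authors' pipeline): GAPDH-normalized fold changes from
# array hybridization signals, applying the 2-fold cutoff and reporting the
# first time point of induction.  All numbers below are hypothetical.
import numpy as np

TIME_POINTS = (30, 60, 120, 240)  # minutes after TGF-beta addition

def fold_changes(control, treated, gapdh_control, gapdh_treated):
    """Normalize each signal to GAPDH in the same sample, then take ratios."""
    control = np.asarray(control, dtype=float) / gapdh_control
    treated = np.asarray(treated, dtype=float) / gapdh_treated
    return treated / control

def first_induction(fold_by_time, cutoff=2.0):
    """Return the first time point at which the fold change exceeds the cutoff."""
    for t, fc in zip(TIME_POINTS, fold_by_time):
        if fc >= cutoff:
            return t
    return None  # not significantly modulated

# Hypothetical genes: immediate-early, delayed, and unmodulated profiles.
signals = {
    "geneA": ([10, 11, 10, 10], [25, 40, 43, 45]),
    "geneB": ([8, 8, 9, 8], [9, 12, 22, 30]),
    "geneC": ([15, 14, 15, 15], [16, 17, 15, 16]),
}
for gene, (ctrl, trt) in signals.items():
    fc = fold_changes(ctrl, trt, gapdh_control=100.0, gapdh_treated=105.0)
    print(gene, np.round(fc, 2), "first induced at:", first_induction(fc), "min")
```

Under this rule, a gene such as "geneA" would fall into clusters 1-3 (induced at 30 min), "geneB" into the delayed clusters 4-6, and "geneC" would be classified as unmodulated.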
3,754.8
2001-05-18T00:00:00.000
[ "Biology" ]
Extracellular Vesicles: The Future of Diagnosis in Solid Organ Transplantation?

Solid organ transplantation (SOT) is a life-saving treatment for end-stage organ failure, but it comes with several challenges, the most important of which is the existing gap between the need for transplants and organ availability. One of the main concerns in this regard is the lack of accurate non-invasive biomarkers to monitor the status of a transplanted organ. Extracellular vesicles (EVs) have recently emerged as a promising source of biomarkers for various diseases. In the context of SOT, EVs have been shown to be involved in the communication between donor and recipient cells and may carry valuable information about the function of an allograft. This has led to an increasing interest in exploring the use of EVs for the preoperative assessment of organs, early postoperative monitoring of graft function, or the diagnosis of rejection, infection, ischemia-reperfusion injury, or drug toxicity. In this review, we summarize recent evidence on the use of EVs as biomarkers for these conditions and discuss their applicability in the clinical setting.

The Thriving Field of Solid Organ Transplantation

Solid organ transplantation (SOT) has developed from an experimental treatment in the 20th century to the standard of care for patients suffering from end-stage organ failure [1]. In 2021, 144,302 solid organs were transplanted in the European Union (EU) according to the Spanish National Transplant Organization, which represents a 19.1% increase from 2010 [2]. The exponential growth in the elderly population over the last decades, which requires cost-effective solutions to non-communicable diseases, plays a role in the lengthening of the transplantation waitlist [3]. The kidney is the most frequently transplanted organ and is the gold standard for renal replacement therapy, which provides better survival and quality of life than dialysis [4]. It is followed by the liver and heart, which are transplanted as the last resort in organ failure. Lung, pancreas, pancreas-kidney, and intestine transplants are common practice today; more novel transplants, such as cornea, pancreatic islet, or liver fraction transplants, are still being implemented in major hospitals [5]. Improvements in surgical techniques have led to more successful multiorgan transplants with fewer complications and reduced systemic injury. Additionally, immunosuppression therapy has been refined to minimize the host's immune response and improve the survival rate of transplanted organs. All these improvements have resulted in better graft and patient outcomes.

Cells have the capacity to selectively load some molecules into EVs, such as miRNAs found only in low concentrations in the cytoplasm, to modulate gene expression in distant cells [21]. The growing interest in EVs is partly due to the wide range of physiological functions they are involved in, including the immune response [22,23], tissue remodeling and repair [24-26], stem cell pluripotency [27,28], angiogenesis [29,30], and coagulation [31]. The immune response was one of the first functions discovered, when Raposo et al. showed that B cells secreted EVs to present antigens to T cells [32]. Other studies have shown that dendritic cells take up circulating EVs from other dendritic cells, and their cargo proteins are processed and presented as antigens, playing a role in immune regulation [33,34].
Among other pathological conditions they are involved in, cancer has received the most attention [35,36], but EVs also play a role in neurodegenerative diseases such as Alzheimer's [37] and cardiovascular diseases such as atherosclerosis [38], as well as infectious diseases such as HIV-1 infection [39].

EVs as Stable, Organ-Specific Biomarkers of Health and Disease

The potential of EVs as biomarkers in end-stage organ failure has been widely explored in recent years. Compared with soluble biomarkers, EVs provide the advantages of high stability in the extracellular medium, longer half-lives, and information about their parent and target cells [40]. In kidney diseases, EVs have been studied in the diagnosis of acute kidney injury, chronic kidney disease, renal transplantation, thrombotic microangiopathies, vasculitis, IgA nephropathy, nephrotic syndrome, urinary tract infection, cystic kidney disease, and tubulopathies [41-44]. They are also useful in diagnosing and grading the prognosis of heart failure [45,46]. Nonetheless, some of these conditions have been widely studied for many years; therefore, several soluble biomarkers already exist that are well integrated into clinical practice, as is the case for brain natriuretic peptides (BNP and NT-proBNP) in heart failure. In contrast, the utility of these biomarkers in graft function monitoring is not fully established and requires further study [47]. Regarding BNP, it has been found that it tends to remain high after transplantation, even with no evidence of left ventricular dysfunction [48]. Thus, the need for novel biomarkers to monitor the function and detect potential conditions affecting the integrity of an allograft has drawn attention to the field of EVs.

EVs from different cell types are present in nearly every body fluid, from plasma to synovial fluid, including breast milk, saliva, and urine [49]. In the field of organ transplantation, most studies use EVs from plasma, urine, or perfusion fluid, given their availability in the volumes needed for most isolation protocols [50,51]. Isolation methods include ultrafiltration, size exclusion chromatography, and immunoaffinity-based techniques [52,53], although most studies in SOT use ultracentrifugation, since it is a cost-effective technique that reaches high purity rates [51,54] (Figure 1). An additional benefit of EVs compared with soluble biomarkers is the possibility to discern whether they come from the donor or the recipient, shedding light on the underlying immune pathways at the time of the transplant. Donors' and recipients' EVs are most frequently identified through imaging flow cytometry based on the staining of specific mismatching HLA complexes [55].

Search Strategy

The current narrative review aims at summarizing the current knowledge on the use of extracellular-vesicle-derived components as biomarkers in a range of conditions associated with SOT. Original research studies were identified by searching the Medline (PubMed), Embase, Web of Science, and Google Scholar databases from their inception. The main search was run on 20 December 2022 and updated on 3 January 2023.
The keywords 'solid organ transplantation' (transplant, graft, kidney transplant, liver transplant, lung transplant, heart transplant, intestine transplant, pancreatic islets transplant, or corneal transplant), 'extracellular vesicles' (exosomes, exosomal, or microvesicles), and 'diagnosis' (complications, rejection, allograft rejection, acute rejection, chronic rejection, infection, drug toxicity, graft function, graft quality, or ischemia-reperfusion injury), or any of their synonyms listed in brackets, were typed in various combinations using Boolean operators. Queries were limited to studies involving mammalian subjects and an in vivo design, with full texts available. Hand searches of the reference lists of articles and relevant literature reviews were used to complement the computer search. The search focused solely on articles in English published in peer-reviewed journals to enhance methodological rigor. Previous reviews, position papers, and case reports or case series were excluded. Figure 2 summarizes some of the most relevant findings.

Preoperative and Postoperative Assessment of Donor Organ Function

Assessing the function of donor organs non-invasively at the time of transplantation is a crucial goal to increase graft survival rates, as well as to ensure donor safety in living donation. Traditionally, assessment of organ function has relied on laboratory parameters, such as glomerular filtration rates for kidneys, and imaging techniques. Some studies propose that the accuracy of the glomerular filtration rate, especially with near-normal kidney function, may be suboptimal; for this reason, new soluble biomarkers, such as cystatin C, are gaining importance [56]. This goal gains relevance as the number of organs from deceased donors (DDs), particularly donors after circulatory death (DCDs), increases. Many factors explain the poorer outcomes of organs transplanted from DDs versus living donors (LDs), such as the older donor age and the unplanned surgery; the less strict screening of graft function may also play a role. Moreover, pretransplant evaluation relies mostly on preexisting medical conditions and biopsies, which are not exempt from risk [57]. EVs provide several advantages in the preoperative evaluation of donor organs, as summarized by Ashcroft et al. [50]. In a kidney transplant, a study by Turco et al. found that specific populations of urinary EVs (uEVs) can indicate aging-related structural changes in living donor kidneys. Both the number of EVs and their cellular origin changed with conditions such as nephrosclerosis or nephron hypertrophy [58]. Another study, by Lozano-Ramos et al., also found that uEVs can be used to assess donor kidney function by analyzing their miRNA profile. They compared EVs from LDs and DDs and found no overall differences in miRNA profiles in normofunctioning grafts at one year. Interestingly, only miR-326, which targets the anti-apoptotic protein Bcl-2, was overexpressed in living donors [59]. Notably, EVs have also been isolated from the preservation fluid of organs from both DCDs and brain death donors (DBDs). These EVs, secreted by the renal endothelium, contain miRNAs that might be able to predict early or delayed graft function (DGF) [60].

EVs have also shown a role in the early assessment of postoperative graft function. Regarding kidney transplantation, research has been conducted to identify specific patterns of EVs in urine or blood related to DGF.
DGF is defined as acute kidney injury that occurs in the first week after kidney transplantation, necessitates dialysis intervention, and is associated with higher rates of acute rejection and shorter graft survival in the long term [61]. Some recent studies have found specific EV components with high prediction accuracy for DGF, as is the case for CD133 as an EV membrane marker [62], neutrophil gelatinase-associated lipocalin (NGAL) [63,64], and individual miRNAs [65]. Other works provide a more global picture through whole-proteome analysis [66] or EV-contained miRNA panels [67].

Differential diagnosis of acute graft dysfunction is another current challenge that could be addressed through EVs. Currently, a combination of laboratory tests (e.g., GFR and proteinuria), immunological findings (e.g., donor-specific antibodies), imaging techniques (e.g., Doppler ultrasound), and histological parameters is needed to differentiate between conditions such as rejection, infection, drug-induced damage, ischemic injury, recurrence of the primary disease, or surgery-related vascular or urinary tract complications. Matignon et al. proposed an mRNA signature in urinary cells to successfully differentiate some of these conditions, reducing the number of biopsies in these patients [68]; a similar uEV-based approach may be of use to this end.

Diagnosis of Graft Rejection

Despite the recent technical advances and better outcomes achieved, graft rejection remains the Achilles' heel of SOT [69-71]. Graft rejection can be defined as the loss of allograft function caused by the recipient's immune system. Acute rejection (AR) occurs within the first few weeks or months after transplantation and is caused by a rapid and strong immune response to the transplanted tissue [13]. This type of rejection is usually prevented and treated with immunosuppressive drugs [72]. However, even short episodes of graft rejection can have long-term consequences on a liver graft, including an increased risk of failure and mortality. Chronic rejection, which occurs after the first year post-transplant, is less common and responds poorly to treatment, leading to permanent organ damage [13]. Depending on the immunological mechanisms, rejection can be divided into antibody-mediated (ABMR) or T-cell-mediated (TCMR) rejection, with different treatment strategies and outcomes [69]. Hence, there is an urgent need for improved methods of immune response monitoring in transplant recipients.

Despite all available laboratory parameters and imaging techniques, histological examination remains the gold standard for the diagnosis of rejection. Thus, serial surveillance biopsies are the standard of care in heart and lung transplantation to enable early therapeutic intervention; kidney biopsies may also be needed if there is a diagnostic concern. Nonetheless, biopsies are associated with a risk of bleeding and damage to the allograft or the surrounding organs [73]. Regarding pancreatic islets, neither biopsy nor imaging is available; therefore, islet function is monitored mostly through C-peptide concentrations and glycemia [74]. Therefore, the development of non-invasive biomarkers to detect immune-mediated allograft injury is required for clinicians to tailor immunosuppression and intervene early, ideally before any visible organ dysfunction occurs [15].

Kidney graft rejection is the main cause of death-censored graft failure at any time following transplantation [75].
According to some series, the incidence of ABMR increases over time at a rate of 1.1% per year, while TCMR is rare 6 years after transplantation [75]. The differential diagnosis of graft rejection in kidney transplants is an ongoing challenge, since other conditions, such as drug toxicity or infections, may simulate rejection, particularly in the long term. The current diagnosis of chronic allograft failure through serial biopsies poses a problem since, aside from the well-known risks of bleeding and infection, the percentage of inconclusive samples is considerable [76].

Most authors study uEVs in the quest for biomarkers of AR. Proteomic analysis has provided several candidates, such as cystatin C (CST3), lipopolysaccharide-binding protein (LBP) [77], tetraspanin 1 (TSPAN1), and hemopexin (HPX) [78]. Some studies have compared these proteins to soluble urine biomarkers and identified some that are specific to EVs [79]. An mRNA panel has been shown to outperform laboratory kidney-function-based methods in the early diagnosis of AR while still being able to differentiate between its immune mechanisms [80]. Urinary EVs from T cells are also useful, since an increase in the membrane marker CD3 has shown specificity for TCMR [81]. Studies on plasma-derived EVs have identified some EV subpopulations linked to AR, which can also be used to monitor responses to treatment [82]. Others have focused on mRNAs and have found a combination of four genes that can accurately predict ABMR [83].

As for chronic kidney rejection, several EV-based biomarkers are currently under study. While some studies can identify this condition based on a single biomarker in uEVs [84], others have proposed a combination of proteins to this same end [85]. Interestingly, uEVs of renal origin can differentiate chronic rejection from other confounding conditions, such as calcineurin inhibitor toxicity, in which biopsies and laboratory assays are frequently needed [84]. Membrane markers on EVs of immune origin, such as those from T helper cells, can also shed light on the underlying cause of graft failure, as shown by Yang et al. [86].

Liver and pancreas rejection studies mostly make use of plasma-derived EVs. In models of liver rejection, the protein galectin-9 allowed an accurate diagnosis of TCMR, and several miRNAs were found to be over- (miR-223 and let-7e-5p) and under-expressed (miR-199a-3p) in TCMR [87]. The only study performed in the islet field is based on a human-into-mouse xenogeneic pancreatic islet transplant model. The authors found that mice with AR showed a decrease in donor EVs and an increase in T-cell EVs from the recipient. The potential of donor EVs as biomarkers of rejection had previously received attention in a kidney transplant model, where CD9+ HLA-A3+ EVs from the donor increased only in recipients with no allograft dysfunction [88]. Furthermore, they found four proteins that were overexpressed in mice with induced AR compared with controls. The clinical interest of these findings is reinforced by the fact that these biomarkers precede classic manifestations of organ dysfunction, such as hyperglycemia [89].

The incidence of heart transplant rejection has steadily dropped in recent years, from 30.5% in 2004-2006 to 24.1% in 2010-2015, from discharge to 1 year of follow-up [90]. Nonetheless, these remain among the highest rejection rates in SOT. The diagnosis of rejection relies on endomyocardial biopsy (EMB) as well as donor-specific antibodies.
In addition to the usual risks, EMB increases the risk of tricuspid regurgitation [91]. Studies have found candidate soluble biomarkers in plasma, such as microRNAs, mRNA profiles, and circulating cell-free DNA; however, these are only stable for a short time in plasma [92]. For this reason, EVs have emerged as tools for rejection monitoring. Preclinical studies in mice have shown that simple measures, such as total EV concentration in plasma, can accurately predict heart AR at an early stage, at which biopsies still show insignificant or grade 0R changes [93]. In humans, EV-based models have been able to diagnose AR and its two immunological variants, ABMR and TCMR, with adequate sensitivity and specificity. Castellani et al. based their model mostly on membrane proteins [94], while Kennel et al. performed a proteomic analysis [95].

In lung transplants, recent studies have succeeded in the early diagnosis of both acute lung rejection and the most common manifestation of chronic rejection, bronchiolitis obliterans syndrome (BOS). BOS, a chronic obstructive pulmonary disease (COPD)-like clinical pattern, affects about 50% of transplanted patients within 5 years [96,97] and accounts for more than 30% of the mortality rate after this period [97]. Early diagnosis and treatment of AR can prevent it from evolving into chronic rejection; however, diagnosis relies on CT scans and lung biopsy, which have limited sensitivity [98]. Hence, intense surveillance for AR is limited, reducing early recognition. Some recent studies have investigated the use of EVs in bronchoalveolar lavage fluid (BALF) to generate a molecular fingerprint of AR. Gregson et al. performed an mRNA analysis and found a transcriptomic signature that accurately characterized patients with AR [99]. Another work, by Gunasekaran et al., analyzed proteins and miRNAs in both plasma-derived EVs and BALF EVs in healthy recipients and compared them with those of lung transplant patients with AR or BOS. They found that donor HLA molecules and lung-associated self-antigens, such as collagen-V (Col-V) and K alpha 1 tubulin (Kα1T), were overexpressed in both conditions and could lead to an earlier diagnosis by up to 6 months. Several EV-contained miRNAs related to inflammation and endothelial activation, as well as to the expression of certain costimulatory molecules, could accurately identify these conditions [100]. In a more recent study by the same group, plasma-derived EVs from BOS patients were isolated, and their proteins and transcription factors were analyzed, further expanding the candidate biomarkers for BOS diagnosis. Moreover, when healthy mice were treated with the aforementioned EVs, they developed a proinflammatory phenotype consisting of antibodies against self-antigens and increased IL-17 and IFN-γ, and decreased IL-10. Thus, the authors suggest that EVs produced during rejection have immune-boosting qualities and play a significant part in chronic rejection after lung transplantation.

In this line, some studies have also focused on the role of native EVs in the pathophysiology of rejection [22,54]. It is now known that allograft recognition does not always occur through the direct recognition of donor cells. Instead, the immune response leading to graft rejection can be triggered by EVs carrying donor MHC molecules and peptides.
Studies have shown that host antigen-presenting cells (APCs) in lymph nodes can present EVs bearing donor MHC I and II molecules, which initiates T cell activation after skin and heart transplants [101]. This suggests that host APCs can acquire donor MHC molecules present on EVs secreted by donor cells and, hence, that EVs would be responsible for determining the fate of the allograft through a semi-direct pathway. Other studies have described the role of EVs in allograft recognition through an indirect pathway, whereby the EV-presented antigen is taken up and processed by B lymphocytes before being presented to the T cell [102].

Diagnosis of Ischemia-Reperfusion Injury

Ischemia-reperfusion injury (IRI) is a condition affecting most transplanted organs, particularly when they are derived from donations after circulatory death (DCDs), due to the longer times of warm ischemia. However, it also occurs whenever an organ suffers ischemia of any cause, as in myocardial infarction. The pathophysiology of IRI is complex: while the imbalance between metabolic supply and demand causes tissue hypoxia and microvascular dysfunction, subsequent reperfusion boosts innate and adaptive immune responses and activates the cell death machinery [103]. EVs garner interest for the differential diagnosis of IRI versus other causes of DGF in the postoperative setting. Sonoda et al. propose aquaporin 1 (AQP1) as an early negative biomarker of IRI; according to their study, the decrease in AQP1 in uEVs may be a consequence of both decreased release and decreased production, and it may be useful for diagnosing IRI within the first 6 h, before changes in renal function parameters are observed [104]. Nonetheless, other studies propose AQP1 reduction as a constant phenomenon in kidney transplantation [105]. Some of these biomarkers stand as potential targets to minimize IRI-related damage; for instance, Li et al. found that EV-contained miR-23a, which increases in IRI in response to hypoxia-inducible factor 1, could be targeted to limit inflammation of the renal parenchyma [106]. Similar results were obtained with miR-374b-5p [107].

Diagnosis of Immunosuppressive Drug Toxicity and Graft Infection

Immunosuppressive drugs are responsible for the remarkable increase in graft survival over the last decades [72]. Despite their long-known side effects, such as nephrotoxicity, calcineurin inhibitors (CNIs) remain the cornerstone of immunosuppression in kidney transplantation. Chronic CNI toxicity (CNIT) can result in vascular dysfunction, interstitial fibrosis, and tubular atrophy, compromising the integrity of the graft [108]. Many factors account for nephrotoxicity, the most important of which is drug dosing; however, a non-negligible interindividual variability exists, since side effects have been reported even with low doses. For this reason, drug levels in plasma and serial biopsies are losing ground in favor of non-invasive strategies for pharmacokinetic monitoring. Proteomic and miRNA analyses of urinary EVs in kidney transplantation have shed some light on the question, according to some recent studies. Carreras-Planella et al. identified members of the uroplakin family as predictors of CNIT versus healthy kidneys and kidney fibrosis of other causes [109]. Costa de Freitas et al. used a similar approach to correlate uEV-contained miRNAs with tacrolimus levels [110]. This is in line with previous studies demonstrating the potential of EVs to monitor immunosuppressive treatment in autoimmune diseases [111].
Post-transplant infection is one of the most feared complications, given the high morbidity and mortality it accounts for both in the short and the long term [112]. Early diagnosis of infection may be delayed by the atypical clinical manifestations of transplanted patients under immunosuppressive regimes. Moreover, infection screening through laboratory parameters generally requires biopsy confirmation, as is the case for BK polyomavirus (BKV) in kidney transplant recipients [113]. Although its incidence has dropped in the last decades, BKV is still a prevalent cause of nephropathy, affecting up to 10% of kidney recipients and causing allograft failure in 10 to 80% of these [114]. Hence, novel biomarkers of infection in SOT are currently under development, which could help to initiate prompt treatment and achieve an adequate balance in immunosuppressive therapies [115]. Kim et al. proved that, aside from human miRNAs, viral miRNAs (miR-B1-5p and miR-B1-3p) could also be used as biomarkers of infection with high sensitivity and specificity [116]. These findings were supported by a previous study on kidney biopsies, wherein the same viral miRNAs were found [117]. In lung transplants, the potential of EVs goes beyond that of a diagnostic tool; they also represent the mechanism through which infections relate to long-term graft dysfunction and rejection, as proven by Gunasekaran et al. [118] (Table 1).

Table 1. Selected EV-based biomarker studies.
- Lung transplant recipients (AR and BOS): EV-contained donor HLA and collagen V were significantly overexpressed in AR and BOS compared with healthy patients (p < 0.05). Collagen V was detected 3 months before AR and 6 months before BOS diagnosis. Differentially expressed immunoregulatory miRNAs were found for AR (miR-92a and miR-182) and BOS (the previous ones plus miR-142-5p and miR-155) compared with controls.
- Human transplant recipients with vs. without ABMR vs. TCMR, proteomic analysis: a total of 45 EV-derived proteins were identified that differentiate 3 groups: control/heart failure, heart transplant without rejection, and ABMR or TCMR. A total of 15 of them were differentially expressed between the 2 last groups (p < 0.05). Most of these proteins play a role in the immune response (complement activation, adaptive immunity, and coagulation).
- Kidney transplant recipients, proteomic analysis: members of the uroplakin and plakin families were significantly overexpressed in the group with calcineurin inhibitor toxicity. CTSZ, RAB8A, and SERPINC1 were significantly overexpressed in patients with toxicity compared with normally functioning ones. Carreras-Planella L. et al. [109].
- Human transplant recipients under various immunosuppressive therapies including tacrolimus, miRNA analysis: expression of miR-155-5p and miR-223-3p showed significant correlation with tacrolimus dose and could be used to monitor toxicity; miR-223-3p also correlated with serum creatinine. Costa de Freitas R. et al. [110].
All transplant recipients received allogenic grafts. All changes in the reported outcomes were measured in EVs from the aforementioned origins.

Opportunities and Future Directions

The use of EVs as diagnostic tools in SOT is a rapidly growing field of research. Evidence suggests that EVs can provide valuable information about the function of transplanted organs, allowing for early detection of complications such as rejection or infection. As research progresses, EVs are likely to become widespread biomarkers, providing important benefits for patients and physicians alike. However, at least three challenges must be addressed before they are fully implemented in the clinical setting.
First, standardizing EV isolation and characterization procedures is necessary to generate homogeneous research that can be compared and meta-analyzed. This is particularly applicable to urinary EVs, as highlighted by the International Society for Extracellular Vesicles (ISEV). Most studies on kidney transplants use uEVs, since they are easily available and non-invasive, and urine is already routinely collected to measure renal function parameters. However, current investigations on uEVs should address certain biases, such as the variable uEV concentrations or the wide range of isolation methods available, which affect the reproducibility of the studies. As possible solutions, the normalization of uEV concentrations to urine dilution and the use of flow cytometry to identify specific uEV populations have been proposed [122]. Additionally, it is important to move from the study of single biomarkers to that of full diagnostic panels, which are cost-effective and feasible for clinical use. Thus, there is a manifest need for clinical studies that validate the use of EVs as efficient biomarkers in SOT, through their comparison with traditional biomarkers or diagnostic criteria. This is aligned with the 2018 insight paper from the ISEV, which remarks on the need to evolve from basic to applied research that takes full advantage of the potential of EVs. Finally, future studies should aim not only at diagnosing a certain condition but also at solving frequent issues of clinical practice. For instance, instead of looking for biomarkers of rejection, future studies should rather look for biomarkers that establish the differential diagnosis of graft dysfunction and therefore help decision-making. Other clinical situations where EVs could be of help are the monitoring of responses to immunosuppressive or antimicrobial therapy. In general, a thorough study design that includes control patients who resemble those in the clinical setting would be key to this goal. Addressing these challenges is crucial for ensuring that extracellular vesicles realize their full potential as a diagnostic tool in solid organ transplantation.
6,094.2
2023-03-01T00:00:00.000
[ "Medicine", "Biology" ]
Method for Determination of $|U_{e3}|$ in Neutrino Oscillation Appearance Experiments

We point out that determination of the MNS matrix element |U_e3| = s_13 in long-baseline ν_μ → ν_e neutrino oscillation experiments suffers from a large intrinsic uncertainty due to the unknown CP violating phase δ and the sign of Δm²_13. We propose a new strategy for accurate determination of θ_13: tune the beam energy at the oscillation maximum and do the measurement both in the neutrino and antineutrino channels. We show that it automatically resolves the problem of parameter ambiguities which involves δ, θ_13, and the sign of Δm²_13.

I. INTRODUCTION

With the accumulating evidence for neutrino oscillation in the atmospheric [1], the solar [2] and the accelerator neutrino experiments [3], it is now one of the most important subjects in particle physics to explore the full structure of neutrino masses and the lepton flavor mixing. In particular, it is a challenging task to explore the relatively unknown (1-3) sector of the MNS matrix [4], namely θ_13, the sign of Δm²_13 and the CP violating phase δ. The only available information to date is the upper bound on θ_13 from the reactor experiments [5] and an indication for a positive sign of Δm²_13 from neutrinos from supernova 1987A [6]. Throughout this paper, we use the standard notation of the three-flavor MNS matrix, in particular U_e3 = s_13 e^(−iδ), and define the neutrino mass-squared differences as Δm²_ij ≡ m²_j − m²_i.

The long-baseline ν_μ → ν_e neutrino oscillation experiment is one of the most promising ways of measuring θ_13. In particular, it is expected that the JHF-Kamioka project, which utilizes a low energy superbeam, can go down to the sensitivity sin²2θ_13 ≃ 6 × 10⁻³ [7]. A similar sensitivity is expected for the proposed CERN → Frejus experiment [8]. Although a far better sensitivity is expected to be achieved in neutrino factories [9], it is likely that the low energy conventional superbeam experiments are the ones which can start much earlier. Therefore, it is of great importance to examine how accurately θ_13 can be determined in this type of experiment.

In this paper, we point out that determination of sin²2θ_13 using only the neutrino channel suffers from a large intrinsic uncertainty, at the ±(30-70)% level, due to the unknown CP violating phase δ and the undetermined sign of Δm²_13. It should be noted that this intrinsic uncertainty exists on top of the usual experimental (statistical and systematic) errors. To overcome the problem of the intrinsic uncertainty, we suggest a new strategy for the determination of θ_13 by doing appearance experiments utilizing both antineutrino and neutrino beams. Our proposal is a very simple one, at least at the conceptual level: tune the beam energy to the oscillation maximum and run the appearance experiments in both ν̄_μ → ν̄_e and ν_μ → ν_e channels. We will show that it not only solves the problem of intrinsic uncertainty mentioned above but also resolves the (δ − θ_13) two-fold ambiguity discussed in Ref. [10]. Furthermore, it does not suffer from a possible ambiguity due to the unknown sign of Δm²_13, the problem first addressed in Refs. [11,12].¹ We are aware that there are combined ambiguities to be resolved (even ignoring experimental uncertainties) to determine a complete set of parameters including δ, θ_13, and the sign of Δm²_13, which are as large as four-fold [14]. We take the experimentalists' approach to the ambiguity problem and try to resolve the ambiguities one by one, rather than developing a mathematical framework for the simultaneous solutions. The most important issue here is again to accurately determine θ_13, because then all the combined ambiguities will be automatically resolved, as we will show below.

Let us clarify how large an uncertainty is expected in the determination of θ_13, due to our ignorance of δ, in the ν_μ → ν_e appearance experiment. To achieve an intuitive understanding of the issue we use the CP trajectory diagram introduced in previous papers [11,12]. Plotted in Fig. 1 are the CP trajectory diagrams in bi-probability space spanned by P(ν) ≡ P(ν_μ → ν_e) and P(ν̄) ≡ P(ν̄_μ → ν̄_e), averaged over a Gaussian distribution (see next paragraph), with three values of θ_13: sin²2θ_13 = 0.05 and 0.02 for the Δm²_23 > 0 case and sin²2θ_13 = 0.064 for the Δm²_23 < 0 case. Since we assume |Δm²_23| ≫ |Δm²_12|, the sign of Δm²_23 is identical to that of Δm²_13. (The fourth trajectory, with sin²2θ_13 = 0.04, is for our later use.) The values of sin²2θ_13 for the second and the third trajectories are chosen so that the maximum (minimum) value of P(ν) of the second (third) trajectory coincides with about 1.1%, the minimum value of P(ν) of the first trajectory. The remaining mixing parameters are taken as the best-fit values of the Super-Kamiokande (SK) and K2K experiments [15], |Δm²_23| ≡ Δm²_atm = 3 × 10⁻³ eV², and as typical ones for the large mixing angle (LMA) MSW solar neutrino solution, as given in the caption of Fig. 1. While we focus in this paper on the JHF experiment with a baseline length of 295 km, the JAERI-Kamioka distance, many of the qualitative features of our results remain valid also for the CERN-Frejus experiment.

¹ Our new strategy and these results were announced in the 8th Tokutei-RCCN workshop [13].

Throughout this paper we take the neutrino energy
We take an experimentalists' approach to the ambiguity problem and try to resolve the ambiguities one by one, rather than developing a mathematical framework for the simultaneous solutions. The most important issue here is again to accurately determine $\theta_{13}$, because then all the combined ambiguities will be automatically resolved, as we show below. Let us clarify how large an uncertainty is expected in the determination of $\theta_{13}$ due to our ignorance of $\delta$ in the $\nu_\mu \to \nu_e$ appearance experiment. To achieve an intuitive understanding of the issue we use the CP trajectory diagram introduced in previous papers [11,12]. Plotted in Fig. 1 are the CP trajectory diagrams in the bi-probability space spanned by $P(\nu) \equiv P(\nu_\mu \to \nu_e)$ and $P(\bar\nu) \equiv P(\bar\nu_\mu \to \bar\nu_e)$, averaged over a Gaussian energy distribution (see the next paragraph), with three values of $\theta_{13}$: $\sin^2 2\theta_{13} = 0.05$ and $0.02$ for the $\Delta m^2_{23} > 0$ case and $\sin^2 2\theta_{13} = 0.064$ for the $\Delta m^2_{23} < 0$ case. Since we assume $|\Delta m^2_{23}| \gg |\Delta m^2_{12}|$, the sign of $\Delta m^2_{23}$ is identical to that of $\Delta m^2_{13}$. (The fourth trajectory, with $\sin^2 2\theta_{13} = 0.04$, is for our later use.) The values of $\sin^2 2\theta_{13}$ for the second and third trajectories are chosen so that the maximum (minimum) value of $P(\nu)$ of the second (third) trajectory coincides with about 1.1%, the minimum value of $P(\nu)$ of the first trajectory. The remaining mixing parameters are taken as the best-fit values of the Super-Kamiokande (SK) and K2K experiments [15], $|\Delta m^2_{23}| \equiv \Delta m^2_{atm} = 3 \times 10^{-3}$ eV$^2$, and typical values for the large mixing angle (LMA) MSW solar neutrino solution, as given in the caption of Fig. 1. While we focus in this paper on the JHF experiment with a baseline length of 295 km, the JAERI-Kamioka distance, many of the qualitative features of our results remain valid also for the CERN-Frejus experiment. (Footnote 1: Our new strategy and these results were announced in the 8th Tokutei-RCCN workshop [13].) Throughout this paper we take the neutrino energy distribution to be of Gaussian form with a width of 20% of the peak energy. Of course, this does not represent in any quantitatively accurate manner the effects of the realistic beam energy spread and the energy-dependent cross sections, but we feel it is sufficient to make the point of this paper clear, illuminating our new strategy toward accurate determination of $\theta_{13}$. Suppose that a measurement of appearance events gives us the value of the oscillation probability $P(\nu) \simeq 1.1\%$. Then it is obvious from Fig. 1 that a full range of values of $\sin^2 2\theta_{13}$ from 0.02 to 0.064 is allowed (even if we ignore experimental errors) due to our ignorance of the CP violating phase $\delta$ and the sign of $\Delta m^2_{13}$ (see footnote 2). If we know that the sign is positive, for example, the uncertainty region would be limited to 0.02-0.05, which is still large. Let us estimate in a systematic way the uncertainty in the determination of $\theta_{13}$ due to the CP violating phase $\delta$. To do this we rely on perturbative formulae for the oscillation probabilities $P(\nu)$ and $P(\bar\nu)$ which are valid to first order in the matter effect [18]. With a relatively short baseline of $\sim 300$ km or less, the first-order formula gives reasonably accurate results.
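The Gaussian energy averaging used for the trajectories in Fig. 1 can be illustrated with a minimal numerical sketch. The probability function below is a generic two-flavor vacuum formula standing in for the paper's full expression, the 20% width is treated as a relative standard deviation (the text does not specify sigma versus FWHM), and all parameter values are illustrative assumptions:

```python
# Minimal sketch of Gaussian beam-energy averaging (illustrative values only).
import numpy as np

def gaussian_average(prob, E_peak, rel_sigma=0.2, n=401):
    # sample +/- 3 sigma around the peak energy and weight by the Gaussian
    E = np.linspace(E_peak * (1 - 3 * rel_sigma), E_peak * (1 + 3 * rel_sigma), n)
    w = np.exp(-0.5 * ((E - E_peak) / (rel_sigma * E_peak)) ** 2)
    return np.sum(w * prob(E)) / np.sum(w)

# generic two-flavor vacuum appearance probability: E in GeV, L in km, dm2 in eV^2
dm2, L, s2_2th = 3.0e-3, 295.0, 0.05
P = lambda E: s2_2th * np.sin(1.267 * dm2 * L / E) ** 2

print(gaussian_average(P, E_peak=0.7))   # averaged probability near the osc. maximum
```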
Ignoring $O(\sin^3 2\theta_{13})$ terms, the first-order formula can be written, with $L$ and $E$ denoting the baseline length and the neutrino energy respectively, in the form $P(\nu/\bar\nu) = P_\pm \sin^2 2\theta_{13} + 2Q \sin 2\theta_{13} \cos(\delta \pm \Delta_{13}/2)$ (1), where $a = \sqrt{2} G_F N_e$, which enters the coefficients $P_\pm$ and $Q$, denotes the index of refraction in matter, with $G_F$ the Fermi constant and $N_e$ a constant electron number density in the earth. (Footnote 2: It may be worth remarking the following: low-energy neutrino oscillation experiments with superbeams are primarily motivated by the search for the place where CP violating effects are comparatively large and easiest to measure [16]; see e.g. [17] for works preceding [16]. Unfortunately, this large effect of $\delta$ is the very origin of the above-mentioned large intrinsic uncertainty in the determination of $\theta_{13}$.) The $\pm$ signs in $P_\pm$ refer to the neutrino and the antineutrino channels, respectively. The maximum and the minimum of $P(\nu)$ for given mixing parameters, neutrino energy and baseline are obtained at $\cos(\delta + \Delta_{ij}/2) = \pm 1$. The allowed region of $\sin 2\theta_{13}$ for a given value of $P(\nu)$, assuming blindness to the sign of $\Delta m^2_{13}$, then follows from these extrema. Fig. 2 presents the allowed region of $\sin^2 2\theta_{13}$ for a given value of the measured oscillation probability $P(\nu)$. The size of the intrinsic uncertainty must be compared with the statistical and systematic errors expected in the actual experiments. A detailed estimation of the experimental uncertainties has been performed for the JHF experiment by Obayashi [19], assuming the off-axis beam (OA2) [7] and five years of running. The results depend strongly upon $\theta_{13}$. We implemented these errors in Fig. 2b, which is drawn with an energy similar to the peak energy of the OA2 beam ($\sim 780$ MeV). We should note, however, an important difference between Fig. 2 and the plot in [19]: the abscissa of Fig. 2 is the Gaussian-averaged probability, whereas the corresponding axis of the plot in [19] is the number of events. We therefore tentatively fixed the location of the errors in Fig. 2 so that the center of the error bars coincides with the center of the allowed band of $\sin^2 2\theta_{13}$. Keeping this difference in mind, we still feel it informative for the readers to display the expected experimental uncertainties in Fig. 2b for comparison. The intrinsic uncertainty due to $\delta$ and the undetermined sign of $\Delta m^2_{13}$ is thus larger than the expected experimental errors in most of the sensitivity region for $\theta_{13}$ in the experiment. We note that the experimental errors are dominated by the statistical one in phase I of the JHF-SK neutrino project and hence should improve by a factor of $\sim 10$ in two years of running in phase II with a megaton water Cherenkov detector [7]. Thus, the intrinsic uncertainty completely dominates over the experimental ones if one stays only in the neutrino channel. It is tempting to seek a better resolution by adding more information. A natural candidate along this line of thought is to perform an additional appearance experiment, $\bar\nu_\mu \to \bar\nu_e$, using an antineutrino beam. While it strengthens the constraints, it does not completely solve the uncertainty problem even if we ignore the experimental errors. This is due to the inherent two-fold ambiguity which exists in the simultaneous determination of $\delta$ and $\theta_{13}$, as pointed out by Burguet-Castell et al. [10].
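The size of this intrinsic single-channel band can be made concrete with a short numerical sketch that follows only the algebraic structure of Eq. (1) as reconstructed above; the coefficients P_coef and Q_coef below are assumed placeholders, not values fitted in the paper:

```python
# Hedged sketch: scan the unknown CP phase delta and solve Eq. (1) for
# x = sin(2*theta13) at a fixed measured probability.
import numpy as np

def allowed_band(P_meas, P_coef, Q_coef, Delta13, neutrino=True):
    sign = 1.0 if neutrino else -1.0
    vals = []
    for delta in np.linspace(0.0, 2.0 * np.pi, 721):
        b = 2.0 * Q_coef * np.cos(delta + sign * Delta13 / 2.0)
        # P_coef*x^2 + b*x - P_meas = 0; keep the physical root 0 <= x <= 1
        x = (-b + np.sqrt(b * b + 4.0 * P_coef * P_meas)) / (2.0 * P_coef)
        if 0.0 <= x <= 1.0:
            vals.append(x * x)           # store sin^2(2*theta13)
    return min(vals), max(vals)

lo, hi = allowed_band(P_meas=0.011, P_coef=0.27, Q_coef=0.012, Delta13=np.pi)
print(f"sin^2 2theta13 in [{lo:.3f}, {hi:.3f}] for one sign of dm2_13")
```

With these placeholder numbers the single-channel band already spans roughly a factor of 2.5, comparable in spirit to the 0.02-0.064 range quoted above.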
While the discussion of Burguet-Castell et al. anticipates applications to neutrino factories, the issue of the two-fold ambiguity is in fact even more relevant to our case because of the large effect of $\delta$, as we saw in the previous section. The existence of the two-fold $(\theta_{13} - \delta)$ ambiguity is easy to recognize using the CP trajectory diagram. We show in Fig. 1, by a dash-dotted curve, another trajectory drawn with $\sin^2 2\theta_{13} = 0.04$, which has two intersection points with the solid-curve trajectory with $\sin^2 2\theta_{13} = 0.05$. Suppose that measurements of the neutrino and antineutrino oscillation probabilities $P(\nu)$ and $P(\bar\nu)$ had resulted in either one of the two intersection points. Then it is clear that we have two solutions: for positive $\Delta m^2_{13}$, $(\sin^2 2\theta_{13}, \delta) = (0.04, 0.65\pi)$ and $(0.05, 0.35\pi)$ for the upper intersection point, and $(\sin^2 2\theta_{13}, \delta) = (0.04, 1.4\pi)$ and $(0.05, 1.7\pi)$ for the lower intersection point. A similar two-fold $(\theta_{13} - \delta)$ ambiguity also exists for negative $\Delta m^2_{13}$, which however is not shown in Fig. 1. In other words, we can draw two different CP trajectories which pass through a point determined by given values of $P(\nu)$ and $P(\bar\nu)$. This is the simple pictorial understanding of the $(\theta_{13} - \delta)$ two-fold ambiguity which was uncovered and analyzed in detail in [10]. We will show in the next two sections that the ambiguity is automatically resolved by our proposal. IV. NEW STRATEGY FOR DETERMINATION OF $\theta_{13}$ We now present our new strategy for determination of $\theta_{13}$, which avoids the problem of the large intrinsic uncertainty. It is intuitively obvious from the CP trajectory diagram displayed in Fig. 1 that if one can tune the experimental parameters so that the radial thickness of a trajectory (which measures the $\cos\delta$ term in Eq. (1)) vanishes, then the two-fold ambiguity is completely resolved. This occurs if we tune the beam energy to the oscillation maximum so that $\Delta_{13} = \pi$, as is clear from Eq. (1). We explain below in more detail how this occurs and then discuss by what kind of quantity $\theta_{13}$ is determined. In the following discussion we assume that the mixing parameters other than $\theta_{13}$ and $\delta$ are known. We note that the oscillation probabilities (1) can be written as $P(\nu/\bar\nu) = C_\pm + A\cos\delta \mp B\sin\delta$ (5), with $A = 2Q \sin 2\theta_{13} \cos(\Delta_{13}/2)$, $B = 2Q \sin 2\theta_{13} \sin(\Delta_{13}/2)$, and $C_\pm = P_\pm \sin^2 2\theta_{13}$ in the present approximation. It is easy to show from this expression that the CP trajectory diagram is elliptic in the approximation in which we are working [11]. (In fact, this is the case for all the known perturbative formulae.) Given (5), it is simple to observe that the CP trajectory is a straight line at the oscillation maximum, $A = 0$; the equation obeyed by the oscillation probabilities is $P(\nu) + P(\bar\nu) = C_+ + C_-$. Moreover, the first-order matter effect cancels in $C_+ + C_-$, leaving the vacuum piece of $P_\pm$. Therefore, the slope of the straight-line CP trajectory is the same as that in vacuum, and the matter effect affects only the maximum and minimum points of the straight line in the $P(\nu)$ and $P(\bar\nu)$ coordinates. Thus, once a set of values of $P(\nu)$ and $P(\bar\nu)$ is given by the experiments, one can determine $C_+ + C_-$ as the segment of the "CP straight line" in the diagram, and hence $\sin^2 2\theta_{13}$, to which $C_+ + C_-$ is proportional. Measurement of $P(\nu)$ and $P(\bar\nu)$ at the oscillation maximum therefore implies determination of $\theta_{13}$ without suffering from any uncertainties due to the unknown value of $\delta$ and the sign of $\Delta m^2_{13}$. Fig. 3 shows that the thinnest CP trajectory is obtained at a beam energy (in MeV) slightly higher than the one that sits exactly on the oscillation maximum.
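The resulting estimator can be sketched in one line, reusing the placeholder coefficients of the earlier sketch: at the oscillation maximum the cos δ terms of the two channels cancel in the sum, so sin² 2θ13 follows directly from the measured pair of probabilities.

```python
# Hedged sketch: at the oscillation maximum, P(nu) + P(nubar) equals
# (P_+ + P_-) * sin^2(2*theta13) to first order, independent of delta and
# of the sign of dm2_13.  P_plus and P_minus remain assumed placeholders.
def sin2_2theta13_from_sum(P_nu, P_nubar, P_plus=0.27, P_minus=0.29):
    return (P_nu + P_nubar) / (P_plus + P_minus)

print(sin2_2theta13_from_sum(0.012, 0.016))   # illustrative inputs -> 0.05
```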
The slight energy shift seen in Fig. 3 arises because the contributions from the higher- and lower-energy parts around the peak energy do not completely cancel, owing to the extra $1/E$ factor in the $\cos\delta$ term for a symmetric Gaussian beam width. Thus, we need a slightly higher energy to obtain the thinnest trajectory. It should be noted, however, that this feature depends strongly upon the specific beam shape, and will also be affected by the fact that the cross section has an extra, approximately linear $E$ dependence. The slightly different slopes of the straight-line trajectories for positive and negative $\Delta m^2_{13}$ indicate the higher-order matter effect. This effect must be (and can be) taken into account when one tries to determine $\theta_{13}$ following the method proposed above. V. RESOLUTION OF THE AMBIGUITIES We now show that the $(\theta_{13} - \delta)$ ambiguity is automatically resolved by tuning the neutrino energy to the oscillation maximum. This must be the case because two straight-line trajectories with the same slope have no intersection points. For our purpose it suffices to work with the oscillation probability at a fixed monochromatic beam energy, because averaging over a finite width complicates the formalism and may obscure the essence of the problem. It can be shown [10] that the difference between the true ($\theta_{13}$) and the false ($\theta'_{13}$) solutions of $\theta_{13}$ for a given set of $P(\nu)$ and $P(\bar\nu)$ is given, under the small-$\theta_{13}$ approximation, by an expression inversely proportional to $z$, where $z = \frac{P_- + P_+}{P_- - P_+} \tan(\Delta_{13}/2)$ (7). Hence, the difference vanishes at the oscillation maximum, $\Delta_{13} = \pi$, where $z \to \infty$. It should be emphasised that our strategy of tuning the beam energy to the oscillation maximum is not affected by the ambiguity correlated with the sign of $\Delta m^2_{13}$, which is discussed in Ref. [11]. This is because the matter effect splits the straight-line CP trajectories of positive and negative $\Delta m^2_{13}$ along the direction of the line itself at first order in the matter effect. The possible correction comes from the higher-order matter effect, which is small at the relatively short baseline of the JHF (as well as the CERN $\to$ Frejus) experiment, as shown in Fig. 3. The effect can easily be taken care of in the actual determination of $\theta_{13}$. VI. CONCLUDING REMARKS In this paper we proposed a new strategy for accurate determination of $\theta_{13}$ which does not suffer from the intrinsic ambiguity due to the unknown value of $\delta$: tune the beam energy to the thinnest CP trajectory and do the measurement in both the neutrino and the antineutrino channels. We have shown that our new strategy completely resolves the ambiguities in the determination of $\theta_{13}$ due to $\delta$ and due to the sign of $\Delta m^2_{13}$, within the experimental accuracy attainable in such experiments. One proposal which could be extracted from the strategy described in this paper is the possibility of having a $\bar\nu_\mu$ beam as early as possible. This would be a promising option in the case of relatively large $\sin^2 2\theta_{13}$, say within a factor of 2-3 of the CHOOZ bound. In this case, the $\nu_\mu \to \nu_e$ appearance events can easily be established in a few years of running of the next-generation neutrino oscillation experiments. The uncertainties in the determination of $\theta_{13}$ would then be greatly decreased by switching to the $\bar\nu_\mu$ beam rather than just continuing to run with the $\nu_\mu$ beam. What would be the implication of our strategy for the determination of $\delta$? The tuning of the beam energy to the thinnest trajectory in fact also provides a good way of measuring $\delta$ (see footnote 3).
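The divergence of z at the oscillation maximum is easy to check numerically from Eq. (7); the coefficient values are again assumed placeholders.

```python
# Hedged sketch: z from Eq. (7) diverges as Delta13 -> pi, so the false
# theta13 solution merges with the true one at the oscillation maximum.
import numpy as np

P_plus, P_minus = 0.27, 0.29          # assumed placeholder coefficients
for frac in (0.6, 0.8, 0.95, 0.999):
    D = frac * np.pi
    z = (P_minus + P_plus) / (P_minus - P_plus) * np.tan(D / 2.0)
    print(f"Delta13 = {frac:.3f} pi -> z = {z:.1f}")
```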
The ambiguity $(\delta \to \pi - \delta)$, however, remains unresolved, and resolving it would necessitate a supplementary measurement, either by using the "fattest" trajectory configuration [11] or by a second detector at a different baseline distance [10]. We should emphasize that once $\theta_{13}$ is measured accurately there are no more intrinsic ambiguities in the determination of $\delta$. We have explicitly shown that the $(\delta - \theta_{13})$ ambiguity is resolved. The only ambiguity which would survive (from the viewpoint of the determination of $\delta$) would be the accidental one that arises in a correlated way ($\delta$ - sign of $\Delta m^2_{13}$), which is nothing but the remnant of the $(\delta \to \pi - \delta)$ degeneracy in vacuum [11]. But it, too, is resolved by either one of the two supplementary measurements mentioned above. Note added: While this paper was being written, we became aware of the paper by Barger et al. [21], whose results partially overlap with ours. However, most of the ambiguities discussed in that paper disappear once $\theta_{13}$ is determined accurately, as we noted above. ACKNOWLEDGMENTS We thank Takashi Kobayashi and Yoshihisa Obayashi for valuable and informative correspondence on low-energy neutrino beams, detector backgrounds, and the $\theta_{13}$ sensitivity of the JHF experiment. This work was supported by the Brazilian funding agency Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), and by the Grant-in-Aid for Scientific Research in Priority Areas No. 12047222, Japan Ministry of Education, Culture, Sports, Science, and Technology. (Footnote 3: Tuning of the beam energy to the oscillation maximum has itself been proposed before, for reasons differing from ours. First of all, it is preferred experimentally because it maximizes the disappearance of $\nu_\mu$ as well as the number of electron appearance events [7]. The tuning of the beam energy to the oscillation maximum for the measurement of the CP violating phase $\delta$ was proposed by Konaka for the purpose of having a maximal CP-odd ($\sin\delta$) term at that energy [20,7].) (Figure caption: The beam profile, the mixing parameters and the matter density are taken as in Fig. 1.)
4,754.4
2001-12-28T00:00:00.000
[ "Physics" ]
Maternal Health-Seeking Behavior: The Role of Financing and Organization of Health Services in Ghana This paper examines how the organization and financing of maternal health services influence health-seeking behavior in Bosomtwe district, Ghana. It contributes to furthering the discussion on maternal health-seeking behavior and health outcomes from a health system perspective in sub-Saharan Africa. From a health system standpoint, the paper first presents the resources, organization and financing of maternal health services in Ghana, and later uses case study examples to explain how Ghana's health system has shaped the maternal health-seeking behavior of women in the district. The paper employs a qualitative case study technique to build a complex and holistic picture, and to report the detailed views of the women in their natural setting. A purposeful sampling technique is applied to select 16 women in the district for this study. Through face-to-face interviews and group discussions with the selected women, comprehensive and in-depth information on health-seeking behavior and health outcomes is elicited for the analysis. The study highlights that characteristics embedded in decentralization and the provision of free maternal health care influence health-seeking behavior. In particular, the use of antenatal care has increased after the delivery exemption policy in Ghana. Interestingly, the study also reveals certain social structures which influence women's attitudes towards their decisions and choices of health facilities. Introduction to the Problem The reduction of maternal mortality and morbidity is now viewed as a top priority area in many developing countries (Bergstrom, 1994). Recent studies point to declining maternal mortality, though rates remain high in the developing and transition world (Okiwelu et al., 2007). However, since it is difficult to measure how much progress is occurring, estimates are usually relied upon (Graham et al., 2001). The maternal mortality ratio in sub-Saharan Africa alone has declined by only 1.6% per annum since 1990, while other regions, such as East Asia and the Pacific, have seen an average annual decline of 4.5%. It is expected that a decline of 5.4% per annum is required to achieve the millennium development targets in the sub-Saharan African region (Wagstaff et al., 2004). This is obviously a tall mountain to climb for countries in this region. Within the knowledge base on why maternal deaths occur and how to avert them, access to maternal health services is a primary intervention for achieving better maternal health outcomes (Maine et al., 1999; Thaddeus et al., 1994; Bour, 2003). Notwithstanding this, the organization of maternal services and the way maternal health services are financed have also been shown to play a part in health-seeking behavior and outcomes (Oppong et al., 1994; Agyepong, 1999; Witter et al., 2007; Witter et al., 2008). This paper presents outcomes on maternal health-seeking behavior after the country introduced the delivery exemption policy, alongside the organization of maternal health services. However, few studies have examined the impact of this policy on health-seeking behavior (Deganus et al., 2006; Asante et al., 2007; Penfold et al., 2007). Studies carried out after the policy have looked either at the financing part in isolation or at the cost-effectiveness of the intervention (Arhinful et al., 2006; Deganus et al., 2006).
There are also other studies that have shown results on the effects of the reform on maternal births and the incomes of health workers (Bosu et al., 2007; Witter et al., 2007). This paper therefore contributes to the extant knowledge by examining maternal health-seeking behavior from a health system perspective that combines the organization and financing of maternal health services. The remainder of the paper is structured as follows: section 2 discusses the health system, beginning with human resources and the organization of maternal health services. The financing of maternal health services at the national and district levels is also discussed in this section. The methods used in the study are presented in section 3. The case study results, which highlight the influence of the health system on maternal health-seeking behavior, are examined in section 4. Finally, section 5 deals with the conclusions and recommendations of the paper. The Health System in Ghana Since maternal health care relies on the entire health system of a country, its outcomes, including health-seeking behavior, can be traced to the way health systems operate (Parkhurst et al., 2005). The health system includes the human resources, the organization of maternal health services, which concerns the availability of both private and public sectors, and reforms in the health sector (Graham, 2002). These are, in general, what Ghana's maternal health service depends on, and they have various implications for maternal health-seeking behavior. This section of the paper discusses these elements of the health system. Firstly, human resources in maternal health services are generally understood as the presence of a skilled birth attendant during delivery (Parkhurst et al., 2005; Hoope-Bender et al., 2006). This can be a doctor, midwife or nurse, alongside the increasing numbers of traditional birth attendants (TBAs), who may not have any midwifery training (WHO, 2004). It is the accessibility of these skilled birth attendants that affects maternal health outcomes. However, studies have shown that the presence of these birth attendants may be meaningless unless they are well coordinated, so that they become accessible, and resourced enough to do what is expected of them should complications or, in the extreme case, deaths occur (Graham et al., 2001; WHO, 2004; Parkhurst et al., 2005). Secondly, both private and public health services influence the delivery of maternal health services and maternal health outcomes (Parkhurst et al., 2005). Accordingly, people tend to seek health services in private or public health facilities based, perhaps, on their ability and willingness to pay. However, people's ability and willingness to pay for health services are subject to their incomes. Differences in income tend to create disparities and inequalities in health-seeking behavior, thus affecting maternal health outcomes (Oppong et al., 1994; Agyepong, 1999). The disparities are wider in developing and transition economies (WHO, 2005), of which Ghana is no exception. We can argue that if income determines the facility where women seek health care, then the quality of available healthcare can be established. This stems from the assertion that private health facilities are inclined to offer quality services as compared to public health facilities (Wilson et al., 1997; Graham et al., 2001; Parkhurst et al., 2005).
The low quality of healthcare in public health facilities is a consequence of the exodus of well-qualified human resources from public health facilities to their private counterparts, driven by the attractive incentive packages the private health sector offers these workers. Thirdly, reforms pertaining to health care are happening globally. In sub-Saharan Africa in particular, some of the key reforms are the decentralization of health services and the sector-wide approach (SWAp) (Peter et al., 1998; Mayhew, 2003). In Ghana, for instance, there have been models of decentralizing the public administration with defined functions at the national, regional and district agencies. However, the role of the central administration still remains strong (GOG, 1996; Mayhew, 2003). The Ministry of Health (MOH) retains policy-making functions, while the regional hospitals and district health management teams have the status of Budget Management Centers (GOG, 1996; Agyepong, 1999; Mayhew, 2003). The exact roles and functions of these decentralized offices remain overlapping and mixed in many instances (Mayhew, 2003). The control from the centre also affects the flow of funds, since the MOH assumes responsibility for staff recruitment and payment, as well as budgetary allocations and planning specifications (Agyepong, 1999; Mayhew, 2003). Consequently, the district level may not have authority over who is hired or over the human capital at its disposal. These mixed functions within the system affect health programs, such as antenatal care and delivery, which are pertinent for maternal health outcomes. The SWAp was introduced in most developing and transition countries to de-concentrate funds and streamline them to certain assigned projects that are prioritized, with a budget ceiling at the national level (Cassels et al., 1998; Hutton et al., 2004). In Ghana, the SWAp was introduced to increase the coordination of funds, since the health sector relies heavily on donor support (Goodburn et al., 2001). Even though this practice has in some cases led to delays in funding from the government to the district level (Mayhew, 2003), what is worth considering is how reforms can shape maternal health services and, eventually, maternal health outcomes. In conclusion, these reforms address how health sector funds are distributed, the merging of separate health services or privatization of services, and the re-organization of health care delivery (Parkhurst et al., 2005). So far, the reforms that have been undertaken in Ghana have affected the local systems, changed the nature of incentives for health workers, and regulated and improved accountability at all levels of the health sector (Agyepong, 1999; Mayhew, 2003). Financing Maternal Health Care in Ghana As of 2010, the maternal mortality ratio in Ghana stood at 350 deaths per 100,000 live births, making it one of the countries with a high rate of maternal mortality (Population Census, 2010). In order to reduce maternal mortality and meet the millennium development targets by 2015, there was a need to clear some of the barriers that hinder women from seeking maternal health care. The targeted barriers include financial challenges, which are among the main issues that prevent women from seeking maternal health care. Accordingly, the provision of free maternal care under the delivery exemption policy was introduced in 2004. This policy was financed by the local government ministry through the Highly Indebted Poor Countries (HIPC) debt relief fund (Witter et al., 2009).
The exemptions for the delivery care program began earlier than the National Health Insurance Scheme (NHIS), but from 2008 it was financed through the NHIS and other means (MOH, 2004; NHIA, 2008). The policy exempted pregnant women, both insured and uninsured under the NHIS, from paying facility user fees during pregnancy check-ups and delivery. Although this policy did not reduce facility costs to zero, it granted pregnant women access to virtually free antenatal, delivery and postnatal care in many health facilities (Armar-Klemesu, 2006; Witter et al., 2009). The additional merit of the delivery exemption policy was that financial barriers were removed, particularly for poorer women. However, non-facility costs (such as transportation costs) were not included in the policy. The NHIS programme is centrally administered and is funded through formal and informal sources. Deductions from the Social Security and National Insurance Trust (SSNIT) and government budget allocations are formal contributions to the National Health Insurance Fund (NHIF). Annual premiums, which range between 3.6 USD and 24 USD per head based on income and ability to pay, form part of the informal contributions to the fund (NHIA, 2008; NHIA, 2010). Also, taxes (both direct and indirect) levied on selected goods and services go into the fund as informal contributions (GOG, 2003). These contributions are supplemented with grants, donations and gifts. The National Health Insurance Levy (NHIL) from taxes accounted for about 61% of the total income of the NHIS in 2009 (NHIA, 2010). Formal sector contributions made up 15.6%, while the informal sector premium was only 3.8% in the same year. The NHIF provides funds for the scheme and financially supports people who are not able to pay. The scheme is designed to promote social health protection through risk equalization, cross-subsidization, solidarity, equity and quality care. The NHIS also reduces unexpected expenditure on health care and catastrophic spending among the insured. The scheme also exempts certain categories of individuals from paying annual premiums, such as children under 18 years and adults above 70 years (NHIA, 2008; MOH, 2009). Notwithstanding the National Health Insurance reimbursements, other means by which the regional and district level health services can source finances include internally generated funds, funds from Non-Governmental Organizations (NGOs), government funds, and contributions of cash and in kind from philanthropists (Agyepong, 1999). Health Services in Bosomtwe District There are three sub-districts and 63 communities in Bosomtwe district. The estimated population was 93,910 as of 2010 (Population Census, 2010). Health service supply is organized in 14 Community Health Planning Service (CHPS) zones (GHS, 2009). There is a staff strength of three hundred and eighty-eight (388) health personnel (public and mission). Of these, one hundred and eighty-four (184) work for private and mission health facilities and 118 are Ghana Health Service (GHS) personnel in the public health services. Table 1 gives a description of the types of health professionals working in the district. Also, health service delivery is carried out in sixteen (16) public and private health institutions. These institutions are made up of four (4) government facilities, seven (7) CHAG or Mission facilities and five (5) private facilities. The district also has 38 outreach points that offer Reproductive and Child Health Services (GHS, 2009).
There is also a community-based surveillance program in the district, which employs volunteers who have the responsibility to record and report diseases, deliveries and deaths in their various communities on a monthly basis. There are other non-orthodox treatment centres in the district. Table 2 shows the types of health facilities, both publicly and privately administered hospitals, clinics and maternity homes, in the district (key to Table 2: G - Government institutions, M - Mission, P - Private). Currently, there is collaboration between the health directorate and other health centres in the district to enhance health service delivery. Furthermore, the sources of funding for the district health directorate are the donor pool fund, internally generated funds and government subvention (GHS, 2009). The internally generated funds include funds from the NHIS and fees paid by patients which are not covered by health insurance. Ministry of Health funds are also among the sources of funds for the district health directorate. However, this fund is usually earmarked for specific programs (for instance, TB care or malaria programs). The district health directorate can also be supported by the district assembly with funds from the district assembly common fund when the need arises. The district assembly usually helps the health service directorate with donations in cash and in kind. Holding this picture of the health system of the district in mind, we will present the interviews and group discussions to examine what impact such resources (personnel), organization and financing of district health services can have on maternal health-seeking behavior in section 4 of this paper. Methods Yin (2011) posits that the qualitative research approach has become an attractive, if not the mainstream, sort of research in both academic and professional operations. Since this study sought to access and produce in-depth and adequate data essential for its analysis, a qualitative research approach was employed. Also, in order for the researchers to thoroughly evaluate and examine the data collected from the women in their natural setting, a case study design was applied to complement the qualitative research method. Case Selection The diverse case method, a non-random purposive procedure, was used to select the cases. This method has the capability to handle differing cases within categories and also to explain the outcome through the different cases. The method covers the relevant range of variation in cases, which enhances the representativeness of the variability in the population (Gerring, 2008). Women were selected based on indicators such as delivery at home, delivery at a health facility, and peculiar experiences before, during and after birth. However, for a woman to be included in this study, she must have given birth not more than a year prior to data collection. This allowed us to minimize recall bias in the responses of the informants. Other key informants, including a doctor, three midwives and a trained traditional birth attendant, were also selected. Table 3 below depicts the number of informants by age and health zone. The distances (km) of the health zones from the district hospital are also shown. Data Collection Techniques The study used interviews with a semi-structured guide to collect data from informants (Yin, 2009).
The flexibility of a semi-structured guide, which allows changes in the order and form of questions such that every informant can be probed when issues interesting and peculiar to an individual are encountered, was crucial for our choice of this type of guide (Crang et al., 2007; Kvale et al., 2009). The questions for the interviews were prepared to cover the relevant areas regarding factors that influence women's decisions and choices of maternal health services. Two group discussions were also conducted, one in each of the towns selected for the study. In order to make a comparative analysis, the selection of the women in Kuntenase and Abono was based on travelling distance to the district hospital, which created a natural disparity between women living in these two towns. The discussion groups were made up of fairly homogeneous women. We made sure that the women who took part in the group discussions had already been interviewed one-on-one during the interview phase. The questions for primary informants were prepared in sections. Each section had sets of questions on decisions and choices of maternal health services. The questions centered on decisions about maternal health services were: the decision on which facility to use for delivery, and the factors that influenced the decisions, for instance social relations (close relatives, friends, or hospital staff they know). Other questions centered on choices of facility for maternal care (for instance, antenatal care and place of delivery) and the type of assistance received during delivery. Questions on knowledge of and need for maternal care, and on barriers to maternal health care, were also asked. Informants' knowledge of Traditional Birth Attendants (TBAs) or other health care providers (midwives) in the communities nearest to them was also elicited. Their knowledge of rights to certain types of services in the hospital and of patient satisfaction was included in the questions. There was also a section on the background information of our informants, which centred on age, marital status, level of education, occupation and residence. Informed consent was sought from all participants. Informants were also made aware that their direct quotes might be used in the reporting. All the interviews and discussion groups were moderated by the researchers. The questioning and answering occurred in a calm and serene atmosphere that enabled the researchers to tape-record the responses with a small tape recorder. Following Ravasi and Zattoni (2006), the various transcriptions were supported with contact summary sheets and interview notes. Both the interviews and focus group discussions were held in the local Akan language, but at the reporting stage all the quotations were translated into English. Text Analysis Texts transcribed from the interviews and group discussions with the women were analysed using a selective coding method: selective in the sense that interesting, diverse and rival comments in the conversation were brought under related broad themes formulated from the research objectives. The content of such comments from the transcribed interviews was studied to check for patterns and how they related to the concepts and analytical approaches used. Limitations Although the small sample reduces the strength of generalization, a replication method was applied in this study in order to make generalization possible. Yin (2009) argues that replication logic is the same logic that underpins each case and can be applied in all cases.
The study employed a common technique for data gathering, and this method was directed by a case study protocol. This design was chosen to strengthen the external validity of outcomes and make the results closely representative of the population (Gerring, 2008; Yin, 2009). Results All the examples of health-seeking behavior among the women occurred after the introduction of the delivery exemption policy, with major shifts in the financing of maternal health from local government to national insurance coverage. Study Participants From the interviews, important background information on informants concerning age, level of education, occupation and parity was analysed. The average age of the women involved in the study was 26 years. The oldest among them was 39 years and the youngest 21 years. The ages of the informants were also reflected in the number of children they had. The number of children (parity) per informant ranged between 1 and 7. All our informants had formal education, with the least educated having completed primary education. The highest level of education among our informants was high school, and there were only two such informants. The occupations informants were typically engaged in were farming and trading. Only one of our informants was unemployed. From our analysis of the interviews with the women, we tried to observe whether there were any differences between the maternal health-seeking behavior and the background information of the women. We did not observe any consistent difference between women in their health-seeking behavior and the type of occupation they were engaged in. However, differences were observed for age, parity and level of education, and the use of maternal health services. For instance, those who had higher levels of education tend to use higher levels of care (i.e. hospitals) than women with lower levels of education, who usually give birth at home or with a traditional birth attendant. Also, women who had experienced more than one birth, especially beyond four children, tend to give birth at home, while those experiencing their first delivery utilize the health facilities. Health System and Health-Seeking Behavior The findings of the study highlight that various characteristics of the maternal health system are determining factors in women's behavior and decisions to seek care, and in their choice and use of maternal health services. These characteristics include the location of the health facility, the order of referrals, the capacity of the health facility, and also how the financing of maternal health care through the delivery exemption policy influences health-seeking behavior. These characteristics of the maternal health system are examined in the subsequent sub-sections. Who Pays and Behavior Outcomes Some enabling factors brought about by structural policies influence the health-seeking behavior of the women in Bosomtwe district. The introduction of virtually free maternal health services under the delivery exemption policy, financed under the NHIS, has created new forms of health-seeking behavior among women. However, there are marked differences in seeking maternal health care from conception to delivery. For instance, attendance at antenatal clinics (ANC) has increased tremendously. The women in the study are more aware of the need to seek maternal care during pregnancy to know their health status.
Even though distance is still a barrier, the use of health facilities for antenatal check-ups has increased, and women tend to use them more than they actually need to. This is termed 'moral hazard' (Philips, 1990). As a result of the insurance coverage, all the women we interviewed attended ANC more than twice before delivery. The situation is quite different with deliveries, as labor can happen suddenly. For instance, labor may occur in the night, and there will be the need for delivery at home if there are no easy forms of transport for the women, especially those in locations far from the facility. Capacity of Facility and Behavior Outcomes The position of a health facility in the hierarchy of the health structure provides an idea of how the facility is resourced and the kind of services it can provide (Bergstrom, 1994; Ojeifo, 2008). For instance, the capacity of a health facility can be measured in terms of the number of staff (both skilled and partially skilled), the number of delivery wards, obstetric logistics and the level of technology. In the Bosomtwe district, the health facility which is well resourced is the district hospital. The district hospital is supported by other lower-level care providers, such as the clinics, and by other formalized maternal care services like trained TBAs that provide maternal health services. Interviews with key informants reveal that both the district hospital and the other supporting lower-level facilities are under-resourced in terms of staff, equipment and technology to provide satisfactory services. In a group interview with midwives on their resources, some mentioned that: ''The hospital has three midwives. There is only one labor ward here… and only one delivery pack, which needs 'first class' disinfection for so many hours before it can be used for other deliveries. The women are therefore told to bring items such as carbolic soap, disinfectants, white or grey cloth for this exercise.'' The clinics and trained TBAs in the communities also have virtually no resources at all. An interview with a trained TBA, who has been practicing for over 8 years and claims to have assisted in uncountable deliveries, yielded the following: ''[All] I have is my blade, thread, hand glove and a cloth, [which are] necessary to assist women during delivery. After the assistance, I take only some few gifts and items from the women to bless my soul. This is because the work is difficult.'' This description shows how poorly resourced auxiliary health providers like the trained TBAs are, and the limited level of assistance they can provide should complications occur in the remote communities of the district. These trained TBAs serve as the first point of call for most of the women in the rural areas when they are in labor and require immediate assistance. However, in most instances the hygiene surrounding deliveries is compromised, which can possibly increase the risk of postpartum infections (Ojeifo, 2008). The hospital staff has instituted a form of facility use cost through the collection of items such as disinfectants, soaps and white Kaliko cloth to supplement their materials. There is therefore a strong tendency for the choice of place of delivery to be the home, without supervision, since women who cannot afford such items will be inclined to deliver at home even though they still use the hospital for antenatal care. Decision to Seek Maternal Health Care The decision to seek maternal care can be early, delayed or spontaneous.
This depends on the situation that prevails over the whole period from conception to delivery (Bergstrom, 1994). The decision made when choosing a health facility for maternal health services, whether delayed or spontaneous, may come from the woman, from close relatives, or be shaped by where the woman resides. It is evident that decisions to choose a maternal health facility rest with the help of 'significant others', who are either the mother or the grandmother of the woman. For instance, one of the women interviewed said: "I am staying with my mother and she said it would be proper if I gave birth at the hospital". Another woman said: "it was my mother's advice for me to give birth at home". This is usually connected to women with first-birth experience and to their proximity to the hospital (Bour, 2002). Women with high parity have more autonomy in deciding for themselves where to give birth. Close relatives with high parity and extensive experience with birth are also highly regarded in circumstances pertaining to decisions and choices of health facility. This is connected to cultural reasons, where the mother or grandmother helps in taking care of the baby, and to their ability to assist in delivery. Informants from the Kuntenase health zone have more autonomy to decide the type of health facility for birth. This is due to their proximity to a health facility and their socioeconomic status, such as education and income levels. Their locational and effective access opportunities are high and thus influence their health-seeking behavior. Women are able to make a decision on where to give birth based on their knowledge from using the facility at their first experience or from friends who had experienced it (Gay et al., 2003). Informants who choose home births do so on the basis of their past experiences with home births. For instance, a 33-year-old informant with seven children said: "I have never given birth in hospital before. When I take orthodox medicine, I fall sick. In the beginning I had no knowledge of births and my mother assisted me always, but from my fourth child onwards I have given birth alone, unassisted." The influence of social relations on the decisions of women and their health-seeking behavior applies not only to women farther away from health facilities, but also to women who are closer to a maternal health facility. Women with lower parity have their close relatives deciding, in almost all cases, the facility and the appropriate time for them to go there when in labor. On the contrary, during the periods when user fees for maternal services applied, women with low economic status depended virtually on their husbands. With the introduction of the free delivery exemption policy, women are free to decide which facility to use for antenatal care and birth. This is an example of norm and value changes resulting from financing reforms in the health sector (Valtonen, 1994; Mikkelsen, 2005). Reasons for Delivery at Home Delivering a baby at home, particularly without professional supervision, can be risky and may come with unforeseeable consequences (Okafor et al., 1994). From the perspective of our informants, the reasons for giving birth at home were discussed in groups, especially for those who had a distance disadvantage. Some home births are supervised by trained traditional birth attendants and others occur without any supervision.
In the Abono community health zones, which are served by lower-order facilities, home birth is a common practice. Some women make preparations for home birth and give birth at home. This is due to their knowledge about the hospital's resources, and for other reasons. Accordingly, two women had the following to say: "For children that walk in the sand, I have ten of them and three have passed away. I gave birth to nine at home and the last in the hospital, for which I suffered complications. Home birth is just like the hospital. In the hospital, no one will give birth for you. All they do is to help in some cases with injection and water drips (am I lying?). Otherwise, everyone will give birth at home." "I have seven children and I gave birth to all of them at home. From the fourth child onwards I always gave birth at home unassisted. I always give birth at home because when I take orthodox medicine I become so weak. I usually use herbs from my village before and after birth. I have never given birth at the hospital but I am always strong by God's grace." The perception of women in the Abono health zone is that hospital care sometimes conflicts with the use of traditional forms of care. However, for those who had experienced hospital care, the use of the hospital is not different from home birth, and they can substitute home birth for hospital birth if they are sure of an uncomplicated birth, based on their antenatal care examinations. Among women with first-birth experience and those who expected complicated births, the need for specialised maternal care is considerably intensified. Women with knowledge of the signs of complicated birth, pregnancy-related illness and first-birth experiences prefer the hospital to the home as a place of giving birth. In the Kuntenase community health zone, hospital birth is common and women who give birth at home are looked upon strangely. From the interviews it is clear that giving birth at home is not all about experience with birth; other factors such as insurance coverage and distance from the health facility also play a role. "It was in the olden days that women gave birth at home. Nowadays no one should tell you to go to the hospital, more especially when you do not have to pay anything. When you look at the level of civilization, you follow the world as it moves and knowledge is increasing." This was the view of a 32-year-old mother with 2 children and secondary-level education. The risk of giving birth at home is considered high for women in the Abono health zone. The risk is, however, perceived to be lower for women in communities in the Kuntenase health zone. This behavioural margin between the women in these two health zones is partly a result of the distance to the nearest health facility for birth, the forms of knowledge they are exposed to, health beliefs, and the level of risk the women can tolerate (Philips, 1990; Olujimi, 2007). The access opportunities of women and the structure of the health care delivery system, in terms of levels of care, strongly influence the behavior of the residents of a particular area. For instance, all other factors remaining unchanged, once a woman in the Abono health zone is successful with her first home birth, the rest of her children may be born at home unless there are serious complications that require critical referral. Women in the Abono health zone do not feel treated unfairly in the provision of health services if complications are minimal with the kind of services they receive.
Thus, an important finding is that inequality does not necessarily imply inequity for women in the Abono health zone if complications do not occur. By-passing Health Facilities The concept of 'by-passing' health facilities refers to the situation where, for instance, a patient uses a higher-order facility for a treatment that a lower-order facility can offer (Oppong et al., 1994). This situation can occur due to the resources of a particular facility, the organization of health service delivery and the nature of health care financing. For instance, the empirical facts highlight that pregnant women misperceive the use of both higher and lower levels of maternal health services. Usually, antenatal care is sought from a higher-order facility like the regional hospital. From the perspective of a midwife who works in the district hospital, 'by-passing' occurs because: "Some of the pregnant women are petty traders who trade in all kinds of goods and services in the city. They sell in the city all day and come back to their villages in the evening. They think the nearest facility for antenatal care is the regional hospital. As such we [hospital staff] do not have any records on them [women]. Some therefore either deliver at the regional hospital or at home." Women who do not have records in a particular health facility prefer not to give birth in that facility, since they are not sure of receiving satisfactory services. We observed that the by-passing of facilities is due not only to the economic activities women engage in, but also to other social outcomes, such as finding out which facility will provide better services (Oppong, 1994). For instance, some of our informants visit more than one maternal health facility for antenatal care. This is what a 28-year-old mother with first-birth experience had to say in an interview: "... some women say if you are pregnant you should visit at least two health facilities so that during labor you choose the one which you think you received enough care and satisfaction of services from the hospital staff." Consistent with the study of Okafor et al. (1994), women now have more options in services and visit many facilities as a basis for reference and comparison, because they are exempted from fees irrespective of the facility they use, provided the facility is registered with the NHIS to give 'free' maternal health care. These 'rational' decisions with regard to visiting more than one health facility lead to the misuse of levels of health services. The connection between the organization and financing of health services and how the health system shapes health-seeking behavior is clearly demonstrated here. Barriers and Behavior Outcomes Access to maternal health care may be interrupted by an inability to use the health services provided by health facilities. The users of maternal health services may be discouraged from using the services delivered (Valtonen, 1994). The considerations that may limit the use of services from a particular health care facility are manifold, including geographical, medical, socio-cultural and knowledge barriers resulting from the health system and its human orientation (Thaddeus et al., 1994; Barnes-Josiah, 1998). The geographical barrier, which has to do with the distance between service providers and recipients, is more entrenched for women farther away from a health facility.
In some remote towns, the unavailability and cost of transportation are severe all year round; thus, the boats that mostly serve tourist purposes on Lake Bosomtwe are sometimes used to transport women in labor. This points to transport barriers to maternal care, which cause delays in reaching the facility. Here is a descriptive example of an extreme geographic barrier one woman gave us during the group discussion: ''… [T]he women who live on the other side of the lake have an even worse situation. Sometimes they have to cross the lake before they get to the nearest hospital to deliver. At first it was forbidden to cross the lake when pregnant, but now we cross and it is even easier with some bridges at some places.'' Medical Barriers and Behavior Outcomes The structure and organization of maternal health services present medical barriers for women seeking care. The difficulty of referring patients to another level of care, waiting times at the hospital, and partial user fees at some health care levels raise pressing issues for consideration. The waiting time at the hospital is a stage of delay in using health services (Maine, 1993; Barkat et al., 1995). In the district hospital, which has one labor ward, waiting times are long even after arriving at the facility. The long waiting periods are a result of the capacity of the hospital, in terms of logistics and materials, to deliver satisfactory services. This situation makes women wait at home until they feel that their time is fully due, and this may increase the risk of giving birth at home unassisted. This is consistent with the findings of Wilson et al. (1997). Another delay, related to women in labor having to wait at one level of health care, is the difficulty of referral from one level of care to another. The lower-order levels of care, for instance trained TBAs, believe they are capable of assisting delivery in all conditions. Critical referrals from a trained TBA to a midwife in the hospital are sometimes delayed because trained traditional birth attendants are not resourced enough to refer and accompany pregnant women and thereby ensure continuity of care. A typical medical barrier observed was the partial user fees and bottlenecks introduced by hospital staff to supplement the capacity of the facility. Even though maternal health services are virtually free as a result of the NHIS, lower-order maternal health care providers like the trained TBAs, who are not enrolled in the NHIS, accept gifts from women as fees. Also, the hospitals, clinics and trained TBAs are under-resourced and therefore collect items such as carbolic soaps, disinfectants and Kaliko cloth to assist delivery. These items are collectively known as 'dropping prices': they are the new user fees, in the sense that they have come to replace the user fee and have increased the cost burden for some women whose social status is low. Conclusion This paper highlights the ramifications the health system in general can have on maternal health-seeking behavior and, eventually, on maternal health outcomes. The study reveals that elements of the health system (such as human resources and organization/financing) serve as barriers and filters between the women and the actual facility they decide on and choose for antenatal care and delivery. Interestingly, the study also illuminates certain social structures (such as values, norms, health beliefs and family resources) to which health planners should pay particular attention.
These social structures influence the decisions and choices of women in selecting the kind of health facility for their antenatal care and delivery. Figure 1 recapitulates the findings of the study.

Figure 1. Influential health-seeking behavior elements in the health system and facility used (source: authors' own, for the purpose of this research).

We recommend that, since the decisions and choices of women are influenced by the social structures and the health system, discussions on maternal health planning and policy ought to encompass the entire health system and the social structures in order to achieve the needed maternal health outcomes. Recent debates and discussions on maternal health outcomes in sub-Saharan Africa must address the entire health system as well as the social structures. Even though modest progress in maternal health outcomes has been achieved in the last decade, a closer look at these perspectives on maternal care can help improve maternal health outcomes. Also, women should have the right to choose to give birth at home; the risk associated with such deliveries can be minimised if trained traditional birth attendants are supported and financed at the district level. In addition, indirect health care costs such as transportation and petty charges at the facility should be kept to a bare minimum. These changes will go a long way toward shaping not only maternal health-seeking behavior, but also maternal health outcomes.
Temporal and spectral evolution of an interrupted virtual single-photon transition: creation of optical gain and loss

We examine the optical response of a virtual dipole transition of a quantum-mechanical two-level system (TLS). In the case of off-resonant excitation, the time-integrated dipole response (TIDR) is expected to be zero, which corresponds to transparency of the system with respect to the exciting pulse. Our new time-frequency representation reveals that even for a zero TIDR there are positive and negative contributions included in the response. Furthermore, we present a way to access these contributions by using a second electromagnetic field, which interrupts the temporal evolution of the dipole response. The theoretical results are confirmed by attosecond transient absorption spectroscopy in helium (He).

Introduction

The advent of light pulses with sub-femtosecond duration opened the door to experiments that are able to follow the electronic dynamics of atoms and molecules [1-5]. In attosecond transient absorption spectroscopy, a single attosecond pulse (SAP) or an attosecond pulse train (APT) is overlapped in an interaction target with a second laser pulse, which usually is the fundamental infrared (IR) pulse. The transmitted photon yield in the extreme ultraviolet (XUV) spectral region is detected as a function of photon energy and of the delay between the two pulses. Most of the experiments performed so far concentrate on the excitation of bound-bound transitions in matter or on direct ionization. Off-resonant excitation of matter was not investigated in previous attosecond transient absorption spectroscopy experiments.

Here, we present a novel time-frequency representation of the temporal evolution of an off-resonantly excited virtual dipole transition. The temporal evolution is not accessible in linear optical experiments, where only the TIDR can be detected. Our theoretical analysis reveals positive and negative contributions, which are inherently included in the response. In the case of off-resonant excitation these contributions cancel out in the time integration. A second electromagnetic field can interrupt the evolution in time. This results in a nonzero and therefore detectable TIDR.

We verify the theoretical results with an attosecond transient absorption experiment in He, where we observe optical gain and loss of equal magnitude. The optical gain and loss are gated in time by the delay between the pulses.
For our theoretical time-frequency analysis we use a quantum-mechanical TLS, consisting of the ground state g and the excited state e. Both states are separated by the transition energy Δ. The response of the system is characterized by its transition dipole d(t) ∝ a(t) + a*(t), where a(t) represents the amplitude of the excited state e. If we excite the TLS off-resonantly, the transition dipole can be written in first order and in the rotating-wave approximation as (using atomic units)

d(t) ∝ a(t) + a*(t),  with  a(t) ∝ i e^{-iΔt} ∫_{-∞}^{t} V(t') e^{i(Δ-ω)t'} dt'.

Here V(t) and ω are the electric field envelope and the angular frequency of the driving field. Figure 1A shows the temporal evolution of the dipole as a function of time and photon energy. The exciting pulse at 23.37 eV is off-resonant with respect to the excited state at 23.09 eV. A second electromagnetic field is now used to modulate the transition energy of the system and interrupt the temporal evolution of the dipole. The interruption results in a nonzero TIDR, since the positive and negative contributions no longer cancel out. A positive response corresponds to absorption, while a negative response represents a net emission of photons, which can be understood as optical gain. Figure 1B displays the TIDR as a function of photon energy and delay.

We test our model by comparing it with attosecond transient absorption spectroscopy in He. The He 1s² ground state represents the lower state of the TLS, while the excited state e corresponds to the excited 1s3p state. For the off-resonant excitation we use the 15th harmonic of an APT. It is energetically centered between the 1s3p and 1s4p excited states. The latter is negligible since its transition strength is significantly weaker compared to the 1s3p transition. For the control pulse we use a moderately strong (intensity < 10^13 W/cm²) IR pulse. This control pulse can be delayed in time by changing the optical path length. The transmitted XUV yield shows a strong decrease around ~0 fs delay. This relates to a two-photon absorption of one XUV and one IR photon (see Figure 2A). In addition, the combined photon energy is sufficient to ionize He. Figure 2B shows the change of absorbance. Despite the strong absorption feature (positive change of absorbance), which was already visible in the XUV yield, we now observe a net emission of XUV photons (negative change) for certain photon energies and delays between the pulses. In excellent qualitative agreement with the theoretical results, both absorption and optical gain are of the same order of magnitude.

Conclusion

We introduced a new time-frequency representation for the temporal evolution of a virtual dipole transition. This representation shows that even in the case of a zero TIDR there exist positive and negative contributions to the response. Furthermore, we demonstrated for the first time that a second electromagnetic field can interrupt the dipole response, which leads to optical gain and loss. The delay between the two fields enables us to choose between gain and loss at certain XUV photon energies. Finally, we experimentally verified our theoretical results with attosecond transient absorption spectroscopy in He.

Fig. 1. Figure 1A shows the temporal evolution of the dipole response in time-frequency representation. Light grey (red) corresponds to a positive response and dark grey (blue) to a negative response. In Figure 1B the calculated time-integrated dipole response for different values of the delay is displayed.

Fig. 2.
Figure 2A shows the transmitted XUV radiation as a function of delay and photon energy. Around 0 fs delay, 50% of the XUV radiation is absorbed due to two-photon absorption. Figure 2B depicts the change of absorbance as a function of delay and photon energy.
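To make the cancellation argument concrete, the following Python sketch simulates the first-order dipole response of such a TLS. It is an illustrative toy model rather than the calculation used in the paper: the second field is reduced to a sudden phase kick of the excited-state amplitude at a chosen time, the spectral response is evaluated with the expression S(ω) ∝ 2 Im[d̃(ω) Ẽ*(ω)] commonly used in transient-absorption theory, and all pulse parameters (duration, kick time, damping) are invented for illustration.

import numpy as np

# Toy model (not the authors' code): off-resonantly driven TLS in first-order
# perturbation theory and the rotating-wave approximation. Atomic units.
au = 27.211                              # eV per atomic unit of energy
Delta, omega = 23.09 / au, 23.37 / au    # transition and driving photon energies
tau = 500.0                              # pulse duration, long enough to be off-resonant
t = np.arange(-2000.0, 20000.0, 0.2)
dt = t[1] - t[0]
env = np.exp(-(t / tau) ** 2)            # field envelope V(t)
E = env * np.cos(omega * t)              # real driving field
damp = np.exp(-np.clip(t, 0.0, None) / 5000.0)  # weak decay so spectra converge

def dipole(phase_kick, t_kick=800.0):
    # a(t) = i e^{-i Delta t} * cumulative integral of V(t') e^{i(Delta-omega)t'}
    a = 1j * np.cumsum(env * np.exp(1j * (Delta - omega) * t)) * dt
    a = np.where(t >= t_kick, a * np.exp(1j * phase_kick), a)  # sudden interruption
    return 2.0 * np.real(a * np.exp(-1j * Delta * t) * damp)   # d(t) ~ a + a*

w = 2.0 * np.pi * np.fft.rfftfreq(t.size, dt)
Ew = np.fft.rfft(E) * dt
for kick in (0.0, np.pi / 2):
    S = 2.0 * np.imag(np.fft.rfft(dipole(kick)) * dt * np.conj(Ew))
    near = np.abs(w - Delta) < 0.002
    print(f"kick = {kick:.2f} rad: S near Delta in [{S[near].min():+.3e}, {S[near].max():+.3e}]")

In this toy model, without the kick the response near the transition should nearly vanish (the transparency of a virtual excitation, with positive and negative TIDR contributions cancelling), while the kick leaves a residual coherence whose spectrum contains regions of positive S (loss) and negative S (gain), qualitatively mimicking Figure 2B.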
Reimagining GIS Instruction through Concept-Based Learning

Research in geographic information science has not yet found clear answers to the questions of what geographic information is about or what a geographic information system (GIS) contains. This lack of consensus makes it especially challenging to teach and learn GIS. Existing pedagogical approaches either focus on the representational level of data (e.g., "raster and vector") or are too generic (e.g., "geo-referenced information"). This characterization of GIS and its content is difficult for learners to transfer and apply broadly. As instructors, we approach the challenge of teaching GIS from a conceptual basis. We describe our process to develop a set of core concepts of spatial information, which we use to redesign an undergraduate-level introductory GIS course. Our intervention focuses instruction on the kinds of questions that geographic information enables before training students to produce workflows and answers through system commands. The course redesign complements and informs ongoing research on core concepts of spatial information. Our results demonstrate that GIS courses can deliver more than software training, indicating both theoretical gains and didactic challenges.

Introduction

A quarter century into labeling academic studies around GIS (geographic information systems) as science (Goodchild, 1992), we still lack a consensus on what this science is about or, technically speaking, what GIScience studies and what constitutes geographic information. Clearly geographic information is about more than "raster and vector data" or "geo-referenced information", but what exactly is it? This lack of consensus is particularly apparent in the context of GIS instruction. A related consequence is the difficulty of explaining to an economist or biologist, for example, what GIS can do for them. The ambiguous definition of GIS as both a tool and a scientific endeavor shapes core concepts and learning outcomes (Wright et al., 1997). As a tool, GIS plays a supporting role in applied problem solving for other research endeavors; for example, it is an enabling technology for the study of botany, history, and many other subjects (Kerski et al., 2013). In a tool-centric view of GIS, the core concepts that students must acquire involve learning the software commands that one performs with a GIS, such as data "capture", "manipulation", and "integration" (Raper and Green, 1992). However, GIScience offers more than a set of tools for interacting with the world (Goodchild, 2006). Learners are increasingly drawn to interdisciplinary GIS methods for answering complex questions across the social, natural, and physical sciences (Kidman and Palmer, 2006; Rickles et al., 2017). While there is consensus that as a tool, GIS can be used to capture, store, check, and display data related to positions on Earth's surface, it is still unclear what these data describe conceptually; the answer could be "almost anything", including populations, bus schedules, or climate models. A theory of geographic information is needed to bring order to this variety. How can this be done without restricting GIS to some application domains (e.g., terrains or utilities), while still saying something meaningful about the content of data? Is there a level above data models ("raster and vector") or applications ("viewshed analysis"), and below the generic and obvious ("geo-referenced information"), at which we can talk about and teach GIS?
We explore this question by designing an introductory GIS course structured around a set of core concepts of spatial information following those proposed by Kuhn (2012). The goal of the course was to define an appropriate conceptual level at which the contents of a GIS could be meaningfully studied and discussed with first-time learners. At this level, all possible GIS contents can be distilled into instances of a small set of core concepts. The development of the course accompanied research to test the core concepts in real-world GIS applications, which involved first developing formal specifications and a high-level language around them, and then implementing them in the Python programming language (Kuhn and Ballatore, 2015). The course described in this paper simultaneously informed, and was informed by, this GIScience research. In the remainder of this paper, we review prior efforts to organize GIS contents and then contrast these with our approach to redesign a GIS course around core concepts. We discuss the connection between teaching and research demonstrated by the hands-on activities that accompanied the course and conclude with open research questions for GIS pedagogy stimulated by the experience of designing the course.

What does a GIS contain?

The question of what a GIS contains has not yet been explicitly answered at a level above technicalities and below generalities; one could try to infer answers from the organization of GIS courses, academic textbooks, or efforts in the GIScience community to structure knowledge about GIS. These efforts have been largely designed to inventory knowledge and skills gathered from leading scholars and professionals in the field rather than develop comprehensive theories of geographic information; thus, they do not explain what distinguishes geographic information (GI) from other types of information and how GI can be organized in its own right. The NCGIA Core Curriculum (Kemp and Goodchild, 1991) and the UCGIS Body of Knowledge (GIS&T BoK) (DeMers, 2009) are the main efforts to organize and define a GIS curriculum. Rather than defining what GIS contain, they take stock of the concepts needed to understand and apply what GIS do, ranging from mathematical to social aspects. The Core Curriculum and GIS&T BoK frameworks also perpetuate the dual identity of GIS as both a tool and a science by blending tool-centric concepts (e.g., "hardware system software", "raster data structure") with conceptual issues (e.g., "spatial objects and relationships") and operations (e.g., "vector data structures and algorithms"). As a consequence, GIS education has not always distinguished domain concepts from software concepts (Kemp et al., 1992). For a contrasting example, the discipline of statistics clearly distinguishes many of its core concepts (e.g., "probabilities", "distributions") from software concepts or operations (e.g., plotting a histogram in SPSS). Similar remarks can be made about the nature and goals of popular GIS textbooks, which pragmatically organize contents at the data model level (Bolstad, 2005; Longley et al., 2015). On the other hand, theoretical frameworks from the academic literature have pursued more ambitious, unifying goals, including canonical representations for geographic information (Goodchild et al., 2007; Camara et al., 2014; Zhu et al., 2017), multi-level views of that information beyond current system implementations (Couclelis, 2010), and classifications of analysis functions (Albrecht, 1998).
While these frameworks provide insights into the nature of geographic information and computing, they still do not tell an undergraduate student or a colleague from another application domain about unifying concepts or what they can do with a GIS. In the absence of comprehensive theoretical frameworks that explain and organize GIS contents, many instructors organize training around software modules and system-level commands. A recent survey of over 300 university-level GIS courses found that vector analysis, data models, and data acquisition were the most common topics covered (Wikle and Fagin, 2014). Students are typically given step-by-step instructions on how to apply a particular GIS software package to a problem that has been fitted to its commands. While the learning outcomes of such courses may satisfy graduates and employers in the short run, the knowledge and skills acquired have a rather short half-life; computing paradigms change continuously. This style of learning also makes it hard for students to transfer their understanding to other products and problems. Conceptual frameworks have been proposed to address the knowledge transfer challenges that GIS students face. A recent review of research studies about GIS instruction found that constructivist approaches help students develop technical competence through experiential, hands-on projects (Schulze, 2020). Howarth and Sinton (2011) propose a framework that sequences spatial concepts and combines problem-based learning with cognitive load theory to scaffold student learning. Srivastava and Tait (2012) define threshold concepts for GIS instruction to inform course design. Other efforts to support GIS usability have focused on reordering functions, for example common GIS operations in toolboxes (Gao and Goodchild, 2013). These approaches derive concepts from the GIS&T Body of Knowledge and, as such, do not always distinguish representational spatial concepts (e.g., location, distance, hierarchy) from analytical concepts (e.g., extraction by buffer). Rather than seeking a canonical form of geographic information or reorganizing system-level commands, our approach to instruction defines a high-level view of GIS contents, allowing users to specify their application perspectives. We relate information content to user questions rather than to the data formats and system commands that dominate the current image of GIS teaching (Vahedi et al., 2016).

Core concepts in GIS

The core concepts of spatial information (Kuhn, 2012) offer a means of relating questions to the content of spatial information. They provide a high-level vocabulary for spatial thinking and computing, which can be used to ask and answer questions about phenomena in space and time. The concepts are meant to be generic enough to be applicable to geographic, as well as other, spaces. They comprise a base concept of location, four concepts of information content, and three concepts of information quality, which are metainformation concepts applicable to all content concepts and their combinations (Table 1). Each core concept comes with "threshold concepts" (Land et al., 2016), which offer learners transformational insights, or ways of seeing, into an application domain. The following is a brief characterization of each concept in view of its use in GIS instruction. Instruction begins with the base concept of location, which allows students to ask "where" questions.
The first threshold concept that students encounter is the idea that location is a spatial relation between a figure and a ground (Talmy, 1983). Following this, students are presented with a series of content concepts: field, object, network, and event. The concept of location is foundational to the field concept, as students learn that fields express values at positions over a given domain. Students learn that core concepts of spatial information are ways of viewing the world. They select, and in some cases interchange, core concepts of spatial information to produce a desired view of a spatial problem; for instance, students learn that geographic phenomena, such as land cover, can be conceptualized and analyzed as a field or as a set of objects (e.g., discrete parcels of land with attributes). The network concept builds on the object concept, answering questions about connectivity. Students conceptualize events using any combination of core concepts as "participants" (e.g., a rainfall-induced traffic event involves a set of participants including cars as objects, a road network along which they can be located, and a field of precipitation). Finally, students are prompted to reflect on their own learning process as they interrogate the quality of geographic information. The concepts of information granularity, accuracy, and provenance are examined in relation to each of the core content concepts. Using the concept of granularity, students understand that in some cases they can refine their answers to previous questions when appropriate, but that overly specific answers can be less accurate. Students use the concept of provenance to interrogate their own understanding of how they derive answers from processed information and whether they can trust those answers. For more background information on the core concepts of spatial information, including the definitions used in our teaching program, see Kuhn and Ballatore (2015). The contrast between the field and object views is illustrated in code in the sketch below.
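To make the field/object contrast concrete for readers who program, here is a small, hypothetical Python sketch in the spirit of the course. It is not taken from the course materials or from the implementation in Kuhn and Ballatore (2015); all names and values are invented.

from dataclasses import dataclass

def land_cover_field(x: float, y: float) -> str:
    """Field view: every position in the domain has a land-cover value."""
    return "forest" if (x + y) % 2 < 1 else "grassland"   # toy classifier

@dataclass
class Parcel:
    """Object view: a discrete thing with identity, location and attributes."""
    parcel_id: int
    centroid: tuple[float, float]
    cover: str
    area_ha: float

parcels = [
    Parcel(1, (0.2, 0.3), "forest", 12.5),
    Parcel(2, (1.4, 0.9), "grassland", 8.0),
]

# The same "where" question answered in both views:
print(land_cover_field(0.2, 0.3))                             # field: value at a position
print([p.parcel_id for p in parcels if p.cover == "forest"])  # objects: query by attribute

The point of the sketch is that the phenomenon is the same; what changes is the conceptualization: a function from positions to values versus a collection of identifiable things with attributes.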
Designing a concept-based GIS course

Concept-based instruction defines the foundational language that learners must acquire (Erickson, 2007). In this view, the central challenge of teaching GIS is articulating an appropriate set of concepts for GIS learners to acquire (Srivastava and Tait, 2012). Problem-based learning (PBL) is a complementary approach, which balances theory and application in designing activities to teach GIS concepts. With PBL, GIS learners actively apply concepts to solve real-world problems, often in consultation with a client (Keßler et al., 2018). We structure an introductory GIS course around core concepts of spatial information (Kuhn, 2012) by focusing learning activities on software-independent foundational concepts. The course activities motivate students to use core concepts of spatial information to solve realistic problems; students are prompted to ask and answer questions about the real world in ways that underlie GIS software but are established independently of it. The course was first designed in the fall of 2014 and has been updated every year until 2019. It is the first part of a series introducing undergraduate university students to GIS at UCSB. Most students pursuing a geography degree take this course, while those looking to become geographic information scientists take the yearlong series, which concludes with a capstone project addressing a real-world problem. Previous versions of the course offered a pragmatic approach to learning the history and techniques of GIS. Students were taught how to employ methods on generic types of geographic data. They learned a set of analytical techniques that would help them operate GIS software at a more advanced level than their untrained peers. Labs provided step-by-step instructions exclusively with ArcGIS.

The general principle we followed was to teach GIS by asking questions about spatio-temporal concepts, rather than the reverse (i.e., hoping to arrive at sensible questions by teaching software commands). A running example frequently mentioned in class involved the assessment of local solar energy potential with the goal of planning solar panel installations. Case studies showed how each concept was represented in data models and handled through typical operations on data in a GIS. The weekly lectures related each core concept of spatial information to student experiences, discussing the views students held previously and showing possible shortcomings or misconceptions. Students submitted weekly questions in lectures, highlighting conceptual gaps in understanding. Refinements were made to the companion text for the course based on students' questions, which illustrated important ways of thinking that challenged the research.

Software and data availability

A description of the course version offered in 2017, including activities, lab materials, and data, is available online (https://github.com/saralafia/geog-gis-176). The lab assignments involved many tools, including web services, mobile apps, and online games. The lab tasks exemplified key questions around the core concepts without requiring students to engage much with software. The assignments encouraged students to explore and solve several problems around each concept as it occurs in a typical application area. The location, field, object, and event labs all required students to ask and answer questions about the university campus. For the field and object labs, the teaching assistants worked with the university library to curate a digital elevation model and building footprints of university buildings for students to use. In the network lab, the teaching assistants created a custom network dataset for the university campus from OpenStreetMap that included pedestrian walkways and bicycle trails. The event lab used data from campus administration on connections made to internet routers; students were excited to work with this event data when they realized it could serve as a proxy for the number of people on campus at any given time. Each of the content labs added a conceptual layer to the students' GIS so they could understand how concepts could be used together; for example, students joined the internet connectivity data to the building footprints so they could visualize busy places on campus (e.g., large lecture halls during final exams). For the quality concept labs, students revisited data from earlier assignments to examine its granularity and accuracy, reinforcing the idea that the core concepts build upon each other.

Course delivery

One week of lectures and labs was devoted to each of the core concepts, spanning a quarter of instruction. In a semester system, allotting two weeks to each concept would be an attractive option, allowing for more discussions of applications, modeling approaches, and GIS projects. The weekly lectures followed a similar pattern for each concept.
The course was deliberately sequenced to follow the order in which the concepts are presented in Section 3, beginning with the idea of location information, introducing the content concepts of field, object, network, and event, and concluding with the information quality concepts of granularity, accuracy, and provenance. The first lecture of each week related the core concept to past experiences of students, discussing the views students held previously, showing possible shortcomings or misconceptions, and illustrating the concept. Between the class meetings, students read a concise text on the concept and were encouraged to post questions of understanding in an online course forum, which were then discussed in the second lecture. The second lecture showed how GIS handle the concept in terms of data models and typical operations on the data, as well as applications. The examples focused mainly on the university campus, an experience shared by all students.

Illustration of a concept-based lab

Rather than guiding students through workflows, the labs encouraged students to ask and answer questions with a GIS. Students were given the following prompt: "Campus administration is interested in finding optimal locations for installing solar panels. Using your previous knowledge of location and fields and your new knowledge of objects, determine the best rooftops for installation." The data that students were given included a digital elevation model of terrain and a layer of university building footprints. Below are examples of questions from the activity and responses given by students, updated to reflect standard terminology:

• What is the location of a rooftop that might be suitable for solar panel installation? Use a spatial relation that holds between the rooftop and its surroundings (Figure 1).

• Why aren't rooftops with varying heights optimal for solar panel installation? Remember that elevation is often understood as a field, which is most commonly represented as raster data (Figure 2).

Figure 2. "I selected only those rooftops with a suitable shape, elevation values, slope, and aspect. I translated these criteria into a query for buildings with these attributes."

• Which buildings on campus would be suitable for solar panel installation based on all of the variables you have investigated? Remember that buildings are modeled as objects with attributes that we can query (Figure 3).

Course activities guided students toward a high-level understanding of spatial computing, independent of software commands, but immediately applicable in the labs thanks to the focus on questions organized by core concepts and answered through GIS commands. Thus, students learn about spatial analyses through conceptualizations of geographic environments before they discover the GIS commands and data models required to perform the analyses. This sequence appears more desirable for an introductory GIS course than starting with the organization of the commands in a particular GIS. The core concepts of spatial information offer learners a language for asking questions about the world and answering them using any GIS; a compact sketch of the lab's field-to-object reasoning is given below.
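The lab's reasoning can be mirrored outside any particular GIS. The following hypothetical Python sketch (the data, thresholds, and names are invented, not the assignment's solution code) derives slope from a toy elevation field and then queries building objects by attribute, the same field-to-object sequence students follow:

import numpy as np

elevation = np.array([[10.0, 10.5, 11.0],
                      [10.2, 10.6, 12.0],
                      [10.1, 10.4, 13.5]])       # toy DEM (field as raster), metres,
                                                 # assuming a 1 m cell spacing

dy, dx = np.gradient(elevation)                  # finite-difference slope components
slope = np.degrees(np.arctan(np.hypot(dx, dy)))  # slope of the terrain field, degrees

buildings = [                                    # objects with identity and attributes
    {"name": "Library", "cell": (0, 0), "roof_area_m2": 900},
    {"name": "Lecture Hall", "cell": (2, 2), "roof_area_m2": 400},
]

suitable = [b["name"] for b in buildings
            if slope[b["cell"]] < 10.0 and b["roof_area_m2"] > 500]
print(suitable)    # buildings whose rooftop cells satisfy the query criteria

The field operation (computing slope everywhere) and the object query (filtering buildings by attribute) are exactly the two conceptual moves the lab asks for, regardless of which GIS executes them.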
Student impressions of the course

At the conclusion of an early version of the course in 2016, we administered a survey to gauge student impressions; 66 of the enrolled 72 students participated. When asked what they liked most about lab activities, more than half of the students cited software training as a benefit, despite our deliberate attempt to diminish the emphasis on software use and emphasize problem-solving as a means of teaching GIS. Students also saw value in applying concepts to practical problems as part of the lab assignments, suggesting that activities succeeded in relating concepts to practical GIS analyses. Many students felt that "the pace of the labs was too fast" and that "too much content was covered". This may be attributed to students learning how to use software and trying to solve an application problem at the same time. Some also remarked that switching tools and software between labs was challenging because they did not have previous experience and felt that they were expected to "pick everything up right away". Other complaints included the "lack of written and detailed instructions" in the labs. We expected that students would be willing to apply the software to problem-solving, as they would in an experimental laboratory course; yet students still requested step-by-step guidance. The majority of students (68%) favored the network applications, suggesting that they may be the most intuitive to conceptualize. Student explanations included "I feel like the network section can be applied to a lot of scenarios... I like seeing how things are connected and those connections are measured", as well as "I liked it... because [the lab] was relatable and the instructions were clear". Students' second favorite concept application area was events (16%), as they were interested in dynamic mapping.

Following this survey, we redesigned all the labs to facilitate mapping questions to software commands. We reduced the workload per lab and the pace of instruction. Despite students' stated preference for learning software, instructional time remained focused on understanding a problem before learning how to navigate software menus. Informal assessment strategies, such as ungraded quizzes, have occasionally been used in class to help students anticipate and address their conceptual gaps as the course progresses, rather than relying on a single survey at the conclusion of the course. These assessments need to become much more frequent in order to provide more insight into how concept-based GIS teaching enables learners to transfer knowledge to application problems and other software products.

Conclusion

Our goal was to address limitations that students face when learning GIS from a traditional software command perspective, such as the long-term retention of methodological knowledge and the transfer of technical skills. We proposed an alternative teaching strategy to mitigate these challenges by allowing students to learn GIS from an information content perspective. We tested and refined this approach in several iterations of an undergraduate-level introductory GIS course. We then evaluated our approach by conducting a student survey and reflecting on lessons learned from a teaching perspective. Our experience suggests that students can start learning GIS from a content, as opposed to software command, perspective. Most core concepts turned out to be relatively easy for new learners to acquire, since they relate directly to daily experiences in situations like wayfinding, interpreting weather reports, or using social networking tools.
A rigorous focus on the core questions that a GIS helps answer encourages an understanding of GIS at a level above software menus, although this understanding is not what many students expect to gain in an introductory course. The main difficulty with designing labs was to develop unambiguous yet challenging prompts; questions that were too specific did not provoke enough critical thought, and questions that were too vague inhibited students from reaching a common understanding. Students also struggled to grasp the ways that core concepts interact and guide applications, indicating opportunities to structure future versions of the course around a few in-depth case studies. We found that illustrating concepts through relevant applications engaged students with diverse interests. The main obstacle to student satisfaction with the course was their expectation of learning a particular software package. Prioritizing question asking over button pushing challenged students to experiment and discover, something that many did not expect. We underestimated the difficulties associated with designing and guiding students through "hands-on" assignments that encouraged critical spatial thinking rather than recipe-driven exercises for software commands. With "big spatio-temporal data" and "data science" entering so many human endeavors, a focus on information content rather than software commands appears more justified than ever, but remains difficult to convey. Calls to develop students' "critical spatial thinking" also emphasize the conceptual value of GIS education rather than positioning it simply as tool training (Bearman et al., 2016). A UCGIS survey of instructors who adapted their courses for remote learning between September and December 2020 highlighted opportunities for innovation in GIS education (Bowlick and Shook, 2020). The survey found that teaching a primarily software-based curriculum remotely was challenging; instead, instructors who switched to web- and open-source GIS tools, combined with flipped classroom assignments where students worked collaboratively during class meetings, saw greater satisfaction. We also anticipate other trends, such as the inclusion of the spatial sciences in emerging data science curricula. Efforts to incorporate GIS into the data science curriculum also require a vocabulary of spatial computing that is clear and powerful (Rey et al., 2020). We believe that providing interdisciplinary learners with core concepts that support spatial questions is a first step toward expanding access to GIS-like functionality in any form. As research solidifies the core concepts, the literature and didactic take-up are gradually improving, and a book addressing the life-long learning needs of professionals as well as graduate students is in preparation. In a future form of practice, the core concepts are envisioned to serve as bridges between analysts' questions or hypotheses and the rapidly growing variety of GIS models, tools, and workflows (Scheider et al., 2017). The core concepts have also been used to evaluate students' spatial thinking, for example offering a taxonomy to describe features observed when interpreting thematic maps (Ishikawa, 2016). At a more general didactic level, the core concepts offer constructive alignment by segmenting course content into discrete stages (Etherington, 2016). Our teaching and research efforts continue to pursue core concepts of spatial information that support spatial questions and answers through GIS queries and workflows.
Equity Risk Premium in ASEAN: Empirical Analysis on Its Puzzle and the Impact of the 2008 Financial Crisis

This paper investigates the equity risk premium of six major ASEAN member countries, Indonesia, Malaysia, the Philippines, Singapore, Thailand, and Vietnam, chosen on the basis of their stock market development and data availability. It focuses on two main issues: the intriguing question of whether an equity premium puzzle exists, and the impact of the 2008 financial crisis on the trend of the equity risk premium and on the risk-aversion attitude of ASEAN investors. Three methods are utilized: (1) the basic consumption model of Mehra and Prescott (1985) and the simplified model of Ni (2006); (2) calibration (Campbell, 2003); and (3) GMM estimation (Hansen, 1982). The calibration results suggest that the puzzle exists in Indonesia, and that it seems to lie in the negative covariance between the consumption growth rate and the average real stock return. Applying GMM in three sub-sample analyses, before, after, and excluding 2008, shows that the financial crisis did not much affect the value of risk aversion, but it undeniably had a profound effect on the behavior of the equity risk premium. It can also be inferred that after the crisis, ASEAN investors tend to exhibit decreasing relative risk aversion and prefer to have happiness tomorrow rather than today.

Background of the Study

The equity risk premium (ERP), which refers to the difference between returns on stocks and bonds, plays an important role in financial markets. It is a central component of every risk-return model in finance and a key input in projecting the costs of equity and capital in corporate finance and valuation.

Most studies of equity risk premiums focus on developed countries such as the US, UK, Germany, and France; very few focus on developing-country data. The growing economy of the Association of South East Asian Nations (ASEAN) is one of the well-known international associations aiming at integration to accelerate economic growth among member countries, and its potential contribution as one of the efficient markets in the world should also be given attention. The present study focuses on six major members, Indonesia, Malaysia, Singapore, Thailand, the Philippines, and Vietnam, selected on the basis of their stock market development and data availability; these six countries already comprise almost 95% of ASEAN GDP (2012 World Bank data).
Given the importance of equity risk premiums in financial markets, this paper focuses on two issues involving them in the six major ASEAN countries. The first is the intriguing issue of the existence of the equity premium puzzle, the phenomenon that observed returns in the stock market are much higher than yields in the bond market. Mehra and Prescott (1985) argued that observed equity risk premiums are not consistent with conventional financial theories, estimated that historical premiums of about 6% were too high, and termed this phenomenon the equity risk premium puzzle (ERPP). They suggested investors would have to have implausibly high risk aversion to justify these premiums. Many studies have attempted to explain this puzzle on both risk and non-risk grounds. This study first aims to find out whether the puzzle also exists in the six main ASEAN members. The second issue is the analysis of the behavior of the equity risk premiums among the countries, eventually employing sub-sample analyses to determine the possible impact of the 2008 financial crisis.

For the first part, three methods are used to test the existence of the puzzle: the basic and simplified models of Mehra and Prescott (1985) and Ni (2006); the calibration model (Campbell, 2003); and GMM estimation (Hansen, 1982), implemented in EViews. Other studies use only up to two methods, typically the basic model and the calibration method; this paper also includes the GMM method.

The second part concerns the behavior of the equity risk premiums of the ASEAN countries, which show dramatic drops in 1999 and 2008, leading us to analyze the possible impact of financial crises on risk aversion and whether a crisis might resolve the puzzle. However, due to a lack of data availability, only the 2008 financial crisis is the focus of the second part.

Objectives of the Study

The existence of the equity premium puzzle has been one of the most interesting topics in financial studies since Mehra and Prescott (1985) coined the term after observing that equity risk premiums are not consistent with conventional financial theories and that estimated historical premiums of about 6% were too high. They suggested investors would need implausibly high risk aversion to validate these premiums. Many researchers have sought rational reasons for its occurrence, based on both risk and non-risk explanations. In this study, to determine whether the puzzle exists in the six major countries, the Mehra and Prescott criterion of a risk aversion greater than 10 is applied: if a country's estimated risk aversion turns out to be higher than 10, that country is considered to exhibit the equity premium puzzle. Thus, the first main objective of this paper is the following.
To determine the existence of the equity premium puzzle in the six major members of the Association of South East Asian Nations (ASEAN)

Many researchers have exerted efforts to explain the existence of the equity premium puzzle, and many explanations have been suggested. Potential explanations involve consumption-based generalized expected utility models, as suggested by Epstein & Zin (1991), Constantinides (1990), Abel (1990), and Campbell (1999); the additional risk provided by rare and disastrous events (Rietz 1988; Barro 2006); and idiosyncratic income shocks (Constantinides & Duffie 1996; Krebs 2000). Other possible factors are liquidity limitations (Bansal & Coleman 1996), borrowing constraints (Constantinides, Donaldson & Mehra 2002), and tax reasons (McGrattan & Prescott 2003). It has also been shown that an industry group's higher risk ought to lead to higher equity risk premiums (Athanassakos, 1998). Behavioral-finance variables have also been proposed, most remarkably aversion to ambiguity (Chen & Epstein 2002; Barillas, Hansen & Sargent 2009; Gollier 2011; Rieger & Wang 2012) and myopic loss aversion (Benartzi & Thaler 1995; Barberis & Huang 2008). Furthermore, international evidence has revealed a relevant relation between historical equity premiums and discount factors, supporting the myopic loss aversion explanation of the puzzle: greater historical equity risk premiums are noticeable in countries where participants are more short-term than long-term oriented (Rieger, Wang and Hens, 2013). However, among these studies, no single factor can clarify the puzzle; it is still a puzzle. In his study, Cochrane (2011) specifies that variation in discount rates might be the reason why this puzzle occurs, since even traditional finance theories cannot explain why the discount rate varies so much more than expected; the theories are in their infancy. As Cochrane states, the variation of the discount rate seems to be a cause of many small puzzles in finance, one of which is the equity premium puzzle. As the models and traditional theories of finance have failed to explain this phenomenon, this study of the ASEAN countries examines the discount rate as a possible and potential factor in the existence of the equity premium puzzle. This paper integrates the consumption-wealth model and the discount factor, the latter representing marginal utility; that is, this factor captures the willingness of ASEAN investors to accept risk for the sake of additional satisfaction from the future returns of their portfolios. As many studies have offered different explanations for the occurrence of this puzzle, it is also important to find out why this phenomenon could occur in the ASEAN countries.
To find out a possible explanation of the puzzle in ASEAN countries

Based on the neoclassical theory of interest of Irving Fisher, the interest rate determines the relative price of present and future consumption. In relating the levels of present and future consumption, it is important to define the marginal rate of substitution between consumption at the two points in time. These two rates must essentially be equal, and this equilibrium is conveyed approximately by the relative prices of the two consumptions. The subjective discount factor determines whether investors prefer to have satisfaction today or in the future. If it is greater than one, it denotes a utility preference for tomorrow rather than today; if it lies between zero and one, it is consistent with the conventional financial theory that most investors prefer utility now. A recent study by Neupane (2013) found that Nepalese investors have a preference for the present rather than the future, as the estimated subjective discount factor is between zero and one. On the other hand, the study of Ni (2006) on the Chinese stock market suggests that Chinese people tend to invest more for the future than for the present, as the estimated beta is greater than one. Since it is important to determine what kind of investors the ASEAN countries have, the following sub-objective has been drawn.

To determine if ASEAN investors prefer happiness today or tomorrow

Equity risk premiums are a relevant part of every risk-return model in finance and are key in measuring the costs of equity and capital in valuation and corporate finance. Most previous studies claim that equity risk premiums in emerging countries are higher than in developed countries (Salomons and Grootveld). In time series analysis, it is necessary to test the stationarity of the data; if the data are not stationary, the results may amount to spurious regression. This can first be examined by studying the trend or behavior of the data. Most recent studies claim that the equity risk premium varies over time. In a Northfield Asia Research seminar in Hong Kong, Yamaguchi (2013) discussed how the equity risk premium of Japan varies slowly over time, based on stock market data from 1980 to 2012: the trend of the equity risk premium tends to persist from several years up to a decade, and the ERP may be mean-reverting in the long run, over an investment horizon of a few years. Thus, the second main objective of this paper concerns the trend of the equity risk premium among the ASEAN countries:

To analyze the behavior or trend of the equity risk premium of the sample countries

If, after this trend analysis, an outlier or sudden drop in the data appears, sub-sample analysis is necessary. Sub-sampling can mean different things in different fields, but in statistics and business it denotes estimating features of a whole population from a subset of individuals within it. In this paper, it refers mainly to examining the existing data of the ASEAN countries and introducing a structural break to perform the sub-sample analyses. This determines whether there are differences after removing the part that makes the series non-stationary, which may lead to spurious regression and irrational results.
Thus the sub-objective is the following:

To figure out if sub-sample analysis is necessary

Another issue is the potential impact of the financial crises on the equity risk premium. After the sub-sample analyses, which are motivated precisely by this possible impact, the same GMM procedure is executed to determine whether there is a variation in the value of risk aversion and the subjective discount factor. If a big change occurs, this would suggest an impact of the financial crisis on both the risk aversion and the beta; otherwise, no influence on them. Hence, the second sub-objective of this part is the following:

To determine the impact of the 2008 financial crisis on risk aversion and the subjective discount factor

Research Data

The research data used in this study are for the six major ASEAN members. Table 1 presents a summary of the data collection, listing the stock index code for each sample country, the period 1995-2015, and the source, which is mostly PSALM (Power Sector Assets and Liabilities Management Corporation). The stock index of each country is an aggregate value produced by combining several stocks or other investment vehicles and expressing their total value against a base value from a specific date. From these index values, the stock market return is calculated. The raw index return from t to t+1 is a nominal return, so it is adjusted by the inflation factor. Theoretically, dividends should be incorporated in calculating the market return, but they are not considered here: a statistical lag is used and the dividend return over the same period has not been included, so it is assumed that no dividend was paid within the sample period. The overall impact of dividend payments is believed to be negligible since it would have been very small. The real return on equity is calculated as

R_{t+1} = I_{t+1} / (I_t · CPI_{t+1}) − 1,

where R_{t+1} is the real rate of return on equity in period t+1, I_{t+1} and I_t are the index values at times t+1 and t, and CPI_{t+1} is the inflation deflator for period t+1.

Moreover, the risk-free rate is quoted as a nominal rate and also needs to be converted into a real annual risk-free rate of return:

RF_{t+1} = (1 + rf_{t+1}) / (CPI_{t+1} / CPI_t) − 1,

where rf_{t+1} is the nominal rate of return, CPI_{t+1} and CPI_t are the consumer price index at times t+1 and t, and RF_{t+1} is the real risk-free rate of return.

The household consumption expenditure data refer to the market value of all goods and services, such as durable products, bought by households; the study focuses on consumption per capita indices. The series is adjusted into real terms by dividing by the consumer price index (consumption deflator series), so that real consumption growth is

x_{t+1} = (V_{t+1} / CPI_{t+1}) / (V_t / CPI_t),

where V_{t+1} and V_t are the consumption values in periods t+1 and t in absolute terms and CPI_{t+1} is the inflation deflator for period t+1.

Moreover, another data set is gathered for the GMM estimation used as the third methodology. Stock return data from 1995-2015 for the top companies in each country are collected (see Appendix A). To calculate the real stock returns of these firms, the last traded price of each company and the dividends are collected, and the real total return is computed as

R_{t+1} = ((P_{t+1} + D_{t+1}) / P_t) / (CPI_{t+1} / CPI_t) − 1.

This is employed to get the real stock returns of the top companies of each country; a minimal numerical sketch of these transformations is given below.
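The following short Python sketch implements the three deflation steps above on made-up placeholder series (the numbers are not the paper's data):

import numpy as np

I   = np.array([100.0, 112.0, 121.0])   # stock index levels I_t
CPI = np.array([1.00, 1.04, 1.07])      # consumer price index (deflator)
rf  = np.array([0.05, 0.045])           # nominal risk-free rates for t -> t+1
V   = np.array([50.0, 54.0, 57.5])      # nominal consumption per capita V_t

R_real  = (I[1:] / I[:-1]) / (CPI[1:] / CPI[:-1]) - 1   # real return on equity
RF_real = (1 + rf) / (CPI[1:] / CPI[:-1]) - 1           # real risk-free rate
x       = (V[1:] / CPI[1:]) / (V[:-1] / CPI[:-1])       # gross real consumption growth

print(R_real, RF_real, x)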
Basic Model

The methodology used in this study follows Mehra and Prescott (1985) and Neupane (2013): the basic model is derived from the consumption-based model of the former, while its simplification follows the latter. It is assumed that all investors in the market have an identical endowment process and do not focus much attention on risk management. The representative investor maximizes expected utility over an infinite time horizon:

max E_0 [ Σ_{t=0}^{∞} β^t U(c_t) ],   0 < β < 1,

where c_t is consumption and β is the subjective discount factor; a higher β means that investors prefer to have satisfaction in the future. The utility function is the usual constant relative risk aversion form,

U(c, α) = c^{1−α} / (1 − α),

where α is the parameter measuring the risk aversion of investors and, as the theory suggests, 0 < α < ∞.

The basic consumption capital asset pricing model of Hansen and Singleton (1983) is also considered. Taking logarithms of the first-order conditions under joint lognormality and adding the contribution of wealth, the pricing relation can be written in terms of the covariance between consumption growth and the equity return:

E[r_{e,t+1}] − r_{f,t+1} + σ_e²/2 = α · cov(Δln c_{t+1}, r_{e,t+1}).

Rearranging gives the implied coefficient of relative risk aversion:

α = ( E[r_e] − r_f + σ_e²/2 ) / cov(Δln c, r_e).

The basic model is employed to investigate the existence of the equity premium puzzle.

Furthermore, the simplified model derived by Ni (2006) is also applied in this analysis. It assumes that consumption growth x_{t+1} = c_{t+1}/c_t is independent and identically distributed and that its covariance with the risk-free rate is zero, so that ln x_{t+1} ~ N(μ, σ²). Taking logarithms of the Euler equation for the risk-free asset gives

ln R_f = −ln β + α μ − (α²/2) σ²,

and the corresponding equation for the logarithmic return on equity yields the premium

ln E[R_e] − ln R_f = α · cov(ln x, ln R_e).

These equations relate the risk-free rate of return, the return on equity, the relative degree of risk aversion, and the subjective discount factor: all values can be solved if at least two of these four variables are known. A worked numerical sketch of this solution follows below.
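As a quick illustration of how the simplified relations pin down α and β, the following sketch plugs made-up sample moments (not the paper's estimates) into the two closed-form equations above:

import numpy as np

# Hypothetical annual log moments, for illustration only:
rf  = 0.02    # log risk-free rate, ln R_f
re  = 0.08    # log expected equity return, ln E[R_e]
mu  = 0.03    # mean log consumption growth
var = 0.001   # variance of log consumption growth
cov = 0.002   # cov(log consumption growth, log equity return)

alpha = (re - rf) / cov                                  # from the premium equation
beta  = np.exp(alpha * mu - 0.5 * alpha**2 * var - rf)   # from the risk-free equation
print(f"alpha = {alpha:.2f}, beta = {beta:.3f}")
# alpha = 30 here: with smooth consumption (a tiny covariance), matching a
# 6-point premium requires implausibly high risk aversion, the puzzle in miniature.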
Calibration

Campbell (2003) follows the classic papers of Rubinstein (1976), Lucas (1978), Breeden (1979), Grossman and Shiller (1981), Mehra and Prescott (1985), and others on the equity premium puzzle. All of them assume the existence of a representative agent who maximizes a time-separable power utility function specified over aggregate consumption C_t:

U(C_t) = ( C_t^{1−γ} − 1 ) / (1 − γ),

where γ is the coefficient of relative risk aversion. This utility function has some relevant properties. One is that risk premia do not vary over time as aggregate wealth and the scale of the economy grow. Moreover, under power utility the elasticity of intertemporal substitution is simply the reciprocal of the relative risk aversion. The function also assumes that aggregate consumption is conditionally lognormal. For expositional convenience, the log stochastic discount factor is then

m_{t+1} = ln β − γ Δc_{t+1},  where Δc_{t+1} = ln(C_{t+1}/C_t),

with σ_c² the unconditional variance of log consumption innovations and σ_ic the unconditional covariance of asset-return and consumption innovations. The riskless real rate is then linear in predicted consumption growth,

r_{f,t+1} = −ln β + γ E_t[Δc_{t+1}] − (γ²/2) σ_c²,

with a slope coefficient equal to the coefficient of relative risk aversion; the negative effect of the conditional variance of consumption growth on the riskless rate is the precautionary savings effect.

Finally, to address the equity premium puzzle, the log risk premium on any asset is calculated as the product of the relative risk aversion and the covariance between the asset return and consumption growth:

E[r_{i,t+1}] − r_{f,t+1} + σ_i²/2 = γ σ_ic = γ ρ_ic σ_i σ_c.

To compute the covariance, three values are needed: the standard deviation of the log excess stock returns, the standard deviation of consumption growth, and the correlation of these two variables. The equation says that an asset whose return covaries strongly with consumption pays off poorly when consumption is low; thus, a large risk premium is required for such a risky asset.

GMM Estimation

The Generalized Method of Moments (GMM), first developed by Hansen (1982), is an extension of the classical method of moments. Estimation is based upon imposed moment restrictions and here aims to estimate the values of α and β. It is widely used by financial economists when the number of moment conditions exceeds the number of parameters, i.e. the model is over-identified, unlike the traditional method of moments, which is exactly identified because the numbers of parameters and sample moments are equal.

Here there are up to 11 moment conditions in total (the risk-free rate and the real returns of the top companies of each sample country) and two parameters, the subjective discount factor and the risk aversion. We employ cross-sectional GMM, and the J-statistic is used to verify over-identification.

Formally, let b be the K×1 parameter vector and suppose there are n moment conditions with n > K. Let g(b) denote the n×1 vector of sample moment conditions. Hansen (1982) suggests estimating b by minimizing the quadratic form

J(b) = g(b)′ W g(b),

where W is a weighting matrix, so the first-order conditions are

[∂g(b)/∂b]′ W g(b) = 0.

In the power utility case we seek the values of β and α, and the main estimating equation is the consumption Euler equation

E[ β (c_{t+1}/c_t)^{−α} R_{i,t+1} − 1 ] = 0,

where β is the subjective discount factor, α the risk aversion, c_{t+1}/c_t the rate of consumption growth, and R_{i,t+1} the gross real return on stocks or the risk-free asset, computed from prices P_t and dividends as R_{i,t+1} = (P_{t+1} + D_{t+1}) / P_t. Stacking these conditions over the assets gives the moment vector used in estimation.

As mentioned above, the real stock returns of the top companies of each country are used (see Appendix A). The GMM estimation is conducted in EViews, using 2SLS coefficient estimates with GMM standard errors. The same method is applied to test the impact of the 2008 financial crisis on risk aversion and the subjective discount factor. A sketch of this estimation on simulated data follows.
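To make the estimation concrete, here is a compact, hypothetical two-step GMM sketch in Python on simulated data. It is not the EViews procedure used in the study; the moment count, starting values, and data are placeholders.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, N = 20, 5                                   # years; assets (e.g., top firms)
x = 1.03 + 0.02 * rng.standard_normal(T)       # gross consumption growth (simulated)
R = 1.06 + 0.15 * rng.standard_normal((T, N))  # gross real returns (simulated)

def moments(params):
    beta, alpha = params
    # Euler-equation pricing errors: beta * x^{-alpha} * R - 1, one column per asset
    return beta * (x[:, None] ** (-alpha)) * R - 1.0

def J(params, W):
    g = moments(params).mean(axis=0)           # N sample moment conditions
    return T * g @ W @ g                       # GMM quadratic form

# Step 1: identity weighting; Step 2: inverse of the moment covariance.
step1 = minimize(J, x0=[0.95, 2.0], args=(np.eye(N),), method="Nelder-Mead")
S = np.cov(moments(step1.x).T)                 # covariance of moments (no lags here)
step2 = minimize(J, x0=step1.x, args=(np.linalg.inv(S),), method="Nelder-Mead")
print("beta, alpha =", step2.x, "; J-stat =", step2.fun)
# With N = 5 moments and 2 parameters, the model is over-identified and the
# J-statistic is asymptotically chi-squared with N - 2 degrees of freedom.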
Basic Model's Empirical Results

Table 2 presents the summary of the empirical results for the six major ASEAN members using the basic model. In all sample countries, real returns on equity are higher than the real risk-free rate. The consumption growth rate is positive everywhere; Malaysia has the highest log consumption growth rate, followed by Thailand, Singapore, Indonesia, Vietnam and the Philippines respectively. The covariance of the consumption growth rate with the real return on equity carries different signs across the sample countries: Singapore and Vietnam show a positive sign, while the other countries carry a negative sign.

Using the Mehra and Prescott (1985) basic consumption model, the risk aversion of all six ASEAN members is less than 10, as shown in Table 3. This suggests that the equity premium puzzle does not exist in any of these Asian countries. Indonesia, Malaysia, the Philippines and Thailand have a negative alpha, which indicates that Indonesians, Malaysians, Filipinos and Thais are risk-lovers. This value of their risk aversion is inconsistent with the conventional view that common investors are risk-averse. The reason behind this result is the negative value of the covariance between the real return and the consumption growth rate (Indonesia: -2.41990; Malaysia: -14.35658; Philippines: -0.09786; Thailand: -24.38136). Regarding the subjective discount factor, the values for Indonesia, Malaysia and the Philippines are consistent with fundamental financial theory, suggesting that investors in these countries prefer to have happiness today rather than tomorrow. However, Singapore, Thailand and Vietnam carry SDF values greater than one, indicating that investors in these countries prefer satisfaction in the future over the present.

Table 4 displays in its 7th column the results obtained when the correlation between the variance of the consumption growth rate and the variance of excess log stock returns is calibrated to one. This is in fact a counterfactual exercise, but it is considered a significant diagnostic test: it points out the extent to which the puzzle arises from the smoothness of consumption rather than from a low correlation with stock returns. After conducting the calibration, the equity premium puzzle occurs only in Indonesia, with a risk aversion value of 8.73533 in Table 5. According to Mehra and Prescott (1985), the value of alpha should exceed 10 to declare the incidence of the puzzle. However, our result is supported by the research of Cecchetti, Lam and Mark (1993) and Kocherlakota (1996), who suggest that values above 8 can also be taken to indicate the existence of the puzzle.
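The calibrated risk aversion in this diagnostic follows directly from the risk-premium relation above with the correlation set to one. The short sketch below illustrates the computation with placeholder moments, not the paper's data.

```python
import numpy as np

# Placeholder annual moments (illustrative, not the paper's data)
premium = 0.05    # mean log excess stock return plus sigma_r^2 / 2
sigma_r = 0.18    # s.d. of log excess stock returns
sigma_c = 0.02    # s.d. of log consumption growth
rho     = 0.25    # sample correlation between the two series

alpha_sample     = premium / (rho * sigma_r * sigma_c)   # using the sample correlation
alpha_calibrated = premium / (1.0 * sigma_r * sigma_c)   # counterfactual: rho set to 1

print(f"alpha with sample correlation: {alpha_sample:.1f}")     # 55.6
print(f"alpha with rho = 1:            {alpha_calibrated:.1f}") # 13.9
```

When even the rho = 1 value exceeds the threshold (10, or 8 under the weaker criterion), the puzzle is attributed to the smoothness of consumption itself rather than to its weak correlation with stock returns.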
GMM Estimation's Results and Analysis

(Note: due to missing data, cross-sectional GMM for Singapore is not estimated.)

It can be inferred from Table 6 that only the Philippines has a subjective discount factor, beta, greater than one. This denotes that, under the GMM method, Filipinos tend to invest more for the future than for the present. Indonesia, Thailand and Vietnam have betas below one, consistent with conventional financial theory. Only Malaysia shows a negative beta (-0.026761), suggesting that, among the ASEAN countries, investors in Malaysia tend to hold investments that move in the opposite direction from the stock market.

Regarding the risk aversion, all five countries appear to be risk-lovers, which is not congruent with traditional theories. The outcome suggests that these Asian investors take on higher risks in expecting higher returns on stocks. The reason behind the negative values is the same as that explained for the basic model's results: the covariance between the consumption growth rate and the excess returns on equity is negative.

Sub-sample Analyses on the Impact of the 2008 Financial Crisis

Figure 1, which plots the real equity risk premium of the ASEAN countries, shows sudden drops in 1999 and 2008, attributable to the financial crises of those periods. In this study, however, only 2008 is investigated, because data for several countries are unavailable for 1999, whereas all six major sample countries have data for 2008. To determine whether the 2008 financial crisis affected the risk aversion and the subjective discount factor of each country, GMM estimation is performed on sub-samples in three ways: first on the pre-crisis data, to find the values of risk aversion and SDF before the crisis; then on the post-crisis data; and finally on the data excluding 2008.

Table 7 shows that all risk aversion values of the ASEAN countries before the crisis are negative, indicating decreasing relative risk aversion of the investors. This implies that negative wealth shocks raise the risk premium investors require to hold risky assets. The subjective discount factor is negative for all major ASEAN members except Indonesia. Note that Singapore is absent from the GMM estimation owing to the unavailability of data on its top firms. Table 8, on the other hand, shows the values of risk aversion and beta after the 2008 financial crisis. Compared with Table 7, the risk aversion values in most countries become even more negative after the crisis. This indicates that ASEAN investors tend to be less risk-averse after the crisis and, facing negative wealth shocks, require a raised risk premium to hold risky assets, preferring happiness tomorrow over today. Moreover, for Indonesia and the Philippines the estimation involves the square root of a negative value, so no subjective discount factor can be produced. In all three analyses, the hypothesis that the crisis explains the equity premium puzzle is not supported, as all estimated values remain smaller than 10. Nevertheless, the impact of the crisis is notable: relative risk aversion is more negative after 2008 than before, while the subjective discount factor, mostly negative before 2008, turns positive afterwards. This implies that ASEAN investors tend to become more decreasingly relative risk-averse and to prefer utility in the future.
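The three sub-sample runs are straightforward to reproduce in outline. The sketch below, again on made-up data with illustrative column names, splits a yearly panel into before-2008, after-2008 and excluding-2008 sub-samples and solves two Euler conditions (stock and risk-free) exactly for beta and alpha on each split; it is a simplified stand-in for the full GMM runs reported in Tables 7-9.

```python
import numpy as np
import pandas as pd
from scipy.optimize import least_squares

# Hypothetical yearly panel; column names and numbers are illustrative only.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "year":   np.arange(1995, 2016),
    "growth": rng.normal(1.02, 0.02, 21),   # gross consumption growth
    "stock":  rng.normal(1.08, 0.15, 21),   # gross real stock return
    "rf":     rng.normal(1.02, 0.01, 21),   # gross risk-free return
})

subsamples = {
    "before_2008":    df[df.year < 2008],
    "after_2008":     df[df.year > 2008],
    "excluding_2008": df[df.year != 2008],
}

def euler_errors(params, sub):
    """Sample Euler-equation errors for the stock return and the risk-free rate."""
    beta, alpha = params
    sdf = beta * sub["growth"].to_numpy() ** (-alpha)
    return [np.mean(sdf * sub["stock"].to_numpy() - 1.0),
            np.mean(sdf * sub["rf"].to_numpy() - 1.0)]

for name, sub in subsamples.items():
    sol = least_squares(euler_errors, x0=[0.97, 2.0], args=(sub,))
    beta_hat, alpha_hat = sol.x
    print(f"{name}: beta = {beta_hat:.3f}, alpha = {alpha_hat:.3f}")
```

With two moment conditions and two parameters this variant is exactly identified, so it has no over-identifying restrictions to test; the paper's cross-sectional GMM with up to 11 moments additionally yields the J-statistic.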
Summary

The first major part of this research aims to determine whether the equity premium puzzle exists. Based on the empirical results, among the six major ASEAN countries the existence of the puzzle is proven only for Indonesia, using the calibration method after setting the correlation between excess log stock returns and consumption growth to one. This tests to what extent the puzzle arises from the smoothness of consumption rather than from a low correlation with the real stock return.

In the basic model, the subjective discount factors of Indonesia, Malaysia and the Philippines are consistent with basic financial theory, suggesting that investors in these countries prefer happiness today over tomorrow. However, Singapore, Thailand and Vietnam carry SDF values greater than one, signifying that investors there prefer future satisfaction over the present. The SDF produced by GMM gives a different result, revealing that among the ASEAN countries only the Philippines shows a preference for satisfaction tomorrow over today. The main reason for this difference is the different data used in the third method of the first part, namely the top firms of each sample country.

Using the other methods, however, it is difficult to conclude whether the puzzle exists in the ASEAN countries, as implied by the basic models of Mehra and Prescott and of Ni (2006) and by the GMM estimation. The risk aversion turns out negative for Indonesia, Malaysia, the Philippines and Thailand, and below ten for Singapore and Vietnam, indicating that investors in these countries are risk-lovers and relatively risk-averse, respectively. Being risk-lovers is inconsistent with conventional financial theories; this paper finds that the negative values stem from the negative covariance between stock returns and consumption growth. The relative risk aversion of the last two countries is broadly supported by financial theory but is not sufficient to prove the existence of the puzzle.

The main objective of the second part of this paper is to characterize the behavior, or trend, of the equity risk premium of the sample ASEAN countries. The sub-objectives are to establish whether sub-sample analyses are necessary and to investigate whether a financial crisis can explain the puzzle and how it affects the risk aversion and the subjective discount factor of investors.

Based on the graphical stochastic trend of the ERP, sub-sample analyses are indeed needed. After conducting the three sub-sample analyses, the results suggest that the 2008 financial crisis cannot significantly explain the puzzle, as it did not push the risk aversion above 10 in the GMM approach. It is worth noting, however, that crises have a potential impact on the trend of the equity risk premium, as the global financial crises of 1999 and 2008 produced sudden and dramatic falls. It can also be concluded that, after the crisis, ASEAN investors tend toward more decreasing relative risk aversion and prefer pleasure tomorrow over today.

Research Contribution

This paper makes relevant contributions mainly through its methodology and the uniqueness of its data, with further implications for policy making and investment decisions. These can be summarized in four contributions, listed below.
First, this study is the first to attempt testing the existence of the equity premium puzzle with three methods: the basic model (Mehra & Prescott, 1985), calibration (Campbell, 2003) and the Generalized Method of Moments, GMM (Hansen, 1982). Most other studies use only the basic model, and the recent study of Xiaojing Jin (2011) used two methods, the basic model and calibration, on Asian countries.

Second, few studies apply the GMM approach, mainly because of its complicated features. Ki Young Park & Kwang Hwan Kim (2009) used GMM to test the equity risk premium of South Korea, simply using stock returns, their lags and the risk-free rate as moment conditions. The present paper instead uses the top companies of each ASEAN country. To date, this appears to be the first paper to use the last traded prices of top companies as moment conditions to test the occurrence of this puzzle.

Third, and related to the second contribution, this appears to be the first paper to focus mainly on data for the ASEAN countries. Most of the previous related literature focuses on developed countries or on Asian countries in general. As the economic growth of the ASEAN organization strengthens, it is important to examine how the equity risk premiums of the member countries can contribute to the financial literature.

Lastly, financial crises do have a potential impact on the behavior of equity risk premiums. This paper differs from other studies, and is arguably unique, in conducting a sub-sample analysis that excludes the 2008 financial crisis year. The other two sub-sample analyses, before and after the crisis, determine the potential impact of the crisis on the risk aversion, the subjective discount factor and, ultimately, on a possible explanation of the equity premium puzzle.

Limitations and Suggestions for Future Research

Every research paper has limitations arising from its data and methodologies. The following are the limitations of this paper and the further studies it suggests.

The nature of the data itself imposes limitations. Owing to the difficulty of collecting data, some series are missing or do not cover the same periods. Since the gathered data focus on ASEAN countries, the results cannot, other things being equal, be generalized to other countries or groupings such as European or Middle Eastern countries. Even among ASEAN countries there are limitations, as each country has its own individual, heterogeneous characteristics.

The methodologies can also be counted among the limitations of this paper. As discussed in chapter 3 on methodology, different methods produced different results. For the first part of this paper, the basic model and GMM results have the same implications but different values. The calibration shows results favorable to this paper's objective of determining the existence of the equity premium puzzle; however, this favorable method does not hold for all ASEAN countries, as the calibration proved useful only for Indonesia. For the remaining countries, the occurrence of the puzzle remains an open question. This suggests a possible bias in the applied methods, which is one of the limitations to consider.
Furthermore, the second part of this paper, the sub-sample analyses through GMM, also has limitations. Only the year 2008 is excluded, yet the graph shows that 1999 and 2011 also exhibit dramatic falls in the equity risk premium. This limits the generality of the conclusions drawn from the sub-sample analyses, which were executed in only three ways in this paper; executing more sub-sample analyses could yield different results and implications.

As for further research, since this study examines the equity risk premiums of ASEAN countries, expanding the analysis to the member countries of the AIIB (Asian Infrastructure Investment Bank) might yield more broadly applicable results, as this organization is widely accepted in financial markets. Updating the data is another needed extension.

Appendix A summarizes the codes and names of the top companies of each country. Singapore is eliminated from the list owing to the difficulty of collecting data; only the first and second methods are applied to that country, using its stock index. Moreover, among the listed top firms in Vietnam, only two companies, Petro Vietnam and Vietnam Electricity, are included in the GMM estimation, again because of limited data availability.

Notation:
c_t = per-capita consumption in the sample at time t
\beta (0 < \beta < 1) = subjective discount factor
I_t = information available at time t
E_0\{\cdot \mid I_t\} = expectation operator over the flow of consumption given I_t
U(\cdot) = concave utility function

Figure 1. Real equity risk premium of ASEAN.
Table 1. Summary of ASEAN stock index data collection.
Table 2. Basic model's empirical results summary.
Table 3. Summary of risk aversion and subjective discount factor.
Table 4. Calibration's empirical results summary.
Table 5. Summary of risk aversion values.
Table 6. 2SLS coefficient estimates with GMM standard errors.
Table 7. GMM estimation before the 2008 financial crisis.
Table 8. GMM estimation after the 2008 financial crisis.
Table 9. GMM estimation excluding the year 2008.

In the third way of executing the GMM estimation, the year 2008 is excluded from all data. Only Malaysia then yields the square root of a negative value, as presented in Table 9, which makes its alpha and beta impossible to calculate. This likely suggests that, among the ASEAN countries, Malaysia was the most affected by the 2008 financial crisis.
8,521.8
2017-03-14T00:00:00.000
[ "Economics" ]
Vol. 48 No. 4/2001, 1185–1189. Communication.

Temporin A (TA) and a cecropin A-temporin A hybrid peptide (CATA) were synthesized and assayed for their hemolytic, anticoagulant, and antifungal properties. CATA retains significant antifungal activity, is less hemolytic than TA, and inhibits blood coagulation. These results recommend further studies of the biological activities of CATA.

Temporin A (TA; Table 1) is an antibiotic peptide originally isolated from the skin of the European red frog, Rana temporaria [1]. It is among the shortest naturally occurring, gene-encoded antibiotic peptides to be isolated and exhibits antibiotic activity primarily against Gram-positive organisms, but also against some Gram-negatives, the fungus Candida albicans, and human erythrocytes, with the latter result depending upon the assay system used [1,2]. Cecropin A (CA; Table 1) is a gene-encoded antibiotic peptide originally isolated from the hemolymph of larvae of the silkmoth Hyalophora cecropia [3]. A synthetic hybrid peptide, CA(1-7)Mel(2-9)NH2 (CAMel; Table 1), containing portions of the amino-acid sequences of CA and melittin (Mel; Table 1), the latter an antimicrobial and hemolytic peptide that is the major toxic component of the venom of the honeybee (Apis mellifera), has been found to exhibit excellent antibiotic activities with minimal hemolytic activity [4,5]. The sequence of the amino-terminal portion of TA is somewhat similar to that of the Mel(2-9) portion of the CAMel hybrid, and it was hypothesized that a hybrid peptide CA(1-7)TA(2-9)NH2 (CATA; Table 1) might also have useful biological properties. The sequences of the CATA and CAMel hybrids are 69% identical, and there is an additional 23% sequence homology (I ~ L ~ V; K ~ R) between the two peptides. The CATA hybrid was synthesized, and preliminary biological assay data indicate that it has different hematological and antifungal properties than TA.

MATERIALS AND METHODS

Peptide synthesis, purification, and characterization. TA and the CATA hybrid were synthesized by solid-phase peptide synthesis techniques, using Fmoc chemistries as described [6]. The peptides were purified by reverse-phase (RP) HPLC and characterized by amino-acid analysis and electrospray ionization mass spectrometry as described [6,7].

Blood coagulation assay. Anticoagulant activity of the peptides dissolved in isotonic saline was tested by determination of the prothrombin time (PT) and activated thromboplastin time (APTT) of a Coagulation Reference Plasma (Baxter AG), with reagents from Instrumentation Laboratory.

Antifungal assay. To examine peptide inhibition of the growth of Batrachochytrium dendrobatidis [9], 5 × 10^4 mature cells or 5 × 10^5 zoospores in 50 µl H-broth were plated in replicates in a 96-well microtiter plate, with or without addition of 50 µl of serial dilutions of each peptide in broth. Positive control wells received 50 µl broth without peptide, and negative control wells (on a separate plate) received 50 µl broth containing 0.4% paraformaldehyde. Growth at 96 h (23°C) was measured as increased absorbance at 492 nm with an ELISA plate reader.

Synthetic peptide characterization

Synthetic TA and CATA preparations were pure as determined by analytical RP-HPLC and had the correct amino-acid compositions and masses.
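The growth readout in such a microplate assay reduces to simple arithmetic on the absorbance values. The Python sketch below, using made-up A492 readings rather than any data from this study, shows how percent growth inhibition is commonly computed against the positive (no peptide) and negative (killed) controls.

```python
import numpy as np

# Made-up A492 readings after 96 h; real values come from the plate reader.
a_growth  = np.array([0.82, 0.79, 0.85])   # positive control: broth, no peptide
a_killed  = np.array([0.11, 0.10, 0.12])   # negative control: 0.4% paraformaldehyde
a_peptide = np.array([0.30, 0.28, 0.33])   # wells with one peptide dilution

def percent_inhibition(sample: np.ndarray, pos: np.ndarray, neg: np.ndarray) -> float:
    """Inhibition relative to the dynamic range between the two controls."""
    window = pos.mean() - neg.mean()        # full growth signal above background
    return 100.0 * (pos.mean() - sample.mean()) / window

print(f"growth inhibition: {percent_inhibition(a_peptide, a_growth, a_killed):.1f}%")
```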
Hemolysis assay

CATA was relatively nonhemolytic compared with TA, but the hemolytic activity of both increased with increasing peptide concentration (Table 2). For comparison, the hemolytic activities are expressed as the concentration of peptide yielding 50% hemolysis, and this value was > 130 µM for both peptides. Previous studies found that the 50% hemolysis value for TA was > 120 µM in agarose [1] and 30 µM in isotonic saline [2].

Coagulation assay

The results of the coagulation assay are shown in Table 3. Normal values for PT and APTT times are 11–14.5 s and 25–37 s, respectively [10]. In the PT assay, these values were not exceeded at any concentration of TA, but were exceeded at a concentration of 40 µM CATA. In the APTT assay, the normal values were exceeded by 52 µM TA and 40 µM CATA. In comparison, the range of concentrations at which TA is antibacterial for Gram-positive and some Gram-negative organisms and the fungus Candida albicans is generally less than 12 µM [1,6].

Antifungal assay

The results of assays with the fungus Batrachochytrium dendrobatidis are shown in Table 4. TA was more active than CATA against both the zoospore form of the fungus (lacking cell walls) and the mature cell form, but CATA retained significant inhibitory activity. In comparison, the lethal concentration of TA ...

Note to Table 3: Control is saline added to the Reference Plasma. Values out of range are marked with an asterisk. The control for APTT is out of the normal range because the substrate normal plasma was diluted with saline. This effect was not seen in the control for the PT test.

DISCUSSION

During the past two decades, a new class of antibiotic peptides has been discovered: the gene-encoded antibiotic peptides obtained from animal, plant and bacterial sources [11]. A great deal of attention has been focused on these peptides because of the public health crisis created by the appearance of drug-resistant microorganisms, and the hope is that these new antibiotic peptides will provide a partial solution to this crisis. Their small sizes and relatively simple structures make them ideal candidates for modification by solid-phase peptide synthesis technologies. Many synthetic analogs of the naturally occurring structures have been developed, including hybrids containing portions of the sequences of two or more antibiotic peptides [12]. Several have been found to have improved antibiotic properties with respect to the parent peptides, and some of the new synthetic antibiotic peptides are either in, or have successfully completed, clinical trials. Previous success in developing hybrid peptides from cecropin A and melittin with improved antimicrobial activities relative to the parent peptides but with no hemolytic activity suggested that the same might be possible starting with cecropin A and TA. The preliminary data reported here indicate that: 1) both TA and CATA are not hemolytic at concentrations at which TA yields antimicrobial effects (i.e. generally less than 12 µM [1,6]); 2) TA does not affect coagulation times at these concentrations; 3) CATA inhibits coagulation at all concentrations tested; and 4) CATA has somewhat weaker but still significant activity against the chytrid fungus Batrachochytrium dendrobatidis (which has been linked to recent declines in amphibian populations [13]) and Candida albicans [1] in comparison with TA. These results support the concept that TA may be a potentially useful antibiotic, whereas CATA requires further study. Additional antifungal and antibacterial studies are under way.
antibacterial studies are under way. Table 2 . Percent hemolysis of human erythrocytes by TA and CATA a100% hemolysis occurred at 225 mM TA. CATA was not tested above 130 mM.
1,486.8
2001-01-01T00:00:00.000
[ "Biology", "Medicine" ]