Magnetic metasurfaces properties in the near field regions
In this paper, we present a general equivalent-circuit interpretation of finite magnetic metasurfaces interacting with an arbitrary arrangement of RF coils operating in the near-field regime. The developed model allows us to derive a physical interpretation of the interactions between the metasurface and the surrounding RF coils, both transmitting and receiving. Indeed, especially for near-field applications, the metasurface presence modifies the behavior of each RF coil differently, due to the specific reciprocal interactions. Hence, the proposed approach introduces a source-related complex magnetic permeability matrix, overcoming the traditional bulk definition. To prove the model validity against full-wave simulations, we present two significant test cases, commonly used in practical applications. The former is the simple metasurface-coil arrangement, from which important and fundamental considerations can be drawn. The latter is composed of a transmitting and a receiving coil with a metasurface in between; detailed explanations of the metasurface interactions with both RF coils are developed. Finally, we also achieved an excellent agreement between the numerical results and the measurements obtained through fabricated prototypes. In summary, the circuit interpretation herein presented, alongside the rigorous electromagnetic theoretical approaches that have already appeared in the open literature, proves useful in providing quantitative, practical, and easy-to-handle guidelines for the design and physical understanding of finite magnetic metasurfaces interacting with arbitrary RF coil arrangements in the near-field regime.
of resonant unit-cells, like spiral or split-ring resonators (Fig. 1a). The whole array reacts to an impinging magnetic field with a resonant behavior (usually described through a Lorentzian model), making it possible to exploit enhanced and μ-negative permeability in specific bandwidths. It has been demonstrated in 37 that the entire metasurface can be represented by an equivalent RLC model; this step is fundamental to describe and quantify its interactions with other RF coils, as will be shown in the following sections. A brief recall of the metasurface RLC reduction is reported here for the reader's clarity; more details can be found in 37 . In a generic arrangement, we can have M fed RF coils interacting with a passive metasurface. The metasurface can be assumed to be formed by N × N = P resonant unit-cells. If we refer to the RF coils with the first M indices and to the elements of the array with the following P indices, the overall system impedance matrix can be written as below, where $c_i$ is the generic i-th complex current coefficient and $I_x$ is the equivalent current flowing in the RLC model of the array. By summing up the equations from row M + 1 to row M + P and re-arranging terms, it is possible to write the following system, where the P elements of the metasurface have been substituted by their equivalent resonator (marked with index x).
In particular, the $Z_{xx}$ term can be interpreted as the self-impedance of the metasurface equivalent resonator (series RLC), whereas $I_x$ is its equivalent flowing current and the various $Z_{xi}$ terms correspond to the mutual coupling coefficients between the metasurface and each of the M RF coils 37 .
At this point, we can express the current $I_x$ that flows in the equivalent metasurface RLC circuit as a function of the other M RF coil currents, exploiting the equation system (3). Thus, we can substitute expression (4) into the first M equations of (3); in this way, the effect of the metasurface presence on the other M RF coils can be more easily highlighted. Further, we can write the generic element of the impedance matrix in (5) by introducing the source-related complex (relative) magnetic permeability value $\mu_r^{ij}$, as described below.
As will be better clarified for the adopted test-cases, each RF coil undergoes a unique impedance modification due to the presence of the magnetic metasurface, depending on its relative position and interactions with the other elements. Thus, each RF coil, and each corresponding mutual coupling term of the impedance matrix (5), experiences a different equivalent complex permeability value (what we call the source-related permeability, Fig. 1b). Finally, the overall M RF coils can be represented by the following complete equation system, where the metasurface presence has been translated into the complex (relative) magnetic permeability coefficients $\mu_r^{ij}$: Therefore, practical guidelines and physical interpretations useful to accomplish the desired design can be derived from the retrieved lumped elements of the entire system, by using the complex relative permeability matrix $\mu_r$. Indeed, additional degrees of freedom are available to the designer to optimize the M RF coil system, thus exploiting its potential more effectively through the introduced source-related magnetic permeability values.
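The reduction outlined above can be illustrated numerically. The sketch below, in Python, eliminates the P passive unit-cell currents from the full (M + P)-mesh impedance matrix through an exact Schur complement, which is the matrix form of substituting the equivalent-resonator current back into the first M equations; all coupling and lumped values here are illustrative placeholders, not the paper's extracted parameters.

```python
import numpy as np

M, P = 2, 9                      # 2 fed RF coils + a 3x3 metasurface
w = 2 * np.pi * 6.25e6           # angular frequency (rad/s)
n = M + P

rng = np.random.default_rng(1)
A = 0.05e-6 * rng.random((n, n))        # placeholder couplings (~tens of nH)
Mmat = (A + A.T) / 2                    # symmetric mutual-inductance matrix
Z = 1j * w * Mmat
for k in range(n):
    if k < M:
        Z[k, k] = 0.5 + 1j * w * 6e-6                               # plain R-L coil
    else:
        Z[k, k] = 0.46 + 1j * w * 1.66e-6 + 1 / (1j * w * 390e-12)  # series RLC cell

Z_AA, Z_AB = Z[:M, :M], Z[:M, M:]
Z_BA, Z_BB = Z[M:, :M], Z[M:, M:]
# Exact elimination of the passive cell currents (Schur complement); the
# paper's reduction performs the same step with one equivalent resonator
# standing in for all P cells.
Z_eff = Z_AA - Z_AB @ np.linalg.solve(Z_BB, Z_BA)
print(np.round(Z_eff, 3))
```

The deviation of the resulting effective entries from the bare self- and mutual impedances of the coils is precisely what the source-related coefficients $\mu_r^{ij}$ encode.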
Selected experimental set-ups.
It is worth remarking that the aim of this paper is to develop a circuit-based model able to provide useful and practical design guidelines for the realization of a finite magnetic metasurface interacting with a generic RF coil arrangement, also giving a physical interpretation of the entire system through the retrieved complex magnetic permeability matrix, as previously explained. Therefore, we report here two meaningful test-cases adopted to validate the proposed approach. Firstly, a single RF coil-metasurface system is considered; this simple configuration can be seen as the building block of several applications, for instance in Magnetic Resonance Imaging RF coil design 38 . Secondly, the system formed by a transmitting coil, a metasurface, and a receiving coil is analyzed with our circuit model (as schematically depicted in Fig. 1a): for this case, some important and effective design considerations can be drawn, especially suitable for resonant inductive Wireless Power Transfer applications 19 . Nonetheless, the provided analysis is completely general and can also be applied to more complex coil arrangements. Specifically, we exploited a Method of Moments electromagnetic solver (Feko suite, Altair, Troy, MI, USA) for the entire design process, while the measurements were performed using a Keysight (Santa Rosa, CA, USA) N9918B FieldFox Handheld Vector Network Analyzer.
Single coil-metasurface system description. The first proposed test-case is depicted in Fig. 2a. It comprises an active planar RF spiral with a 10 cm external diameter. The coil has 5 turns of 28 AWG lossy copper wire, with a pitch between adjacent branches of 0.68 mm. No additional reactive loads are added, and the spiral is non-resonant.
We also consider a metasurface made of a planar 3 × 3 structure; each unit-cell is an 8-turn passive resonant spiral with a 2.4 cm external diameter, wound from 28 AWG lossy copper wire with a pitch of 0.18 mm. The overall metasurface is positioned 5 mm away from the active RF coil, in a coaxial fashion. In order to operate at the desired working frequency (around 6 MHz), a 390 pF capacitor is added in series to each unit-cell. The choice of the working frequency is arbitrary and other values could have been chosen as well. Following the methodology reported in 37 , we extracted the equivalent RLC model of the metasurface together with the mutual coupling coefficient with the active RF coil. The obtained values are: $R_{\rm meta}$ = 4.13 Ω, $L_{\rm meta}$ = 14.97 μH, $C_{\rm meta}$ = 43.35 pF, $M_{\rm meta-coil}$ = 2.09 μH.
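As a quick sanity check, the quoted equivalent inductance and capacitance indeed resonate at the intended working point; a minimal Python verification follows (note that the 43.35 pF equivalent capacitance differs from the 390 pF loaded on each cell, since the reduction lumps all nine cells into one resonator).

```python
import math

# Resonance of the extracted metasurface equivalent RLC resonator;
# the series resistance R_meta does not affect f0.
L_meta = 14.97e-6   # H
C_meta = 43.35e-12  # F
f0 = 1 / (2 * math.pi * math.sqrt(L_meta * C_meta))
print(f"f0 = {f0 / 1e6:.2f} MHz")   # -> 6.25 MHz
```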
Besides the numerical simulations, we also fabricated prototypes to perform experimental measurements (Fig. 3a, b). The prototypes are built with 28 AWG copper wire glued onto a 0.8 mm thick FR4 slab (ε_r = 4.3, tanδ = 0.02). The capacitors are soldered on the other side, following the design specifications. In addition, Fig. 3c shows the final experimental arrangement, where a plastic framework is employed to precisely position the radiating elements in terms of distances, exploiting the 4 external holes drilled in the FR4 substrate.
Transmitter-metasurface-receiver system description. The second CAD configuration is shown in Fig. 2b. Essentially, it consists of the same configuration as the previous test-case, with an additional passive RF coil added. This coil is geometrically identical to the fed RF spiral and is non-resonant (thus, it is not loaded with capacitors). It has been placed 10 cm away from the fed one, again in a coaxial fashion. This arrangement is typically used in inductive WPT, where a transmitting coil, a metasurface, and a receiving coil are positioned as in this example.
As in the previous case, we also extracted the mutual coupling coefficient between the metasurface and the added receiving coil. The other lumped values, i.e. the metasurface equivalent RLC and its mutual coupling with the fed coil, have already been calculated for the first configuration. The coefficient $M_{\rm meta-receiver}$ was estimated to be 0.12 μH.
Finally, also for this test-case, the experimental set-up was arranged (Fig. 3d).

In the corresponding equation system, the RF coil is indicated with index 1, whereas the metasurface is globally reduced to its single equivalent resonator and denoted with index 2. By expressing the current $I_2$ as a function of $I_1$, it is straightforward to write down the impedance seen at port 1. We can now exploit the developed analytical model to elaborate equation (9); in particular, we assume that the RF coil (element 1) is not loaded with any capacitor, so it is represented by its self-resistance and inductance. Through some algebraic manipulations, we can express the port impedance in the following form: At this point, we can introduce the source-related complex (relative) magnetic permeability $\mu_r$; this permeability is associated with the equivalent medium in which RF coil 1 is immersed (Fig. 1b): and, thus, we can express this equivalent complex relative permeability as a function of the lumped elements of our circuit equivalent model: In order to report the complex magnetic permeability behavior versus frequency expressed by Eq. (13), we used the lumped element values retrieved from the CAD model described in Fig. 2a; the results are shown in Fig. 4a ($\mu'_r$) and Fig. 4b ($\mu''_r$). In these graphs, we also compared the purely analytically retrieved permeability against full-wave simulations and experimental measurements. As evident from Fig. 4, we observe an excellent agreement, thus demonstrating the reliability of the circuit model. It may be worth highlighting that this is the equivalent magnetic permeability as seen by RF coil 1 itself; thus, it does not represent the actual bulk permeability of the metasurface alone. Hence, differently from the canonical approach, we avoid describing the bulk permeability of the proposed metasurface; instead, the metasurface equivalent effect on the medium surrounding the RF coil arrangement is pointed out (hence the term source-related permeability).
In this sense, a noticeable result that has been proved in the literature is that a $\mu_r = -1$ metamaterial can enhance the evanescent magnetic field produced by an RF coil 15,30,38 . Hence, the question is how a metamaterial with $\mu_r = -1$ as its own bulk permeability interacts with the RF coil from a circuit point of view. As typically presented in the literature, such a metamaterial can be simulated by a numerical solver as a thick slab of homogeneous material showing the desired permeability. As a matter of fact, the slab thickness is often larger than the diameter of the RF coil placed in its proximity 30,38 (see Fig. 5). In addition, it is also positioned very close to the coil. Since the electromagnetic field produced by a resonator drops significantly at distances larger than its diameter 7 , this configuration corresponds to dividing the space where the RF coil is placed into two subdomains: a homogeneous material with $\mu_r = -1$ on one side and free space on the other ($\mu_r = +1$) (Fig. 5). Thus, it is reasonable to expect that the effective magnetic permeability seen by the RF coil will be the average of the permeabilities of the two subdomains; this implies that the equivalent medium permeability is zero in its real component (see (13)). Therefore, according to (12), this condition has the effect of cancelling the reactive component of the RF coil impedance, thus bringing the coil to resonance. By referring to the results of Fig. 4, the zero crossing of the real permeability happens at f = 6.4 MHz. Hence, the current flowing in the RF coil dramatically increases for a given voltage excitation; this is consistent with what is observed in the literature and predicted by the theoretical derivations based on Maxwell equations 30 . In particular, Fig. 6 reports the H-field maps obtained for the CAD model of Fig. 2a through full-wave simulations, without and with the metasurface at the $\mu_r = -1$ point. By forcing the same circulating current in the RF coil for both configurations, it is evident how the metasurface is able to enhance the H-field produced by the driving coil. Therefore, the circuit model provided here is able to describe the $\mu_r = -1$ condition only through the retrieved lumped parameters (13); thus, the synthesis of artificial materials can be greatly simplified.
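The behavior just described can be reproduced from the lumped values alone. In the sketch below, the port impedance $Z_{\rm port} = R_1 + j\omega L_1 + (\omega M_{12})^2/Z_2$ is recast as $R_1 + j\omega\mu_r L_1$, giving $\mu_r = 1 - j\omega M_{12}^2/(L_1 Z_2)$; this is our reading of Eq. (13), not a verbatim reproduction, and the coil self-inductance $L_1$ is not quoted in the text, so a plausible 6 μH (typical for a 5-turn, 10 cm spiral) is assumed.

```python
import numpy as np

# Source-related permeability seen by RF coil 1, from the lumped model.
R2, L2, C2 = 4.13, 14.97e-6, 43.35e-12     # metasurface equivalent RLC (quoted)
M12, L1 = 2.09e-6, 6e-6                    # coupling (quoted) / coil L (ASSUMED)

f = np.linspace(5e6, 8e6, 30001)
w = 2 * np.pi * f
Z2 = R2 + 1j * (w * L2 - 1 / (w * C2))     # equivalent resonator impedance
mu_r = 1 - 1j * w * M12**2 / (L1 * Z2)     # our reading of Eq. (13)

s = np.sign(mu_r.real)
zeros = f[1:][s[1:] != s[:-1]] / 1e6
print("Re(mu_r) zero crossings (MHz):", np.round(zeros, 2))
# -> a narrow crossing just above the 6.25 MHz resonance and a broad one
#    near 6.4 MHz, the latter matching the value reported above.
```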
In a practical set-up, a metasurface is fabricated starting from a 2D array of resonant magnetic inclusions, like spiral or split-ring resonators. As a matter of fact, the actual thickness of the realized metasurface is not directly correlated to the equivalent thickness of the homogeneous material adopted in full-wave simulations (Fig. 5), being much thinner (usually a few millimeters). As evident in (13), the retrieved permeability can be finely tailored on the basis of the lumped element values described in our model. Hence, the metasurface has an effective thickness that can be modulated through the lumped model. In fact, the availability of a simple and straightforward circuit model, in which the lumped elements can be modified to shape the metasurface magnetic response according to the design requirements, is one of the major advantages of the proposed approach. In particular, we can easily adjust such parameters by noticing that $M_{12}$, $L_2$, $C_2$ and $R_2$ are quantities ruled by the proposed model. Therefore, by modifying the distance between metasurface and RF coil ($M_{12}$), the unit-cell design (to control $R_2$, $L_2$ and $C_2$) and their relative position (i.e., the array periodicity), we can obtain the desired curve for the equivalent permeability experienced by the RF coil. This implies that the proposed circuit model can also be used to characterize intermediate situations, in which, for instance, the metamaterial cannot be approximated by the semi-infinite hypothesis represented in Fig. 5. In that case, the equivalent permeability seen by the RF coil will be the average between air (present on one side) and an equivalent material with a diluted permeability. Several models in the literature have been developed to describe similar situations, but typically considering only the dielectric counterpart 7,39 . In this regard, Fig. 7 reports some meaningful examples of real and imaginary permeability values retrieved with the analytical model for the proposed radiating configuration. In particular, in Fig. 7a, b, the distance between the RF coil and the metasurface is varied from 5 mm to 11 mm; this implies that the mutual coupling $M_{12}$ between the RF coil and the metasurface becomes smaller with increasing distance and, as predicted, the complex permeability amplitude related to the RF coil accordingly decreases, simulating a progressively thinner metamaterial. Additionally, in Fig. 7c, d, the analytical model is employed to retrieve the source-related complex permeability when the metasurface unit-cell capacitive load is gradually changed from 351 pF to 429 pF; as evident, the complex permeability experienced by the RF coil can be modulated and controlled on the basis of the specific application requirements.
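Both design knobs can be explored with the same lumped expression. The sketch below sweeps a shrinking $M_{12}$ (standing in for an increasing coil-metasurface distance; the decay values are hypothetical) and rescales the equivalent capacitance proportionally to the unit-cell load (a simplifying assumption), mirroring the trends of Fig. 7.

```python
import numpy as np

def mu_r(f, M12, C2, R2=4.13, L2=14.97e-6, L1=6e-6):
    """Source-related permeability; L1 = 6 uH is the same ASSUMED coil L."""
    w = 2 * np.pi * f
    Z2 = R2 + 1j * (w * L2 - 1 / (w * C2))
    return 1 - 1j * w * M12**2 / (L1 * Z2)

f = np.linspace(5e6, 8e6, 100001)
# Larger coil-metasurface distance -> smaller M12 -> weaker permeability swing
for M12 in (2.09e-6, 1.5e-6, 1.0e-6):          # hypothetical decay with distance
    swing = np.abs(mu_r(f, M12, 43.35e-12) - 1).max()
    print(f"M12 = {M12 * 1e6:.2f} uH -> max |mu_r - 1| = {swing:.2f}")
# Re-tuning the cell capacitor shifts the equivalent resonance (assumed
# proportional scaling of the equivalent C with the 390 pF cell load)
for C_cell in (351e-12, 390e-12, 429e-12):
    C2 = C_cell * 43.35e-12 / 390e-12
    f0 = 1 / (2 * np.pi * np.sqrt(14.97e-6 * C2))
    print(f"C_cell = {C_cell * 1e12:.0f} pF -> f0 = {f0 / 1e6:.2f} MHz")
```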
For instance, in 40 it is reported that a metasurface with a purely imaginary permeability can perform as an ideal microwave absorber; whereas all the mathematical analysis therein is performed under the plane-wave hypothesis, we can completely overcome this limit and design arbitrary metasurface complex permeabilities also considering near-field sources. This latter condition is generally closer to practical applications, especially those at relatively low operating frequency.

Figure 6. Numerical magnetic field maps evaluated for the configuration shown in the inset on a plane perpendicular to the metasurface (xz plane in Fig. 2a): actively fed RF coil without (a) and with (b) the metasurface, in the condition of the same circulating current. As evident, the metasurface presence with a $\mu_r = -1$ behavior is able to significantly enhance the magnetic field amplitude, in accordance with the theoretical model.
As an added value, the same metasurface can even be used to compensate any desired reactance of the coil. Before its resonant frequency, when the real permeability is positive, a capacitive reactance can be compensated; conversely, after the resonant point, a negative value of the real permeability can be used to null an inductive impedance (Fig. 4a, b). Moreover, provided that the permeability imaginary component, introduced by the metasurface ohmic losses, retains the proper value to guarantee a good matching to the port impedance (12), not only the tuning of the RF coil (i.e., the cancellation of its reactive impedance component), but also the matching to the output impedance of a generator (for instance, 50 Ω) can be achieved. To this aim, Fig. 8 reports both the numerical and the experimental $S_{11}$ parameter of the model described in Fig. 2a. Indeed, by a proper metasurface design, we can achieve tuning and matching of an RF coil without using any capacitive load and/or matching network. This implies a more efficient design of the RF coil, avoiding the use of lumped capacitors that are often the cause of undesirable electric field hot spots 20 .

Transmitter-metasurface-receiver system. By resorting to the same circuit model previously described, it is possible to express the equation system for the CAD in Fig. 2b in the following way: in which we denote with the indices 1 and 2 the fed transmitter and the passive receiver coil, respectively; in this case, the magnetic metasurface has been replaced by its equivalent resonator and addressed with index 3. We can now proceed in the same fashion as in the previous case, i.e. we express the metasurface equivalent current $I_3$ as a function of $I_1$ and $I_2$ and substitute it into the first two equations of (14). The result is a 2-port system whose impedance matrix has the following form: From (15), it is evident that both the transmitter and receiver self-impedances are influenced by the metasurface presence. Indeed, all 4 terms of (15) contain a dependence on the metasurface self-impedance $Z_{33}$ in the denominator, thus presenting a peak at its resonance. It is easy to verify that an expression formally equivalent to Eq. (13) can be derived for both the transmitter and the receiver. Thus, by exploiting the single unit-cell design ($R_3$, $L_3$, $C_3$), the cell periodicity within the array and the metasurface distance from the RF coils (the $M_{13}$/$M_{23}$ terms), it is possible to manipulate both the reactive and the real components of the RF coil self-impedances. Following the model developed for the single coil-metasurface case, it is worth pointing out that transmitter and receiver experience different magnetic permeabilities; hence, it immediately emerges that a magnetic metasurface acts differently on the RF coils constituting the system, depending on its relative position and on the coils' geometrical constraints, as theoretically predicted. In particular, the transmitter-related complex permeability coincides with the behavior reported in Fig. 4a, b; conversely, the receiver permeability is shown in Fig. 9a, b. It is apparent from the permeability values that the receiver is minimally affected by the metasurface presence; this is coherent with the greater distance that separates the receiver from the metasurface with respect to the transmitter (i.e., 95 mm against 5 mm).
Considering a practical scenario, in resonant inductive Wireless Power Transfer (WPT) the inductive coupling is exploited to transfer energy from an active RF coil towards a passive receiving RF coil; consequently, the most important term to be studied is the off-diagonal one in (15). Indeed, the effective $Z_{11}^{\rm eff}$ and $Z_{22}^{\rm eff}$ (the global self-impedances of transmitter and receiver) can always be compensated by resorting to a matching network or by exploiting the transmitter and receiver distances from the metasurface as an additional design parameter 15,41 . Therefore, it is worth expressing the mutual coupling term $Z_{12}^{\rm eff}$ in its complete form to understand some interesting features of how a magnetic metasurface interacts with and modifies an inductive link. Hence, we can write: where $j\omega M_{12}$ is the classical inductive mutual coupling term between the two RF coils ($Z_{12}$), in this case the transmitter and the receiver. Instead, the other additional term arises because of the metasurface presence, which is described through its equivalent resonator. By manipulating the above expression, we can directly express the source-related magnetic permeability of the inductive link as: where we have assumed that the total mutual coupling between transmitter and receiver can be expressed as: In Fig. 9c, d we report the real and imaginary components of this permeability, comparing the pure analytical solution against full-wave simulations and experimental measurements, obtained from the set-up depicted in Fig. 2b. Again, we observe an excellent agreement among the analytical model, full-wave simulations and measurements, thus demonstrating the accuracy of the equivalent circuit in effectively representing the real scenario.
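The link permeability can likewise be rebuilt from the lumped values. Eliminating $I_3$ gives $Z_{12}^{\rm eff} = j\omega M_{12} + \omega^2 M_{13}M_{23}/Z_{33}$, which, recast as $j\omega\mu_r M_{12}$, yields $\mu_r = 1 - j\omega M_{13}M_{23}/(M_{12} Z_{33})$; again this is our reading of the elided equations. The bare transmitter-receiver coupling $M_{12}$ is not quoted in the text, so 0.16 μH is assumed (a plausible value for two 10 cm coils spaced 10 cm apart, consistent with the reported 6.6 MHz decoupling point).

```python
import numpy as np

# Effective Tx-Rx coupling with the metasurface in between:
# Z12_eff = jw*M12 + w^2*M13*M23/Z33 = jw*mu_link*M12.
R3, L3, C3 = 4.13, 14.97e-6, 43.35e-12    # metasurface equivalent RLC (quoted)
M13, M23 = 2.09e-6, 0.12e-6               # couplings to Tx and Rx (quoted)
M12 = 0.16e-6                             # bare Tx-Rx coupling (ASSUMED)

f = np.linspace(5.5e6, 7.5e6, 40001)
w = 2 * np.pi * f
Z33 = R3 + 1j * (w * L3 - 1 / (w * C3))
mu_link = 1 - 1j * w * M13 * M23 / (M12 * Z33)
Z12_eff = 1j * w * mu_link * M12

s = np.sign(mu_link.real)
print("decoupling points (MHz):", np.round(f[1:][s[1:] != s[:-1]] / 1e6, 2))
print(f"|Z12_eff| peaks near {f[np.argmax(np.abs(Z12_eff))] / 1e6:.2f} MHz")
# -> the broad zero crossing sits near 6.6 MHz (shielding condition), while
#    the coupling magnitude peaks at the 6.25 MHz self-resonance (WPT optimum)
```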
At this point, from the graphs of Fig. 9c, d, some important observations can be drawn. We immediately see that the metasurface is able to cancel, almost perfectly, the mutual coupling $j\omega M_{12}$ between transmitter and receiver. This happens slightly beyond the metasurface resonant point, at f = 6.6 MHz (the zero crossing of the real part of the retrieved permeability).
If the loss component of the retrieved permeability is low, the metasurface acts as a perfect magnetic shield between the RF coils; indeed, the off-diagonal terms in (15) are nulled and the transmitter and receiver are decoupled. This effect has been observed in the literature and already exploited in different technological areas, as an alternative to ferrite shields for low-frequency magnetic fields or for decoupling MRI array elements 20,23 . Obviously, this operating point must be avoided if the application under study is the wireless energy transfer between the two RF coils.
On the other hand, in WPT applications, the best working frequency turns out to be at the metasurface self-resonance (f = 6.25 MHz in Fig. 9c, d), when the reactive component of $Z_{33}$ is nulled 19 and the off-diagonal term $Z_{12}^{\rm eff}$ is maximized. Indeed, in this configuration, the magnetic metasurface acts as the intermediate coil of a classical 3-coil system 42 . Provided that the impedances at ports 1 and 2 (transmitter and receiver) can be appropriately compensated and matched, this operating point can lead to the maximum coupling between the two RF coils. Since the efficiency depends directly on the square of the absolute $Z_{12}^{\rm eff}$ value 19 , this means reaching the maximum energy delivery.
These two practically interesting working conditions, i.e. the shielding and power transfer configurations, have also been evaluated through full-wave simulations. In particular, Fig. 10a reports the magnetic field distribution (see Fig. 2b for the geometrical reference system) between the transmitting and receiving coils when the metasurface is employed as a magnetic field shield (at 6.6 MHz). Conversely, Fig. 10b describes the same geometrical configuration but with the metasurface tuned to enhance the mutual coupling between transmitter and receiver (at 6.25 MHz). It must be noticed that both these numerical experiments have been carried out with the same circulating current in the transmitting coil, to obtain a fair comparison. The obtained numerical results confirm what was theoretically expected in terms of field distribution. Certainly, the objection can be raised that fabricating a magnetic metasurface is more demanding than adding a simple repeater coil. However, some peculiar characteristics of magnetic metasurfaces cannot be achieved by a single additional coil, like enhanced misalignment robustness 37 and electric field shielding 43 .

Figure 10. Numerical magnetic field maps evaluated for the configuration shown in the inset on a plane perpendicular to the metasurface (xz plane in Fig. 2b), in the space between transmitting and receiving coils. (a) Magnetic field distribution evaluated with the metasurface used as a magnetic field shield, at 6.6 MHz. (b) Same field distribution with the metasurface tuned to enhance the mutual coupling between transmitter and receiver, at 6.25 MHz. It must be noted that the comparison is performed with the same circulating current in the transmitter.
In conclusion, when a magnetic metasurface interacts with RF coils, it is crucial to understand that each coil experiences a peculiar equivalent permeability, depending on its position and design geometry. In this way, the behavior of the various RF coils can be manipulated more easily than by retrieving the bulk magnetic properties of the metasurface itself, which is not convenient for describing near-field interactions. By expressing such interactions with an equivalent circuit, a straightforward and more effective design process can be accomplished, significantly aiding the engineering step, as summarized in the flow-chart scheme reported in Fig. 11.
Discussion
In this paper, we presented a general equivalent-circuit interpretation of finite magnetic metasurfaces interacting with an arbitrary arrangement of RF coils operating in the near-field regime. In particular, the developed model is able to provide a useful physical understanding, by which the metasurface complex magnetic permeability can be appropriately engineered depending on the various RF coils constituting the overall system. It is worth mentioning that arbitrary RF coil arrangements interacting with the metasurface can be described and analyzed, hence making the model general and easily extendible to several different applications.
We first recalled how to reduce such structures interacting with RF coils to their equivalent resonator model, further analyzing how a magnetic metasurface affects the surrounding RF coils differently, defining a proper source-related complex relative magnetic permeability matrix. Afterwards, we studied in depth two meaningful test-cases to validate the proposed circuit model. Firstly, we addressed the single coil-metasurface system, which is the simplest possible configuration but extremely interesting for its practical implications; secondly, we studied the classical transmitter-metasurface-receiver set-up, typical of Wireless Power Transfer applications. We compared the analytical predictions with full-wave simulations, obtaining excellent results and, thus, demonstrating the reliability and accuracy of the circuit interpretation. Moreover, measurements performed on the fabricated prototypes reinforced the numerical conclusions.
Although very detailed theoretical works describing such structures through the full Maxwell equations are already available in the literature, a lumped-element model can be extremely useful in the practical design and engineering process. Indeed, the possibility to quantify and manipulate the key parameters of a system is a major advantage from a design point of view in a large number of applications, like Wireless Power Transfer and Magnetic Resonance Imaging.
The circuit model herein presented is general, and we foresee an extension to electric near-field interactions between generic antennas and metasurface configurations.

Figure 11. Design flowchart using the proposed equivalent circuit to facilitate the metasurface engineering step.

Spatially resolved X-ray study of supernova remnants that host magnetars: Implication of their fossil field origin
Magnetars are regarded as the most magnetized neutron stars in the Universe. Aiming to unveil what kinds of stars and supernovae can create magnetars, we have performed a state-of-the-art spatially resolved spectroscopic X-ray study of the supernova remnants (SNRs) Kes 73, RCW 103, and N49, which host magnetars 1E 1841-045, 1E 161348-5055, and SGR 0526-66, respectively. The three SNRs are O- and Ne-enhanced and are evolving in the interstellar medium with densities of >1--2 cm$^{-3}$. The metal composition and dense environment indicate that the progenitor stars are not very massive. The progenitor masses of the three magnetars are constrained to be <20 Msun (11--15 Msun for Kes 73, <13 Msun for RCW 103, and ~13--17 Msun for N49). Our study suggests that magnetars are not necessarily made from very massive stars, but originate from stars that span a large mass range. The explosion energies of the three SNRs range from $10^{50}$ erg to ~2$\times 10^{51}$ erg, further refuting the idea that the SNRs are energized by rapidly rotating (millisecond) pulsars. We report that RCW 103 is produced by a weak supernova explosion with significant fallback, as such an explosion explains the low explosion energy (~$10^{50}$ erg), the small observed metal masses ($M_{\rm O}\sim 4\times 10^{-2}$ Msun and $M_{\rm Ne}\sim 6\times 10^{-3}$ Msun), and the sub-solar abundances of heavier elements such as Si and S. Our study supports the fossil field origin as an important channel to produce magnetars, given the normal mass range ($M_{\rm ZAMS}<20$ Msun) of the progenitor stars, the low-to-normal explosion energy of the SNRs, and the fact that the fraction of SNRs hosting magnetars is consistent with that of magnetic OB stars with high fields.
Introduction
Stars with masses ≳ 8 M⊙ end their lives with core-collapse (CC) supernova (SN) explosions (see Smartt 2009, for a review). Two products are left after the explosion: a compact object (a neutron star, or a black hole for the very massive stars) and a supernova remnant (SNR). Both products are important sources relevant to numerous physical processes. Since the two objects share a common progenitor and are born in a single explosion, studying them together will result in a better mutual understanding of these objects and their origin.
Magnetars are regarded as a group of neutron stars with extremely high magnetic fields (typically $10^{14}$-$10^{15}$ G; see Kaspi & Beloborodov 2017, for a recent review and references therein). To date, around 30 magnetars and magnetar candidates have been found in the Milky Way, the Large Magellanic Cloud (LMC), and the Small Magellanic Cloud (Olausen & Kaspi 2014). For historical reasons, these magnetars are categorised as anomalous X-ray pulsars and soft gamma-ray repeaters, based on their observational properties. However, the distinction between the two categories has blurred over the last 10-20 years. Unlike the classical rotation-powered pulsars, this group of pulsars rotates slowly, with periods of P ∼ 2-12 s and large period derivatives $\dot{P} \sim 10^{-13}$-$10^{-10}$ s s$^{-1}$, and they are highly variable sources usually detected in the X-ray and soft γ-ray bands. In recent years, the extremely slowly rotating pulsar 1E 161348−5055 (P = 6.67 hr) in RCW 103 has also come to be considered a magnetar, because some of its X-ray characteristics (e.g., X-ray outbursts) are typical of magnetars (De Luca et al. 2006; Li 2007; D'Aì et al. 2016; Rea et al. 2016; Xu & Li 2019).
The origin of the high magnetic fields of magnetars is still an open question. There are two popular hypotheses: (1) a dynamo model involving rapid initial spinning of the neutron star (Thompson & Duncan 1993); (2) a fossil field model involving a progenitor star with strong magnetic fields (Ferrario & Wickramasinghe 2006; Vink & Kuiper 2006; Vink 2008; Hu & Lou 2009). The dynamo model predicts that magnetars are born with rapidly rotating proto-neutron stars (with periods on the order of milliseconds), which can power energetic SN explosions (or release most of the energy through gravitational waves, Dall'Osso et al. 2009). This group of neutron stars is expected to be made from very massive stars (Heger et al. 2005). The fossil field hypothesis predicts that magnetars inherit magnetic fields from stars with high magnetic fields. Nevertheless, for the fossil field model, there is still a dispute on whether magnetars originate preferentially from high-mass progenitors (> 20 M⊙, Ferrario & Wickramasinghe 2006, 2008) or less massive progenitors (Hu & Lou 2009).
Motivated by the questions about the origin of magnetars, we performed a study of a few SNRs that host magnetars. As the SNRs are born together with magnetars, studying them allows us to learn what progenitor stars and which kinds of explosion can create this group of pulsars. Therefore, we can use observations of SNRs to test the above two hypotheses.
In order to get the best constraints on the progenitor masses, explosion energies, and asymmetries of the SNRs, we selected those SNRs showing bright, extended X-ray emission. Among the ten SNRs hosting magnetars (nine in Olausen & Kaspi 2014, plus RCW 103), only four SNRs fall into this category: Kes 73, RCW 103, N49 (in the LMC), and CTB 109. CTB 37B is another SNR hosting a magnetar, but with an X-ray flux one order of magnitude fainter and with sub- or near-solar abundances (Yamauchi et al. 2008; Nakamura et al. 2009; Blumer et al. 2019). Here we do not consider HB9, as the association between HB9 and the magnetar SGR 0501+4516 remains uncertain. Vink & Kuiper (2006) and Martin et al. (2014) have studied the overall spectral properties of SNRs Kes 73, N49, and CTB 109 and found that their SN explosions were not energetic. In this study, with RCW 103 included and CTB 109 excluded, we constrain the progenitor masses of the magnetars, provide spatial information on various parameters (such as abundances, temperature, and density), and explore the asymmetries using a state-of-the-art binning method. We exclude the oldest member CTB 109 from our sample. 1 Therefore, our sample contains Kes 73, RCW 103, and N49, which host the magnetars 1E 1841−045, 1E 161348−5055, and SGR 0526−66, respectively. Their ages have been well constrained, and the spectra of most regions can be well explained with a single thermal plasma model (see Sect. 3). The distance of Kes 73 is suggested to be 7.5-9.8 kpc from HI observations (Tian & Leahy 2008) and 9 kpc from CO observations (Liu et al. 2017); here we adopt a distance of 8.5 kpc for Kes 73. The distance of RCW 103 is taken to be 3.1 kpc according to HI observations (Reynoso et al. 2004; the upper-limit distance is 4.6 kpc). N49 in the LMC is at a distance of 50 kpc.
Data
We retrieved Chandra data for the three SNRs: Kes 73, RCW 103, and N49. Only observations with exposures longer than 15 ks are used. The observational information is tabulated in Table 1. The total exposures for the three SNRs are 152 ks, 107 ks, and 114 ks, respectively.
We used the CIAO software (vers. 4.9, CALDB vers. 4.7.7) 2 to reduce the data and extract spectra. Xspec (vers. 12.9.0u) 3 was used for the spectral analysis. We also used DS9 4 and IDL (vers. 8.6) to visualize and analyze the data.
1 The X-ray emission in the western part of the SNR is almost totally absorbed, which means that only a fraction of the metals can be observed. For such an old SNR, the X-ray emission is highly influenced by the ISM. The spectra are dominated by two thermal components, and therefore the derived metal abundances and masses will be influenced by the assumed filling factors of the X-ray-emitting gas. Moreover, it might be difficult to constrain the age with good accuracy (e.g., 9-14 kyr, Vink & Kuiper 2006; Sasaki et al. 2013).
Adaptive binning method
In order to perform spatially resolved X-ray spectroscopy, we dissected the SNRs into many small regions and extracted the spectrum from each region in individual observations. We employed a state-of-the-art adaptive spatial binning method called the weighted Voronoi tessellations (WVT) binning algorithm (Diehl & Statler 2006), a generalization of the Cappellari & Copin (2003) Voronoi binning algorithm, to optimize the data usage and spatial resolution. The same method has been used to analyze the X-ray data of SNR W49B and study its progenitor star (Zhou & Vink 2018). The X-ray events taken from the event file are adaptively binned to ensure that each bin contains a similar number of X-ray photons. Therefore, the WVT algorithm allows us to obtain spectra across the SNRs with similar statistical qualities.
Firstly, for each SNR, we produce a merged 0.3-7.0 keV image from all observational epochs using the command merge_obs in CIAO. This merged image is subsequently used to generate spatial bins with the WVT algorithm. Since this study focuses on the plasmas of the SNRs, we exclude the magnetars' emission by removing circular regions with angular radii of 15″, 20″, and 5″ (the radius encircling over 95% of the photon energy below 3.5 keV) for Kes 73, RCW 103, and N49, respectively. We also exclude the pixels with an exposure shorter than 40% of the total exposure. For the three SNRs, the targeted counts in each bin are 6400, 10000, and 6400, corresponding to signal-to-noise (S/N) ratios of 80, 100, and 80, respectively. We obtain 83, 293, and 96 bins within Kes 73, RCW 103, and N49, respectively. Because RCW 103 is bright and is the most extended SNR of the three, we use a larger S/N to increase the statistics of each bin and do not define the SNR boundary. For the other two SNRs, we manually defined the boundaries in order to include all the X-ray photons located around the edges. The merged images and adaptively binned images are shown in Fig. 1.
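For illustration, a binning of this kind can be sketched with the publicly available vorbin package, which implements the Cappellari & Copin (2003) algorithm and exposes the Diehl & Statler (2006) WVT refinement through a keyword. The snippet below is a mock-up (random image, simplified mask), not the authors' actual pipeline, and assumes vorbin's voronoi_2d_binning interface.

```python
import numpy as np
# pip install vorbin  (Cappellari & Copin 2003; wvt=True enables the
# Diehl & Statler 2006 weighted-Voronoi-tessellation refinement)
from vorbin.voronoi_2d_binning import voronoi_2d_binning

counts = np.random.poisson(30.0, (200, 200)).astype(float)  # mock 0.3-7 keV image
yy, xx = np.mgrid[:200, :200]
mask = (xx - 100) ** 2 + (yy - 100) ** 2 > 8 ** 2   # excise the central magnetar
x, y = xx[mask], yy[mask]
signal = counts[mask]
noise = np.sqrt(np.maximum(signal, 1.0))            # Poisson noise estimate
bin_num, *_ = voronoi_2d_binning(x, y, signal, noise, 80.0,  # target S/N = 80
                                 wvt=True, plot=False, quiet=True)
print(f"{bin_num.max() + 1} adaptive bins of roughly equal S/N")
```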
Secondly, we extract spectra from each region (bin) in individual observations and jointly fit the spectra at each bin using a plasma model. From spectral fit, we obtain the best-fit parameters and their uncertainties at different bins. Finally, we study the distributions of the best-fit parameters across the SNRs and do further analysis (see Sect. 3).
Spectral fit and density calculation
The X-ray emission of the three SNRs can generally be well fitted with an absorbed non-equilibrium ionization (NEI) plasma model, although some regions might need two components to improve the fit (Miceli et al. in preparation, and Braun et al. 2019). The plasma model uses the atomic data in the ATOMDB code 5 version 3.0.9. In using the single-component model, we consider that the SN ejecta and the ambient media are mixed. An appropriate NEI model to describe the shocked plasma in young SNRs is the vpshock model, which describes an under-ionized plasma heated by a plane-parallel shock (Borkowski et al. 2001). This model allows us to fit the electron temperature kT, the metal abundances, and the ionization timescale $\tau = n_e t$, where $n_e$ is the electron density and t is the shock age (approximately the SNR age). The Tuebingen-Boulder interstellar medium (ISM) absorption model tbabs is used to calculate the X-ray absorption due to the gas-phase ISM, the grain-phase ISM, and the molecules in the ISM (Wilms et al. 2000). The solar abundances of Asplund et al. (2009) are adopted. We note that both single-temperature and multi-temperature component models are frequently used for SNRs. Two- or multi-temperature components are often needed for large extraction regions characterized by mixed ejecta and blast-wave components, which as a result show a spatial variation of their spectral properties (such as the column density, the plasma temperature, the ionization timescale, or the gas density). Here we performed a state-of-the-art spatially resolved spectral analysis to address this complication. If two-temperature components are indeed needed everywhere in the SNR, the final best-fit parameters might be affected. Fitting a multi-thermal plasma with a single temperature causes a systematic error in the derived abundances. For example, an element whose strong lines have emissivities that peak at the derived temperature may have its abundance underestimated, while an element whose lines peak away from the derived temperature will have an overestimated abundance. Although more complicated models are indeed needed in many SNRs, the spectral decomposition is generally non-unique (Borkowski & Reynolds 2017) for many X-ray data sets and the uncertainties are difficult to account for. The major reason for us to use the single thermal component is that it gives an adequately good fit to the spectra of most regions (in agreement with what was pointed out by Borkowski & Reynolds 2017, for Kes 73).
Given the different spectral properties and environments of the three SNRs, the constrained metals differ. When the abundance of an element cannot be constrained, we fix it to the value of its environment (e.g., the solar value for Kes 73 and RCW 103; the LMC value for N49). For Kes 73, we fit the abundances of O (Ne tied to O), Mg, Si, S, and Ar. The soft X-rays of RCW 103 and N49 suffer less absorption, allowing us to fit the abundances of O and Fe (Ni tied to Fe), in addition to Ne, Mg, Si, and S. N49 is located in the LMC, so we used two absorption models to account for the Galactic and LMC absorption: tbabs (Gal) × tbvarabs (LMC). The H column density of the Galaxy towards N49 is fixed to $6 \times 10^{20}$ cm$^{-2}$ (Park et al. 2012) and the absorption in the LMC is left free. The LMC abundances of C (0.45), N (0.13), O (0.49), Ne (0.46), Mg (0.53), Si (0.87), S (0.41), Ar (0.62), and Fe (0.59) are taken from Hanke et al. (2010, see references therein). For the other elements, an averaged value of 0.5 is assumed. The spectral fit results are summarized in the top part of Table 2.
The density is estimated based on an assumption about the volume or geometry of the X-ray-emitting plasma. For a uniform ambient density and a shock compression ratio of four, mass conservation suggests that a shell-type SNR with radius R has a shell of thickness $\Delta R = R/12$: $4\pi R^2 \Delta R\,(4\rho_0) = (4\pi/3) R^3 \rho_0$, where $\rho_0$ is the ambient density. This shell geometry is used to estimate the mean density $n_{\rm H}$ in a given bin, combined with the normalization parameter in Xspec, ${\rm norm} = 10^{-14}/(4\pi d^2) \int n_e n_{\rm H}\,dV$, where d is the distance, and $n_e$ and $n_{\rm H}$ are the electron and H densities in the volume V ($n_e = 1.2 n_{\rm H}$ for a fully ionized plasma). If the X-ray gas fills a larger fraction of the volume across the SNR ($1/12 < f < 1$), the derived $n_{\rm H} \propto f^{-1/2}$. So the assumed geometry affects $n_{\rm H}$ only by a factor of up to 3.5.
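The inversion from the fitted normalization to a mean density can be written compactly. The sketch below assumes the shell geometry described above (the shell volume is one quarter of the sphere volume, since $4\pi R^2 (R/12) = (1/4)(4\pi/3)R^3$); the norm, distance, and radius passed in are placeholder values, not fitted numbers.

```python
import numpy as np

def n_H(norm, d_kpc, R_pc, f_shell=0.25):
    """Mean post-shock H density from the Xspec norm.

    norm = 1e-14 / (4 pi d^2) * int(n_e n_H dV), with n_e = 1.2 n_H and an
    emitting volume V = f_shell * (4/3) pi R^3 (f_shell = 1/4 for the
    Delta R = R/12 shell)."""
    d = d_kpc * 3.086e21            # cm
    R = R_pc * 3.086e18             # cm
    V = f_shell * (4.0 / 3.0) * np.pi * R**3
    return np.sqrt(norm * 1e14 * 4.0 * np.pi * d**2 / (1.2 * V))

print(f"n_H ~ {n_H(norm=0.05, d_kpc=8.5, R_pc=4.5):.1f} cm^-3")  # placeholder inputs
```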
Distribution of parameters
Figures 2, 3, and 4 show the spatial distributions of the best-fit parameters across the SNRs Kes 73, RCW 103, and N49, respectively, except that the density panels are obtained from the best-fit norm and a geometry assumption. These three figures provide such ample information that it cannot be fully discussed in this work. In this paper, we will focus on the temperature, metal content, environment, and asymmetries.
We also plot the azimuthal and radial distributions of the best-fit parameters in Fig. 5. Here we briefly describe the distribution of some important parameters.
- Kes 73 (Fig. 2): There is a temperature variation across the SNR (kT = 0.7-1.4 keV). The hottest plasma is located near the center of the SNR (kT up to 1.4 keV), while there is a cold (∼0.7-0.8 keV), broken-ring-like structure in the interior of the SNR (ring radius of ∼1′.4, ring centered at 18h41m18s, −04°56′13″). Such temperature variation is roughly anti-correlated with the plasma density and the X-ray brightness (see Fig. 1).
There are abundance enhancements of the O (Ne tied to O), Mg, Si, and S elements. These elements show an east-west elongated structure, which is less clear in Ar, possibly because of the large uncertainties of the [Ar] abundance and the fact that the average [Ar] is less than one. Another possibility is a degeneracy between [O] and $N_{\rm H}$ in the spectral fit, as the regions with higher [O] show slightly lower $N_{\rm H}$. Assuming that the gas is uniformly distributed in each bin, the average density is found to be 7.3$^{+0.5}_{-0.4}$ cm$^{-3}$, suggesting an ambient density $n_0 = n_{\rm H}/4 \sim 1.7$ cm$^{-3}$, consistent with the value obtained by Borkowski & Reynolds (2017, ∼2 cm$^{-3}$). Such consistency indirectly supports that our geometry assumption is reliable to some extent. The density is enhanced in a broken-ring-like structure (∼10 cm$^{-3}$), with an overall distribution similar to that of the X-ray brightness. Liu et al. (2017) suggested an interaction between the SNR and a molecular structure in the east, which may explain the larger column density $N_{\rm H}$ there.
- RCW 103 (Fig. 3): The average temperature of the X-ray-emitting plasma is kT = 0.63 keV. The temperature distribution is nearly uniform, except for a higher temperature in some boundary regions (outside the main shock sphere, likely related to high-speed ejecta clumps or a poor fit with the single-temperature model) and colder plasma in the north. We found that the O and Ne abundances are enhanced in RCW 103, while Borkowski & Reynolds (2017) obtained near-solar abundances for them using the solar abundances of Grevesse & Sauval (1998). The average density is 5.9 ± 0.2 cm$^{-3}$. The density distribution has a barrel shape. The gas is greatly enhanced near the southeastern boundary ($n_{\rm H} \sim 9$ cm$^{-3}$; 3′.2 from the SNR center, 0′.9 from the main shock boundary).
- N49 (Fig. 4): There is an overall temperature gradient from the west to the east, anti-correlated with the density. The hottest bin is in the west, with kT = 0.92 keV; its position is consistent with a protrusion, as shown in Fig. 1. The average density is 6.6 ± 0.3 cm$^{-3}$. There is a clear density gradient from the southeast (∼10 cm$^{-3}$) to the northwest (∼1 cm$^{-3}$). This explains why the X-ray emission is brightened in the southeast.
Global parameters
Using the spectral fit results, we calculate a few important parameters related to the SNRs' evolution and metals: the gas mass $M_{\rm gas}$, the metal mass $M_X$, the SNR age t, and the explosion energy $E_0$. These results and the X-ray flux $F_X$ in the 0.5-7 keV band are listed in Table 2. The masses of the X-ray-emitting gas $M_{\rm gas}$ are calculated with the fitted norm and the assumed geometry. Using a method similar to that for the density $n_{\rm H}$, we derived total gas masses of 46$^{+3}_{-2}$ M⊙ in Kes 73, 12.8 ± 0.4 M⊙ in RCW 103, and 200$^{+14}_{-10}$ M⊙ in N49. We note that the assumed geometry of the density distribution affects $M_{\rm gas}$ by a factor of a few. If the X-ray gas fills a larger fraction of the volume across the SNR ($1/12 < f < 1$), the derived $M_{\rm gas}$ could be slightly increased. Therefore, if f is assumed to be 1 (not likely for shell-type SNRs), we derive maximum hot gas masses of 61$^{+4}_{-3}$ M⊙, 18.1$^{+0.7}_{-0.5}$ M⊙, and 260$^{+17}_{-12}$ M⊙ for Kes 73, RCW 103, and N49, respectively.
Notes. The ". . . " sign indicates that the ejecta mass cannot be calculated because the abundance is lower than the solar or LMC value.
The metal masses are important parameters that can be compared with the supernova yields predicted by nucleosynthesis models, thereby testing those models. We obtain the mass-weighted average abundances [X] and the observed ejecta masses $M_X$ as shown in the third part of Table 2. The abundance values are very similar to the bin-averaged abundances, so they are insensitive to the emission volume assumptions. The total masses of the metals are obtained by summing up the metal masses in each bin. For element X, the mass is obtained as $M_X = \sum_i ([X]_i - [X]_{\rm ISM})\, f_m^X\, M_{{\rm gas},i}$, where the interstellar abundance is $[X]_{\rm ISM} = 1$ in our Galaxy and is equal to the LMC value for N49, and $f_m^X$ is the mass fraction of the element in the gas. The ages of the SNRs can be estimated from the electron temperature kT or from the ionization timescale τ. In the first method, the shock velocity is derived as $v_s = [16 kT_s/(3\mu m_{\rm H})]^{1/2}$, where $m_{\rm H}$ is the mass of the hydrogen atom and μ = 0.61 is the mean atomic weight for a fully ionized plasma. The relation between the shock velocity and the electron temperature holds in the case of temperature equilibrium between the different particle species (and this can be the case, considering the relatively high values of τ). The Sedov age is $t_{\rm sedov} = 2R_s/(5v_s)$. Using the averaged temperatures kT of these SNRs, the ages of Kes 73, RCW 103, and N49 are found to be 2.4 kyr, 2.1 kyr, and 4.9 kyr, respectively. These values are consistent with those obtained in previous expansion measurements (Carter et al. 1997; Borkowski & Reynolds 2017, for RCW 103 and Kes 73, respectively) and X-ray studies (Park et al. 2012; Kumar et al. 2014, for N49 and Kes 73, respectively). The X-ray emission of the three SNRs is characterized by under-ionized plasma. The shock age t of an SNR can be inferred from the ionization timescale $\tau = n_e t$ and the gas density ($n_e = 1.2 n_{\rm H}$) if the SNR is evolving in a uniform medium. We calculate a shock age in each bin and obtain average ages $t_{\rm shock}$ (ranges) of 0.9 (0.4-1.8) kyr for Kes 73, 1.8 (> 0.6) kyr for RCW 103, and 8.0 (> 0.8) kyr for N49. By comparing the $t_{\rm shock}$ values with the $t_{\rm sedov}$ values, one finds that the difference is smallest for RCW 103, but much larger for Kes 73 and N49, which are evolving in a very inhomogeneous environment (see the density distributions in Figs. 1 and 5). In a nonuniform medium, $t_{\rm shock}$ may deviate from the shock timescale. Moreover, the $t_{\rm shock}$ values can be influenced by the geometry assumption. Therefore, we suggest that $t_{\rm sedov}$ better represents the SNRs' true age t.
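The Sedov-age estimate is simple enough to check directly. In the sketch below, only RCW 103's mean temperature (0.63 keV) is quoted explicitly in the text; the Kes 73 and N49 temperatures and all three shock radii are assumed round numbers consistent with the quoted distances and angular sizes, and with these assumptions the quoted ages of roughly 2.4, 2.1, and 4.9 kyr are recovered.

```python
import numpy as np

# t_sedov = 2 R_s / (5 v_s), with v_s = sqrt(16 kT / (3 mu m_H))
mu, m_H = 0.61, 1.6726e-24                     # mean atomic weight; H mass (g)
keV, pc, kyr = 1.602e-9, 3.086e18, 3.156e10    # erg, cm, s

# (name, mean kT [keV], shock radius [pc]); only RCW 103's kT is quoted,
# the other temperatures and all radii are ASSUMED values.
for name, kT, R_pc in (("Kes 73", 0.85, 5.0), ("RCW 103", 0.63, 4.0), ("N49", 0.60, 9.0)):
    v_s = np.sqrt(16 * kT * keV / (3 * mu * m_H))   # cm/s
    t = 2 * R_pc * pc / (5 * v_s)
    print(f"{name}: v_s = {v_s / 1e5:.0f} km/s, t_sedov = {t / kyr:.1f} kyr")
```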
Discussion
The major goal of this paper is to explore which progenitor stars and which explosion mechanisms produce these SNRs and magnetars. The explosion energy and metal masses are two important parameters characterizing the explosion, while the distribution of metals provides information about the explosion (a)symmetries. On the other hand, the density distribution provides clues about the environment and even the mass-loss history of the progenitor star. In this section, we discuss these parameters in order to unveil the explosions and progenitors of magnetars.
Environment and clues about the progenitor
The density distributions are shown in Figs. 2, 3, 4, and 5, and the gas masses are listed in Table 2. The masses of the X-ray-emitting gas are ∼46 M⊙ and ∼200 M⊙ in Kes 73 and N49, respectively, indicating that the gas is dominated by the ISM. Kes 73 is possibly interacting with molecular gas in the east (Liu et al. 2017), and N49 is interacting with molecular clouds in the southeast (Banas et al. 1997; Otsuka et al. 2010; Yamane et al. 2018). The inhomogeneous ambient medium significantly influences the X-ray morphology of the SNRs. As the density increases, the X-ray emission is brightened.
Massive stars launch strong winds during their main-sequence stage and can clear out low-density cavities (e.g., Chevalier 1999). If the massive star is in a giant molecular cloud, the maximum size of the molecular-shell bubble increases linearly with the zero-age main-sequence stellar mass: $R_{\rm b} = 1.22\,M_{\rm ZAMS}/M_\odot - 9.16$ pc (Chen et al. 2013). It is likely that the molecular shells found near the two SNRs were swept up by their progenitors' winds. Therefore, taking the distance of the molecular gas from the SNR center (about the SNR radius), we can very roughly estimate the progenitor mass, which is 12 ± 2 M⊙ for Kes 73 and 15 ± 2 M⊙ for N49. We note here that the $R_{\rm b}$-$M_{\rm ZAMS}$ linear relationship was obtained from the winds of Galactic massive stars, and may not be valid for LMC stars with lower metallicity. Nevertheless, the derived mass for N49 agrees with the previous suggestion of an early B-type progenitor ($M_{\rm ZAMS} < 20\,M_\odot$), which created a Strömgren sphere surrounding N49 (Shull et al. 1985).
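Inverting the bubble-size relation gives the mass estimate directly; in the sketch below the shell distances are assumed round numbers of order the SNR radii (about 5 pc for Kes 73 at 8.5 kpc and about 9 pc for N49), which reproduce the quoted ∼12 and ∼15 M⊙.

```python
# Invert R_b = 1.22 (M_ZAMS / Msun) - 9.16 pc (Chen et al. 2013), with the
# SNR radius used as a proxy for the swept-up molecular shell distance.
def m_zams(R_b_pc):
    return (R_b_pc + 9.16) / 1.22

for name, R_pc in (("Kes 73", 5.0), ("N49", 9.0)):   # ASSUMED shell distances
    print(f"{name}: M_ZAMS ~ {m_zams(R_pc):.0f} Msun")
```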
The gas mass in RCW 103 is only ∼13 M⊙. Interestingly, the density distribution has a barrel shape, with the shell at a distance of ∼3 pc from the center (Fig. 3). Between position angles of ∼0° (north) and 100° (east), the density is reduced to 1/3-1/2 of that at other angles. In the opposite direction, the density is also relatively low. The density is largest at the southern and northern boundaries. This density distribution could explain why the SNR is elongated (expanding more freely) toward the low-density directions. It is likely that most of the X-ray-emitting gas has an ambient-gas origin, because the density enhancement is consistent with the distribution of molecular gas in the southeast, northwest, and west (Reach et al. 2006). The existence of molecular gas with solar metallicity (Oliva et al. 1999) ∼3-4 pc away from the SNR center suggests that the progenitor was not very massive ($M_{\rm ZAMS} < 13\,M_\odot$ if using the $R_{\rm b}$-$M_{\rm ZAMS}$ relationship). Otherwise, the molecular gas would have been either dissociated by strong UV radiation or cleared out by fast main-sequence winds.
Although it is likely that the density distribution reflects the ambient medium, there is still a possibility that the lower density in the northeast and southwest is a result of the pre-SN winds. For example, a fast wind driven toward the northeast and southwest may have cleared out two lower-density lobes. For a single star with $M_{\rm ZAMS} < 15\,M_\odot$, the red supergiant winds are generally slow (∼10 km s$^{-1}$) and the circumstellar bubble is small (< 1 pc; Chevalier 2005), which cannot explain the low-density lobes. If the progenitor star was in a binary system, the accretion outflow could be fast and bipolar. However, there is no observational evidence so far to support the progenitor being a binary system.
Explosion mechanism implied from the observed metals
A common characteristic among the three SNRs is that all of them seem to be O- and Ne-enhanced, and there is no evidence of overabundant Fe (average abundance across the SNR). N49 reveals clearly elevated [S], and Kes 73 shows slightly elevated [S], but [S] is sub-solar in most regions of RCW 103. Some regions with higher [O] show slightly lower $N_{\rm H}$. The degeneracy between [O] and $N_{\rm H}$ is difficult to break with the current data. Future X-ray telescopes with better spectroscopic capability and higher sensitivity may resolve the O lines and solve this problem.
The abundance ratios and masses are a useful tool to investigate the SN explosion, since different progenitor stars and different explosion mechanisms result in distinct ejecta patterns. Figure 6 shows the predicted abundance ratios and yields of the ejecta as a function of the initial masses of the progenitor stars, according to the one-dimensional CC SN nucleosynthesis models (Sukhbold et al. 2016; solar-metallicity model W18 for stars > 12 M⊙ and zero-metallicity model Z9.6 for 9-12 M⊙ by A. Heger). 6 We hereafter compare the observed abundance ratios and metal masses of the SNRs with those predicted by the nucleosynthesis models for CC SN explosions (see Figs. 7 and 8).

6 There is a large variation of abundance ratios at around 20 M⊙. As stated in Sukhbold et al. (2016) and Sukhbold & Woosley (2014), the transition from convective carbon core burning to radiative burning near the center at around this mass results in highly variable pre-SN core structures and, therefore, SN yields.
For the predicted abundances of Kes 73 and RCW 103, we take the shocked ISM into account by assuming that the SN ejecta are mixed with ISM of solar abundances. As a result, the predicted abundance patterns are flatter than the pure ejecta values shown in Fig. 6. The ratios between elements with (sub)solar abundances do not provide information on the nucleosynthesis models, as we cannot extract the ejecta components. Therefore, the Mg/Si ratio in Kes 73, and the Si/Mg and S/Mg ratios in RCW 103, should not be taken seriously.
It is not always true that all the ejecta are mixed with the X-ray-bright ISM, especially for young SNRs. Nevertheless, we take the mass-averaged abundances across the SNRs to minimize the problems caused by the nonuniform distribution of the metals. Moreover, the observed metal masses compared to the model values in Fig. 7 provide a clue about the mixing level. For Kes 73 and RCW 103 the ISM masses are only a few to ten times the typical ejecta mass. The low metal abundances (<2) indicate that the total metal masses are not very large and the progenitor star is probably not very massive, as yields generally increase with stellar mass.
Kes 73
By fitting the abundance ratios with all the 95 models in Sukhbold et al. (2016; W18 and Z9.6, progenitor masses between 9 and 120 M⊙), we find that the five best-fit models for Kes 73 are the 11.75 M⊙ model (minimal χ²_ν; see Fig. 7) [...] ratios. Therefore, it is possible that not all metals are detectable in the X-ray band. If the reverse shock has not reached the SNR center, the inner part of the metals could remain cold and the total metal masses could be underestimated. The location of the reverse shock (likely showing a layer of enhanced metal abundances) is not identified in Kes 73 or RCW 103. A possible reason is that the total metal masses are indeed too small to emit strong X-ray lines. The other possibility is that the reverse shock has already reached the SNR center. The ratio of the reverse shock radius R_r to the forward shock radius R_s is related to the radial distribution of the circumstellar medium (n_ISM ∝ r^-s) and of the ejecta (n_ejecta ∝ r^-n). For a uniform ambient medium (s = 0) and an ejecta power-law index n = 7, the radius of the reverse shock R_r can be estimated using the solutions by Truelove & McKee (1999) and an assumed ejecta mass of 5 M⊙. In this case, the reverse shock should have reached the SNR center. In the s = 2 case, Katsuda et al. (2018a) [...]. Given the large uncertainties in R_r/R_s, we consider that the progenitor mass obtained from the abundance ratios better represents the true value for Kes 73. Nevertheless, the observed O and Ne masses allow us to exclude progenitor models with a mass less than 11 M⊙.
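For orientation (a sketch of the standard machinery, not necessarily the exact prescription used above): in Truelove & McKee (1999) the shock trajectories are written in units of the characteristic radius and time,

R_ch = M_ej^(1/3) ρ_0^(-1/3),   t_ch = E^(-1/2) M_ej^(5/6) ρ_0^(-1/3),

so that, for a given ejecta mass (5 M⊙ here), explosion energy E, and ambient density ρ_0, the tabulated s = 0, n = 7 solution gives the reverse-shock radius R_r(t) directly; whether the reverse shock has reached the center then follows from evaluating that trajectory at the SNR age.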
In summary, the progenitor mass of Kes 73 is 11-15 M⊙, according to the models of Sukhbold et al. (2016). The mass of ∼12 M⊙ estimated from the molecular environment (see Sect. 4.1) is consistent with this range. Borkowski & Reynolds (2017) also obtained a relatively low mass of ≲20 M⊙ by comparing the observed metals with the nucleosynthesis model of Nomoto et al. (2013). Kumar et al. (2014) used two-temperature components to fit the X-ray data and obtained abundance ratios overlapping ours, but they obtained a larger progenitor mass of ≳20 M⊙ based on earlier nucleosynthesis models (Woosley & Weaver 1995; Nomoto et al. 2006) and the Wisconsin cross sections for the photo-electric absorption model.
RCW 103
The 11.75 M⊙ progenitor model is the best-fit model for the O/Mg and Ne/Mg ratios in RCW 103 (see Fig. 7), and the five best models are the 11.75, 17.6, 14.7, 12.0, and 17.4 M⊙ models. Here the Si/Mg and S/Mg ratios are not fitted. The explosion energy of RCW 103 (∼10^50 erg) is among the weakest of Galactic SNRs. Among the five well-fit models, two have progenitor masses ≤ 12 M⊙ and relatively weak explosion energies (E_0 = 2.6-6.6 × 10^50 erg), while the other models have a canonical explosion energy.
The total plasma mass of ∼13 M⊙ is only a few times larger than the ejecta mass expected for a CCSN from a normal explosion (∼5 M⊙). If all the ejecta had been heated by the shocks, we would expect to see high metal abundances. One possibility is that, in the normal SN explosion scenario, most of the ejecta are cool and not probed in the X-ray band. An alternative explanation is that the ejecta mass is indeed small because of significant fallback from a weak CCSN explosion (see discussion below).
The overall [Si] and [S] are subsolar, suggesting that the overall Si and S production is low in RCW 103. Although a few Si/S-rich bins are detected in the SNR (see Fig. 3), they may correspond to some pure ejecta knots (see also Frank et al. 2015, for Si and S ejecta knots). The distribution of [Mg] gives a clue to the missing Si/S problem. As shown in Figs. 3 and 5, Mg is oversolar in the SNR interior, but decreases to subsolar in the outer region. This implies that the heavier elements are distributed more toward the inner regions compared to the lighter elements such as O and Ne. The Si and S material may have smaller ejection velocities, and these layers might not have been heated by the reverse shock. A more extreme case is that the elements heavier than Mg may have fallen back onto the compact object due to a weak SN explosion.
The weak explosion energy of RCW 103 means that the total ejecta mass or the initial velocity of the ejecta should be smaller than in normal SNRs. According to simulations of CCSN explosions invoking a convective engine (Fryer et al. 2018), a weaker SN explosion results in a more massive neutron star and less ejecta due to fallback. Their simulations considered 15, 20, and 25 M⊙ cases. The weakest explosion (3.4 × 10^50 erg) of a 15 M⊙ star creates a 1.9 M⊙ compact remnant, and produces more O (0.29 M⊙) and Ne (0.064 M⊙) than observed in RCW 103. For a more massive star, a weak explosion would create a black hole. Therefore, we suggest that the progenitor star of RCW 103 has a mass of ≲13 M⊙, based on a comparison with the nucleosynthesis models and on the fact that the existence of nearby molecular shells disfavors a star more massive than 13 M⊙ (see Sect. 4.1). A two-temperature analysis of RCW 103 leads to a comparable progenitor mass and low explosion energy (Braun et al. 2019). However, the progenitor mass derived here is lower than the value of 18-20 M⊙ obtained by Frank et al. (2015) using an earlier nucleosynthesis model (Nomoto et al. 2006).
The low explosion energy, the small observed metal masses, and the low abundances of heavier elements such as Si and S consistently suggest that RCW 103 was produced by a weak SN explosion with significant fallback. It has been suggested that a supernova fallback disk may be a critical ingredient in explaining the very long spin period of 1E 161348−5055 in RCW 103 (De Luca et al. 2006; Li 2007; Tong et al. 2016; Rea et al. 2016; Xu & Li 2019). Our study supports this fallback scenario. In this case, the significant amount of fallback material increases the mass of the compact object. Therefore, we predict that 1E 161348−5055 is a relatively massive neutron star.
N49
N49 is located in the LMC, while the W18 models apply to stars with solar metallicity. Nevertheless, the lower metallicity mainly influences the mass loss of the stars and has less effect on the evolution of the core; therefore, the overall results from the core may be similar to those at solar metallicity for stars below 30 M⊙ (private communication with T. Sukhbold).
The measured Si abundance of ∼0.6 is clearly lower than the typical value of 0.87 in the LMC (Hanke et al. 2010); this means that the uncertainties of the abundance ratios could be larger than the measured values, given the variation of the LMC abundances. Therefore, we only show a comparison of the measured metal masses with the nucleosynthesis model in Fig. 8. The 13 M⊙ model gives a relatively good fit to the observed metal masses of O, Ne, and S. This puts a lower limit on the progenitor mass of N49, as lower mass stars produce less of these metals. The nucleosynthesis models predict that 15-17 M⊙ stars produce abundance patterns with enhanced O and Ne relative to Si, and also enhanced S relative to O (see Fig. 6), which is the case for N49. Although a ∼26 M⊙ star may also produce these abundance patterns, it is not very likely to be the progenitor of N49, as its SN yields would be over one order of magnitude larger than the observed metal masses. Therefore, it is likely that N49 has a progenitor with a mass between 13 and 17 M⊙. This is consistent with the suggestion that N49 has an early B-type progenitor (Shull et al. 1985), while the progenitor mass obtained by Park et al. (2003) is larger (∼25 M⊙), based on enhanced Mg (not as enhanced here) and a comparison with an earlier nucleosynthesis model (Thielemann et al. 1996).
Implication for the formation of the magnetars
While there is so far no consensus on magnetar progenitors, there is evidence that some of them originate from very massive progenitors (M_ZAMS > 30 M⊙; see Safi-Harb & Kumar 2013, and references therein). A piece of evidence for very massive progenitors comes from the study of the magnetar CXO J164710.2−455216 in the massive star cluster Westerlund 1. The age and the stars of the stellar cluster suggest that the progenitor star of this magnetar had an initial mass of over 40 M⊙ (Muno et al. 2006). However, Aghakhanloo et al. (2019) reduced the distance of the cluster from 5 kpc to ∼3.2 kpc using Gaia data release 2 parallaxes, which revises the progenitor mass of the magnetar to ∼25 M⊙. Another magnetar, SGR 1806−20, also in a massive star cluster, was likely created by a star with a mass greater than 50 M⊙ (Figer et al. 2005). On the other hand, there is evidence that magnetars and high-magnetic-field pulsars can come from lower-mass stars, in addition to the three magnetars studied here. The SNR Kes 75, which hosts the high-magnetic-field pulsar J1846−0258 (which shows magnetar-like bursts; Gavriil et al. 2008), was considered to have a Wolf-Rayet progenitor (Morton et al. 2007). However, the existence of a molecular shell surrounding it suggests a progenitor mass of 12 ± 2 M⊙ for Kes 75 (Chen et al. 2013). A similar low mass (8-12 M⊙) was obtained from far-IR observations and a comparison to the nucleosynthesis models (Temim et al. 2019). Moreover, the magnetar SGR 1900+14 in a stellar cluster is suggested to have a progenitor mass of 17 ± 2 M⊙ (Davies et al. 2009).

Magnetars are thus likely made from stars that span a large mass range. According to current knowledge about Galactic magnetars with progenitor information, most magnetars, though not all of them, seem to result from stars with M_ZAMS < 20 M⊙. Among the three magnetars in our study, N49 seems to have a higher progenitor mass (13-17 M⊙) than RCW 103 (≲13 M⊙).
The SN explosion energies of the three magnetars are not very high, ranging from 10^50 erg to ∼1.7 × 10^51 erg, supporting the possibility that their SN explosions were not significantly powered by rapidly spinning magnetars. In particular, RCW 103, the remnant hosting an ultra-slow magnetar with a rotational period of P = 6.67 hr, resulted from a weak explosion with an energy an order of magnitude lower than the canonical value. The SNR CTB 37B, which hosts CXOU J171405.7−381031, also resulted from a weak explosion (1.8 ± 0.6 × 10^50 erg; Blumer et al. 2019). Furthermore, CTB 109 (hosting 1E 2259+586) has a normal (Sasaki et al. 2004; Vink & Kuiper 2006) or even low explosion energy (2-5 × 10^50 erg; see Sánchez-Cruces et al. 2018, for a recent measurement and references therein). A low-to-normal SN explosion energy appears to be a common property of the known magnetar-SNR systems with extended thermal X-ray emission.
As pointed out in an earlier paper by Vink & Kuiper (2006), the relatively low or canonical explosion energy does not suggest that these three magnetars were born as very rapidly spinning millisecond pulsars. The rotational energy of a neutron star is E_rot ≈ 3 × 10^52 (P/1 ms)^-2 erg. A rapidly spinning magnetar loses its rotational energy quickly (∼10-100 s; Thompson et al. 2004). During the first few weeks, the magnetar energy goes into accelerating and heating the ejecta while the SN is optically thick, and at a later stage the energy is released through radiation (Woosley 2010). This suggests that millisecond magnetars can lose some of their energy to the SN kinetic energy (∼40% in the model by Woosley 2010, but this fraction could be highly uncertain). Dall'Osso et al. (2009) proposed that gravitational waves might also carry away the magnetar energy. The quickly rotating millisecond magnetar is regarded as a likely central engine for Type I superluminous supernovae (e.g., Woosley 2010; Kasen & Bildsten 2010). According to both theoretical studies and observations, superluminous SNe powered by millisecond magnetars should have significantly enhanced kinetic energies (2-10 × 10^51 erg; Nicholl et al. 2017; Soker & Gilkis 2017). The three SNRs studied in this paper, in addition to CTB 109 and CTB 37B, have much lower kinetic energies than those of Type I superluminous SNe, indicating that their origin is different.
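As a quick numerical illustration of the scaling above (a minimal sketch; the prefactor 3 × 10^52 erg is the same fiducial value as in the formula quoted in the text):

```python
# Rotational energy E_rot ~ 3e52 * (P / 1 ms)^-2 erg, evaluated for a
# millisecond pulsar and for the 6.67 hr period of 1E 161348-5055.
def e_rot_erg(period_ms: float) -> float:
    return 3e52 * (period_ms / 1.0) ** -2

print(f"P = 1 ms   -> E_rot ~ {e_rot_erg(1.0):.1e} erg")           # ~3e52 erg
print(f"P = 6.67 h -> E_rot ~ {e_rot_erg(6.67 * 3.6e6):.1e} erg")  # ~5e37 erg
```

Even at birth periods of a few tens of milliseconds the rotational reservoir already drops to ∼10^50 erg, consistent with the argument that none of these remnants requires a millisecond-magnetar engine.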
The distribution of the metals reveals some asymmetries: Kes 73 likely has enhanced O, Ne, and Mg abundances in the east (although this could also be a result of the degeneracy between [O] and N_H in the spectral fit), RCW 103 shows enhanced O abundance in the south, and N49 shows clearly enhanced O, Ne, Mg, and Si toward the east, and enhanced S in the south. The element distributions are not always anti-correlated with the density, so the dilution due to ejecta-gas mixing cannot be the main reason for the observed asymmetries, especially for Kes 73 and RCW 103. The nonuniform ejecta distributions indicate that the SN explosions should be aspherical to some extent.
With the above information, we can distinguish between the two hypotheses about the origin of magnetars: a dynamo origin or a fossil field origin. The dynamo model predicts that the SN explosion is energized by the millisecond pulsar, which has been ruled out for the three magnetars discussed in this paper. Furthermore, rapidly rotating stars are generally made from very massive stars (≤3 ms pulsars from ∼35 M⊙ stars; Heger et al. 2005). The SN rate is 5% for stars with an initial mass > 30 M⊙ and ∼10% for stars > 20 M⊙ (Sukhbold et al. 2016). These very massive stars are suggested to collapse to form black holes rather than neutron stars (Fryer 1999; Smartt 2009). Therefore, it is likely that only a small fraction of magnetars may be formed through this dynamo channel. We obtain a normal mass range (M_ZAMS < 20 M⊙) for the progenitor stars of the three magnetars, further disfavoring the dynamo scenario for them.
The fossil field origin appears to be a natural explanation for magnetars. The magnetic field strengths of massive stars vary by a few orders of magnitude. The magnetic field detection rate is ∼7% for both B-type and O-type stars, with magnetic fields from several hundred Gauss to over 10 kG (e.g., Grunhut et al. 2012; Wade et al. 2014; Schöller et al. 2017). As to the origin of the strong magnetic fields in magnetic stars, the debates are almost the same as for magnetars: dynamo or fossil. The latter origin has been supported by both theoretical studies and observations in recent years. Theoretical studies of magnetic stars and magnetars have shown that stable, twisted magnetic fields (poloidal fields above the surface plus internal toroidal fields) can evolve from random initial fields (Braithwaite & Spruit 2004; Braithwaite 2009). Recent observations support the fossil field origin (Neiner et al. 2015), because a dynamo origin would lead to a correlation between the magnetic field strength and the stellar rotation speed, which is not observed. It has even been suggested that massive stars with higher magnetic fields rotate more slowly, likely due to magnetic braking (Shultz et al. 2018). Fossil magnetic fields of the stars are descendants of the seed fields of the parent molecular clouds (Mestel 1999). After the death of the stars, the neutron stars may also inherit the magnetic fields from these stars.
In our Galaxy, ten magnetars have been found in SNRs. Among the 295-383 known Galactic SNRs (Ferrand & Safi-Harb 2012; Green 2014, 2017), around 80% are of CC origin (0.81 ± 0.24; Li et al. 2011). This means that ∼3%-4% of CC SNRs are found to host magnetars. This fraction is slightly smaller than the incidence fraction of magnetic OB stars with magnetic fields over a few hundred Gauss (∼7%), but consistent with the fraction of massive stars with higher fields (∼3% with B > 10^3 G; Schöller et al. 2017). Therefore, our study supports the fossil field origin as an important channel to produce magnetars, given the normal mass range (M_ZAMS < 20 M⊙) of the progenitor stars, the low-to-normal explosion energies of the SNRs, and the fraction of magnetars found in SNRs. Although our current study favors the fossil field origin and disfavors the dynamo origin for the three magnetars, we do not exclude the possibility that there might be more than one channel to create magnetars.
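The ∼3%-4% figure follows directly from the numbers quoted above; a one-line check (a sketch of the arithmetic only, ignoring the uncertainty on the CC fraction):

```python
# Fraction of core-collapse SNRs hosting magnetars: 10 magnetar-SNR
# associations among 295-383 Galactic SNRs, of which ~81% are CC.
for n_snr in (295, 383):
    n_cc = 0.81 * n_snr
    print(f"N_SNR = {n_snr}: 100 * 10 / {n_cc:.0f} = {100 * 10 / n_cc:.1f}%")
# -> 4.2% and 3.2%, i.e., the ~3%-4% quoted in the text.
```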
Conclusions
We have performed a spatially resolved X-ray study of the SNRs Kes 73, RCW 103, and N49, aiming to learn how their magnetars [...]
"year": 2019,
"sha1": "c6b184e66bf72bfd0bbd489731109a80524fe31a",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2019/09/aa36002-19.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "97105493759eaed827aae8aff0ba47d9f2429b3f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Reduced-Graphene-Oxide with Traces of Iridium or Gold as Active Support for Pt Catalyst at Low Loading during Oxygen Electroreduction
Chemically-reduced graphene-oxide-supported gold or iridium nanoparticles are considered here as active carriers for dispersed platinum, with an ultimate goal of producing improved catalysts for electroreduction of oxygen in acid medium. Comparison is made to the analogous systems not utilizing reduced graphene oxide. High electrocatalytic activity of platinum (loading up to 30 µg cm^-2) dispersed over the reduced-graphene-oxide-supported Au (up to 30 µg cm^-2) or Ir (up to 1.5 µg cm^-2) nanoparticles toward reduction of oxygen has been demonstrated using cyclic and rotating ring-disk electrode (RRDE) voltammetric experiments. Among important issues are possible activating interactions between gold and the support, as well as the presence of structural defects existing on the poorly organized graphitic structure of reduced graphene oxide. The RRDE data are consistent with decreased formation of hydrogen peroxide.
Introduction
There has been growing interest in the field of oxygen electroreduction, particularly with respect to potential applications in the science and technology of low-temperature fuel cells (1-11). Obviously, many efforts have been made to develop suitable alternative electrocatalysts efficient enough to replace electrocatalysts based on scarce strategic elements such as platinum-group metals (5,6,8,9,12,13). Despite intensive research in the area, there are still a number of fundamental problems to be resolved, and practical oxygen reduction catalysts still utilize systems based on platinum.
The O2-reduction electrocatalysts are typically nanocomposite materials utilizing metal nanoparticles bearing the active sites dispersed on suitable supports. While exhibiting long-term stability, a useful support should facilitate dispersion, provide easy access of reactants, and assure good electrical contact with active sites. In spite of limitations related to durability, carbon nanoparticles of approximately 20-50 nm diameter (e.g. Vulcan XC-72R) are commonly utilized as supporting materials. Because of their high specific surface area and excellent thermal, mechanical and electrical properties, graphene and graphene-based materials (14-16) have recently been considered as supports for catalysts (17-21). Under such conditions, the parasitic effects related to agglomeration, and thus degradation of catalytic nanoparticles, are likely to be largely prevented.
In the present work, we consider the chemically-reduced-graphene-oxide-supported gold or iridium nanostructures as carriers for dispersed Pt nanoparticles as catalytic systems for the electroreduction of oxygen in acid medium (0.5 mol dm^-3 H2SO4). Among important issues is the ability of the proposed carriers to act as systems effectively inducing decomposition of the undesirable hydrogen peroxide intermediate (5,22). The latter problem is expected to become an issue when the catalytic platinum is utilized at low loadings. Here, we propose to decorate the graphene-based carriers with gold nanoparticles (loading, 30 µg cm^-2) or with trace amounts of iridium (loading, 1-2 µg cm^-2). The usefulness of Au nanostructures during the reduction of oxygen has recently been demonstrated (23). Here, application of inorganic Keggin-type heteropolymolybdates (PMo12O40^3-) as capping ligands (capable of chemisorbing on both gold and carbon substrates (24-31)) facilitates deposition, nucleation, stabilization and thus controlled growth of gold nanoparticles on surfaces of both Vulcan and graphene nanostructures. Furthermore, we have utilized the so-called reduced graphene oxide which, contrary to conventional graphene, still contains oxygen functional groups despite being subjected to the chemical reduction step (14-16). By analogy to graphene oxide, the existence of oxygen groups in the plane of carbon atoms of reduced graphene oxide not only tends to increase the interlayer distance but also makes the layers somewhat hydrophilic. Furthermore, during fabrication of the catalytic systems (in acid medium), the adsorbed polymolybdates (9,25) are likely to bind gold nanoparticles via the oxygen or hydroxyl groups on graphene and Vulcan surfaces. Finally, we explore here the reduced-graphene-oxide-based carriers decorated with catalytic iridium. It is noteworthy that iridium, even at trace levels, has been found to exhibit high reactivity toward the reductive decomposition of hydrogen peroxide (32). As a rule, the electrocatalytic diagnostic experiments described herein involve comparative measurements utilizing commonly-used Vulcan (carbon) supports as carriers for Pt nanoparticles deposited at the same loadings (typically 15-30 µg cm^-2) as in the case of hybrid systems with the reduced graphene oxide. It is apparent from the diagnostic cyclic voltammetric and rotating ring-disk measurements that the systems utilizing the reduced-graphene-oxide supports decorated with Au nanoparticles or traces of iridium can act as active matrices for Pt catalysts, thus forming potent O2-reduction electrocatalytic systems. In particular, the proposed systems have exhibited higher electrocatalytic currents and produced lower amounts of the undesirable hydrogen peroxide intermediate during oxygen reduction. The enhancement effect is particularly found in the high potential range (0.8-1.0 V vs. RHE). On the whole, the combined effect of the high surface area and electrical conductivity of reduced graphene oxide should also contribute to the overall enhancement effect.
Experimental
All chemicals were analytical-grade materials and were used as received. Solutions were prepared from distilled and subsequently deionized water. They were deoxygenated by bubbling with ultrahigh-purity nitrogen. Experiments were carried out at room temperature (22 ± 2 °C).
The 5% Nafion-1100 solution was purchased from Aldrich. Platinum black nanoparticles were obtained from Alfa Aesar. Sulfuric acid was from POCH (Poland). Graphene oxide sheets of 300-700 nm sizes (thickness, 1.1 ± 0.2 nm) were from Megantech. Reduced graphene oxide (rGO) was obtained using sodium borohydride as reducing agent at 80 °C according to the procedure described earlier (33).
The syntheses of phosphomolybdate-modified gold nanoparticles supported onto Vulcan XC72R carbon and reduced graphene oxide matrices were performed in the analogous manner described earlier (26-30) but in the presence of an appropriate carbon support. A stoichiometric volume of freshly prepared aqueous sodium tetrahydroborate (NaBH4) was added to the phosphomolybdate-functionalized carbon supports in order to transform the oxidized H3PMo12O40 adsorbates into the partially reduced H3[H4P(MoV)4(MoVI)8O40] heteropolyblue forms. To obtain a gold loading on the level of 30 wt% of Au on the appropriate heteropolyblue-modified carbon, an equivalent volume of aqueous 7.5 mmol dm^-3 chloroauric acid (HAuCl4) solution was added to the respective suspension. As a rule, appropriate amounts of the resulting catalytic inks were dropped onto surfaces of glassy carbon electrodes to obtain loadings of gold nanoparticles equal to 30 µg cm^-2.
Electrode layers were deposited on glassy carbon disk electrodes by introducing (by dropping) appropriate volumes of inks containing catalytic nanoparticles and using 2-propanol and Nafion® (20% by weight) as solvent and binder, respectively.
All electrochemical measurements were performed using CH Instruments (Austin, TX, USA) 760D workstations in a three-electrode configuration. The reference electrode was the K2SO4-saturated Hg2SO4 electrode, and a carbon rod was used as a counter electrode. As a rule, the potentials reported here were recalculated and expressed vs. the Reversible Hydrogen Electrode (RHE). Glassy carbon disk (geometric area, 0.071 cm^2) working electrodes were utilized as substrates. The rotating ring-disk electrode (RRDE) working assembly was from Pine Instruments; it included a glassy carbon (GC) disk and a Pt ring. The radius of the GC disk electrode was 2.5 mm, and the inner and outer radii of the ring electrode were 3.25 and 3.75 mm, respectively.
Morphology of the samples was assessed using a Libra 120 EFTEM transmission electron microscope (Carl Zeiss) operating at 120 kV. The Raman spectra were collected with a confocal Raman microscope (model DRX, Thermo Scientific) using an excitation laser with a wavelength of 532 nm.
Results and Discussion
Physicochemical Identity of Graphene Nanostructures

Graphene oxide, GO, and partially reduced graphene oxide, rGO, contain various carbon-oxygen groups (hydroxyl, epoxy, carbonyl, carboxyl), in addition to the large population of water molecules still remaining in the reduced samples. Independent elemental analysis based on the C 1s and O 1s spectra from XPS measurements showed that the oxygen content in rGO was in the range from 8.6 to 12.1 at%; the C-to-O ratio was on the level of 7.1-10.3. When compared to the analogous parameters of the commercially available GO, the oxygen content and the C-to-O ratio were more than three times lower and more than three times higher, respectively. Furthermore, the Raman spectra show two large peaks in the range of 1300-1600 cm^-1: one peak near 1350 cm^-1, the D band, originating from the amorphous structures of carbon, and the second one close to 1580 cm^-1, the G band, reflecting the graphitic structures of carbon. The intensities of the G and D bands in rGO, relative to the analogous bands in GO, are lower and higher, respectively. This result implies the presence of interfacial defects as well as a lower degree of organization of the graphitic structure of rGO relative to GO. The phosphomolybdate (H3PMo12O40) modified Au nanoparticles (supported and unsupported) were characterized using Transmission Electron Microscopy (TEM). It is apparent from Figure 1 that, while unsupported gold particles (Figure 1A) have diameters that are fairly uniform in the range between ca. 30 and 40 nm, the rGO-supported particles (although slightly larger and less uniform) have comparable sizes, typically ranging from 30 to 50 nm (Figure 1B). On mechanistic grounds, gold nucleation may occur at the rGO "defect" sites, including surface polar groups and polyoxometallate adsorbates. It is reasonable to expect that the partially reduced (heteropolyblue) PMo12O40^3- sites induce generation of somewhat larger gold nanoparticles. Figure 1C illustrates a TEM image of iridium nanoparticles generated onto the rGO-SiO2 support. Among important features are the small size (less than 2 nm) and the intended very low loading (<2 µg cm^-2).
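The quoted oxygen contents and C-to-O ratios are mutually consistent if, to first approximation, the rGO is treated as containing only C and O; a quick check (a sketch that ignores residual H, N, and adsorbed water):

```python
# If only C and O are counted, C/O = (100 - at%O) / at%O.
for o_at in (8.6, 12.1):
    print(f"O = {o_at:4.1f} at% -> C/O = {(100 - o_at) / o_at:.1f}")
# O =  8.6 at% -> C/O = 10.6 (quoted upper value: 10.3)
# O = 12.1 at% -> C/O = 7.3  (quoted lower value: 7.1)
```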
Reduction of O2 at Pt Nanoparticles Deposited onto rGO-Supported Au
The rGO-supported Au nanoparticles are obviously less active than conventional Vulcan-supported Pt during electroreduction of oxygen under the conditions of the RRDE voltammetric diagnostic experiments at comparable loadings (30 µg cm^-2). In the present work, we also disperse (onto rGO-supported Au) bare Pt nanoparticles (sizes, 7-8 nm). But comparison to the commercially available Vulcan-supported Pt is not straightforward here because such carbon-supported Pt nanoparticles have sizes on the level of a few (3-4) nm, whereas our Au nanoparticles are much larger (40-50 nm diameters). As mentioned above, we have dispersed the commercially available unsupported Pt nanoparticles (sizes, 7-8 nm) at loadings of 30 µg cm^-2 over two different supports or catalytic systems considered here: (a) simple (bare) gold nanoparticles, and (b) reduced graphene oxide (rGO) supported gold nanoparticles. It is clear from the RRDE experiments (Figure 2A) that the disk currents are somewhat higher during the reduction of oxygen at Pt nanoparticles deposited onto the rGO-supported gold nanostructures relative to the performance of Pt deposited onto bare gold nanoparticles. In the case of Pt nanoparticles deposited onto the rGO-supported gold, the ring currents (Figure 2B) are also lower, which is consistent with less pronounced formation of hydrogen peroxide. Figure 2C illustrates the percentage of H2O2 (%H2O2) formed during reduction of oxygen under the conditions of the RRDE voltammetric experiments of Figure 2A. The actual calculations have been done using the equation given below:

%H2O2 = 200 (I_ring/N) / (I_disk + I_ring/N)   [1]

where I_ring and I_disk are the ring and disk currents, respectively, and N is the collection efficiency (equal to 0.39). The results clearly show that the production of H2O2 is lowest for the system utilizing gold nanoparticles supported onto chemically-reduced graphene oxide. The overall number of electrons exchanged per O2 molecule (n) was calculated as a function of the potential using the RRDE voltammetric data of Figure 2A,B and the equation below:

n = 4 I_disk / (I_disk + I_ring/N)   [2]

The corresponding number of transferred electrons (n) per oxygen molecule (Figure 2D) involved in the oxygen reduction was obviously higher in the case of the system utilizing Au nanoparticles supported onto rGO.
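For readers who want to reproduce these diagnostics from raw RRDE data, Eqs. [1] and [2] translate directly into code; the currents below are hypothetical magnitudes used purely for illustration (sign conventions differ between instruments, so the cathodic disk and anodic ring currents are taken here as positive magnitudes):

```python
# RRDE diagnostics: percent H2O2 (Eq. [1]) and electrons per O2 (Eq. [2]),
# with the collection efficiency N = 0.39 quoted in the text.
N = 0.39

def percent_h2o2(i_disk: float, i_ring: float) -> float:
    return 200.0 * (i_ring / N) / (i_disk + i_ring / N)

def n_electrons(i_disk: float, i_ring: float) -> float:
    return 4.0 * i_disk / (i_disk + i_ring / N)

# Example with hypothetical currents (same arbitrary units for both):
print(percent_h2o2(1.0, 0.01))  # ~5.0 (%)
print(n_electrons(1.0, 0.01))   # ~3.90
```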
Reduction of O2 at a Hybrid Catalyst of Pt20%-CNTs Admixed with Ir2%-rGO-SiO2

Figure 3 illustrates representative (A) disk (voltammetric) and simultaneous (B) ring (upon application of 1.2 V) steady-state currents recorded during the reduction of oxygen (in O2-saturated 0.5 mol dm^-3 H2SO4 at 1600 rpm rotation rate and 10 mV s^-1 scan rate) using the hybrid catalyst composed of Pt20%-CNTs admixed (at 1:1 ratio) with Ir2%-rGO-SiO2 (red line) and the analogous Ir-free system (black line). It is noteworthy that the loadings of Pt and Ir are on the levels of 15 and 1.5 µg cm^-2, respectively. Under the hydrodynamic voltammetric conditions of Figure 3, while the disk current densities are roughly comparable for all Pt-containing systems (Figure 3A), different ring currents (Figure 3B) are produced, clearly implying formation of lower amounts of the undesirable H2O2 intermediate in the case of the system containing traces of iridium. Furthermore, when the percent values for hydrogen peroxide intermediate formation are compared (Figure 3C), it becomes apparent that they are particularly low (below 1%) in the presence of traces of Ir (rGO-SiO2-supported) at positive potentials (0.6-0.9 V vs. RHE). Finally, the electroreduction of oxygen now proceeds at more positive potentials (Figure 3A) in spite of the low Pt loading and the ultra-low addition of Ir. The above observations are of potential importance to the development of catalytic systems for low-temperature fuel cells.
Conclusions
This study clearly demonstrates that chemically-reduced graphene oxide, when decorated with gold or iridium nanostructures, acts as a robust and activating support for dispersed Pt nanoparticles during electrocatalytic reduction of oxygen in acid medium (0.5 mol dm^-3 H2SO4). For the same loading of catalytic gold nanoparticles (30 µg cm^-2), application of the reduced graphene oxide support results in formation of lower amounts of the undesirable H2O2 intermediate. Moreover, the onset potential for the oxygen reduction has been the most positive (0.9 V) in the case of the system utilizing reduced graphene oxide. Synergistic effects and activating interactions between catalytic metal nanoparticles and nanostructured graphene supports cannot be excluded here, with respect to lowering the dissociation activation energy for molecular O2 through accelerating charge transfer from the metal in the presence of graphene and by reducing the stability of the H2O2 intermediate species.
We have also demonstrated here that the coexistence of carbon-nanotube-supported Pt nanoparticles and reduced-graphene-oxide-supported iridium nanostructures at low loadings (15 and 1.5 µg cm^-2, respectively) yields a highly active electrocatalytic system for the electroreduction of oxygen in acid medium. The enhancement effect coming from the addition of traces of iridium (supported onto the silica-doped reduced graphene oxide) may originate from the high ability of Ir to induce decomposition of the undesirable hydrogen peroxide intermediate. The presence of carbon nanotubes may improve charge distribution at the electrocatalytic interface. Further research is needed to elucidate possible specific interactions.
Our preliminary results with platinum dispersed over the reduced-graphene-oxide-supported gold or iridium imply that such a hybrid catalyst (once optimized with respect to minimizing particle sizes and loadings of Pt) could be of interest in fuel cell science and technology.
"year": 2018,
"sha1": "c0cd05f25cb8d621c0922b7ea2bed72de931383b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1805.03147",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c0cd05f25cb8d621c0922b7ea2bed72de931383b",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Physics",
"Chemistry"
]
} |
Omnidirectional Jump of a Legged Robot Based on the Behavior Mechanism of a Jumping Spider
To find a common approach for the development of an efficient system that is able to achieve an omnidirectional jump, the jumping kinematics of a legged robot is proposed based on the behavior mechanism of a jumping spider. To satisfy the diversity of motion forms in robot jumping, a mechanical leg with 4 degrees of freedom (4DoFs) is designed. Taking the change of joint angles during the acceleration phase of the jumping spider as inspiration, a redundant constraint for solving the kinematics is obtained. A series of experiments on three types of jumping (vertical, sideways and forward jumping) is carried out, while the initial attitude and path planning of the robot are studied. The proposed jumping kinematics is verified on the legged robot experimental platform, and the added redundant constraint is verified as being reasonable. The results indicate that the jumping robot can maintain stability and complete the planned jumping task, and that the proposed spider-inspired jumping strategy can easily achieve an omnidirectional jump, thus enabling the robot to avoid obstacles.
Introduction
Compared with walking [1] and crawling robots [2], the jumping robot can walk, run, and jump [3]. Jumping locomotion is characterized by isolated footholds and a powerful, explosive jumping force [4], which contribute to the quick and effective locomotion of bio-inspired robots as they stride over obstacles several times the size of their bodies, cross gullies several times the length of their own step, and avoid danger in time. For the jumping robot, the key point is usually planning the trajectory of its center of gravity (CoG) to achieve the jumping process reasonably, and to realize multi-directional jumping in various environments. The jumping robot can realize the jumping process by controlling the take-off speed, attitude and landing stability. When the jumping robot takes off, it achieves a certain acceleration because its feet generate enough reactive force by impacting the ground. Then, the robot adjusts its posture in the air, and finally lands smoothly. Hence, the study of the jumping process is an important part of jumping robot research. The jumping robot realizes the physical design and the jump movements by imitating animal body structures or biological movement mechanisms. Bionics [5] includes structure bionics, motion bionics and control bionics [6]. Nowadays, research on bionic robots based on bionic structures is abundant. There are robots that imitate mammals, such as the bionic cheetah and bionic kangaroo [7,8], and robots that imitate amphibious creatures, such as bionic frogs and bionic toads [9,10]. Additionally, there are robots that imitate arthropods, such as bionic spiders, bionic locusts, bionic cockroaches and imitation water insects [11-15]. The mammal-like robot has the characteristics of fast running and jumping, smooth motion and a high energy utilization rate. However, there is little research on it because of its large volume and heavy weight. The amphibious robots are almost all bionic frog robots. Such a robot uses intermittent motion so that it may control its posture, in addition to being able to effectively control energy accumulation and release. Therefore, this type of robot is characterized by flexible jumping, powerful explosive force and environmental adaptability [16]. Compared with these types of robots, the arthropod robot has the advantages of fast acceleration, low energy consumption and high energy efficiency in the process of jumping, because of its small size, light weight and good bounce ability [17].
The structure model of the jumping robot can be divided into a single-legged model and a multi-legged model. In terms of the single-legged jumping robot, based on the high mobility requirements of jumping, some scholars have proposed a single-leg motion mode driven by hydraulics [18,19]. The influence of the robot's posture and of ground impact on its structure in the vertical jump motion has been analyzed, and the overall stability evaluated, to ensure that the robot can complete the take-off task. Ge et al. [20-22] proposed a scheme based on the jumping mechanism of the kangaroo. They studied the jumping movement of the kangaroo and simplified its body into a single-legged model for study and discussion. They then proposed three models: a rigid-body jumping model, a compliant jump mechanism model and a rigid-flexible hybrid model. The motion mechanisms of the three models were analyzed to see which one was most successful in making the robot jump smoothly. However, the single-legged model is a naturally unstable system, which cannot remain in a stationary state. Therefore, single-legged robots have braced structures to maintain stability. Furthermore, it is necessary to adjust the initial attitude angle to achieve a smooth jump. In addition, the single-legged robots only realized the bionic jumping function, and were less involved in the overall movement mechanism. Animals in nature predominantly have multiple legs. The biped robot, the quadruped robot and the six-legged robot are the main forms of multi-legged robots [23-25]. Fumitaka et al. [26] have designed a quadruped robot, which can jump even in rugged external environments and can achieve the task of crossing obstacles. Some scholars [27-29] have proposed a rigid model imitating the motion principle of the cricket by studying the robot's ability to jump and kick; the robot can adjust its own dynamic balance while it is jumping. Thus, compared with the single-legged robot, the multi-legged robot has better overall stability.
Currently, most research focuses on the single-direction jump, especially the vertical and forward jump. A vertical jump analysis based on a hydraulic drive has been proposed [16,30]. Surmounting ability and jumping efficiency are analyzed, and then the vertical jumping form is optimized. Thanhtam et al. [31] have proposed a new quadruped robot structure to accomplish the task of vertical or forward jumps. The joints of the robot are driven hydraulically to meet the requirements of torque, compactness, speed and impact resistance. Hyunsoo et al. [32] proposed a quadruped jumping robot based on a servo motor drive. The legs are equipped with gears, springs and other components, and the robot can complete a high-jump task through two kinds of movement: spring compression and a gear drive mechanism.
However, there has been no in-depth discussion of omnidirectional jumping. In other words, in the process of jumping, the robot jumps mainly through a fixed jumping form, and it only achieves a single-direction jump of a given height and distance. The robot must adjust its posture and jumping direction when trying to avoid an obstacle. The initial posture must be adjusted first if the robot is going to change the jumping direction, so the efficiency is greatly reduced. Hence, jumping in multiple directions without changing the initial pose of the robot becomes the key problem to be solved. In this paper, a bionic six-legged robot structure with the ability of omnidirectional jumping is proposed, which is based on the jumping mechanism of jumping spiders. The omnidirectional jumping form has been developed by observing the jumping form, jumping posture and leg stretch of jumping spiders, which allows the robot to avoid and cross obstacles in all directions. To verify the rationality of the jumping form proposed in this paper, a series of experiments on a six-legged robot is carried out, and the results show that the proposed multi-direction jumping form has outstanding performance, providing a good theoretical basis for jumping research.
Bio-Inspiration and Materials
Arthropods are the largest group of animals, including over one million species of invertebrates and accounting for almost 84% of all species. Members of the arthropod family are diverse; they can be found from the abyssal sea to inland areas. Due to the differentiation between arthropod bodies, and the diversity of physical changes, which give arthropods a highly adaptive capacity, they have adapted to all sorts of surroundings, maintaining themselves even under the most rigorous conditions. After hundreds of millions of years of evolution, arthropods have become very flexible in their ability to move [33]. Arthropods can hunt prey quickly or avoid predators, and when they run into obstacles or ravines, they can quickly run and jump on the terrain to avoid obstacles. Therefore, the agility of arthropods can provide inspiration for the exploration of jumping robots [34,35]. The arthropod body is composed of several parts with different structures and functions; the body is symmetrical; the feet are evenly distributed on both sides of the body; and the legs can be coordinated with each other. This not only allows flexibility of movement, but also the ability to jump in any direction and remain flexible in various terrains [36]. Jumping spiders are the most common of these arthropods.
Since the jumping spider has multiple joints on each leg, the legs can be stretched long enough to move around or to let the spider camouflage itself from prey. Especially due to its jumping ability, the jumping spider can quickly avoid predators and overcome obstacles. In this paper, we mainly focus on the jumping spider and study its motion structure and morphology. When the jumping spider jumps, the feet fall and the legs stretch, the effective leg length increases rapidly, and the spider accelerates and pushes off the ground. When the jumping spider jumps sideways, the front legs shrink and the rear legs extend in the jumping direction, and the spider has a certain attitude angle. The spider accelerates strongly when its legs begin to extend. When the front legs reach the maximum effective length, the spider reaches the take-off velocity needed to cross the barrier, and the spider jumps off the ground. When the jumping spider jumps forward, the effective length of the front legs is constant in the direction of the jump, the effective length of the rear legs decreases, and the spider has a certain attitude angle. The spider can achieve high acceleration when its legs extend to full extension. When the hind legs of the spider reach the maximum effective length, the spider reaches the take-off velocity needed to cross the barrier, and the spider jumps off the ground.
Figure 1 is a schematic diagram of the whole body and a sketch of the spider's leg joints and leg structure. Considering the characteristics of the coxal joints of spiders, the leg structure of the robot is simplified; the blue solid lines indicate the legs, and the black circles indicate the joints. The coxal joint can move freely in any direction, so that the spider can jump in any direction. The robot model is designed by observing the schematic diagram of the jumping spider and the spider leg, and the robot experimental platform is shown in Figure 2a. To simplify the design, the spider robot is designed with six legs rather than eight. Compared with other hexapod robots that have been studied, each leg of the robot in this paper has 4DoFs, which increases the flexibility of the robot during activity. The width and length of the body are 134 mm and 228 mm, respectively; the length of the patella-tibia is 120 mm; the length of the tibia-metatarsus is 120 mm; the length of the tarsus is 160 mm; and the maximum effective length of the robot leg is 350 mm. The robot's six legs are symmetrically distributed on both sides of the fuselage; the angle between the legs of the left foreleg (LF), right foreleg (RF), left hind leg (LH) and right hind leg (RH) (as seen in Figure 2a) and the axis direction of the robot is 60°, while the legs of the left middle leg (LM) and right middle leg (RM) (as seen in Figure 2a) are perpendicular to the fuselage. The robot adopts the design principles of a bionic structure and light weight. The total weight is only 4 kg, since the body, shank, foot and connectors of the digital motors are all made from aluminum alloy. Each leg has four rotating joints: the coxal joint, the complex femur-patella joint, the tibia-metatarsus joint and the metatarsus-tarsus joint. The torque of each joint is provided by a digital motor. The rotation of the digital motor at the coxal joint (θ1) enables the leg to swing back and forth, providing forward power and controlling the step size. According to the robot's mechanical structure, the minimum value of θ1 is −30° and the maximum value of θ1 is +30°. The rotation of the digital motors at the femur-patella joint (θ2), the tibia-metatarsus joint (θ3) and the metatarsus-tarsus joint (θ4) achieves leg extension and controls the height of the body. The value of θ2 ranges from −90° to +75°, the value of θ3 ranges from 0° to +150°, and the range of θ4 varies with θ2 and θ3. In this paper, the robot is designed to complete tasks that require a maximum jumping height of 100 mm and a maximum jumping distance of 250 mm, while following the form of omnidirectional jumping. Furthermore, the robot can walk quickly and turn in any direction. The D-H kinematic model was established following the convention of John J. Craig, as shown in Figure 2b, and the kinematic model parameters are shown in Table 1.
Table 1. Kinematic model parameters.
The architecture of the control algorithm of the robot is shown in Figure 3. During the jumping process of the robot, the whole system is composed of the time signal, path planning, foot trajectory planning, the experimental prototype and the sensor signals. As the input signal of the whole system, time provides the drive for the control system. The foot trajectory planning and path planning are mainly used to plan the jumping path of the robot. When the robot jumps, the body jump trajectory is given and, correspondingly, we can get the trajectory of the robot foot. Then, the kinematic displacement of each joint is calculated by inverse kinematics. The robot can realize the jumping motion through reasonable path planning. The robot is equipped with various sensors that are mounted inside the body. An attitude sensor is mounted to monitor the motion state of the robot in real time. A displacement sensor and a torque sensor are arranged at each leg joint to detect the real-time position and torque. A force sensor is used to monitor the load on each leg. Figure 3 is the algorithm framework, from which the control process of the jumping robot can be clearly understood. At present, we start with position control, which is consistent with the control mode of the robot platform we are building and is easy to implement quickly. The closed position loop takes into account the internal torque loop, and the next step will be studied from the dynamics.
Forward Kinematics Analysis
Based on the D-H model established in Section 2, with the length of each link and the rotation angle of each joint known in the base coordinate system of the robot, the trajectory equation of the foot is derived as Equation (1), where 0P_tip = (0P_tipX, 0P_tipY, 0P_tipZ) is the position vector of the robot's foot relative to the reference coordinate system of the coxal joint.
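As an illustration of the forward-kinematics map of Equation (1), the sketch below writes out one 4DoF leg. The link lengths follow the text (patella-tibia 120 mm, tibia-metatarsus 120 mm, tarsus 160 mm), but the joint-angle sign conventions and the coincidence of the coxal and femur-patella joint origins are assumptions made for illustration, not the exact Table 1 parameters:

```python
import numpy as np

# Hedged forward-kinematics sketch for one 4-DoF leg of the robot.
L2, L3, L4 = 120.0, 120.0, 160.0  # mm, link lengths from the text

def foot_position(th1, th2, th3, th4):
    """Foot position 0P_tip in the coxal-joint frame (angles in rad)."""
    a2 = th2            # absolute pitch of the patella-tibia link (assumed)
    a3 = a2 - th3       # tibia-metatarsus link folds back by th3 (assumed)
    a4 = a3 - th4       # tarsus link folds back by th4 (assumed)
    rho = L2 * np.cos(a2) + L3 * np.cos(a3) + L4 * np.cos(a4)  # radial reach
    z = L2 * np.sin(a2) + L3 * np.sin(a3) + L4 * np.sin(a4)    # height
    # The coxal yaw th1 rotates the leg plane about the vertical axis.
    return np.array([rho * np.cos(th1), rho * np.sin(th1), z])

# Example: a fully stretched leg (th2 = th3 = th4 = 0) in this convention.
print(foot_position(0.0, 0.0, 0.0, 0.0))  # [400., 0., 0.]
```

In practice the joint limits quoted above restrict the reachable workspace; the numbers here only illustrate the structure of the map.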
Inverse Kinematics Analysis
According to the position vector of the robot foot relative to the reference coordinate system of the coxal joint, the four joint angles of the leg can be obtained. However, the four joint angles cannot all be determined directly from the position vector of the foot alone. Therefore, a constraint is added. Firstly, the rotation angle of the coxal joint (θ1) is solved by the algebraic method, as θ1 = arctan(0P_tipY/0P_tipX). The rotation angles of the femur-patella joint (θ2), the tibia-metatarsus joint (θ3) and the metatarsus-tarsus joint (θ4) are solved by the geometric method. The base coordinate system {O} is assumed to be attached to the coordinate system of the coxal joint. The mechanical leg is projected in the X-Z plane coordinate system, and the simplified model of the linkage is shown in Figure 4.
In Figure 4, O represents the femur-patella joint, A the tibia-metatarsus joint and B the metatarsus-tarsus joint; the coxal joint and the femur-patella joint are aligned at O for convenience in calculating the angles. The angle between the connecting rod BD and the X axis (i.e., the ground) is θt, the attitude angle of the foot. The geometric relations among the links of the linkage mechanism can be obtained from θt together with θ2, θ3 and θ4, which satisfy the constraint relation in Equation (3). The plane coordinates of D relative to O, (P_X, P_Z), are related to 0P_tipX and 0P_tipZ by the projection principle. In triangle OBE, the cosine theorem gives the length L5 between O and B; in triangle OAB, the cosine theorem then yields θ3; and the remaining auxiliary angles follow from triangles OBE, OBF and ABF. From the constraint relations above, the solutions for θ3, and hence the full inverse-kinematic solution for θ2, θ3 and θ4, are obtained.

In this paper, we propose a study based on bionic kinematics with redundant freedom. The attitude angle of the foot θt is required to solve the leg joint angles. Thus, the attitude angle of the foot can be obtained by using the relationships between the changes of joint angles and the foot posture during jumping. In the jumping process of the spider, the rapid extension of the legs makes the effective leg length increase rapidly to complete the fast jump. Simultaneously, the joint angles of the spider leg change regularly with the effective length, and the attitude angle of the foot varies regularly with the joint angles. Figure 5 provides a sketch of the leg flexion and extension of the spider.
Hence, the inverse kinematics is In this paper, we proposed a study based on bionic kinematics with redundant freedom.Figure 5 provides a sketch of the leg flexion and the extension of the spider.The attitude angle of the foot θt is required to solve the leg joint angles.Thus, the attitude angle of the foot can be obtained by using the relationships between the changes of joint angles and the foot posture during jumping.In the jumping process of the spider, the rapid extension of the leg makes the effective leg length increase rapidly to complete the fast jumping.Simultaneously, the joint angle of the spider leg changes regularly with the effective length, and the attitude angle of the foot vary regularly with the joint angle.Figure 5 shows a sketch of the leg flexion and the extension of the spider.Figure 6 shows the angle curve for the duration of the acceleration phase.During the jump process, the spider completes the jump in a fixed pattern.The effective leg length, the body attitude angle, the joint angle and the foot stance angle determine the maximum values of the take-off form, direction, height and distance.The leg extends in the jumping direction during its contact phase, which allows the spider to quickly accumulate speed, while the posture of the spider's body changes depending on the jumping direction, and the attitude angle varies with different jumping forms.When a spider jumps upward, the effective stretch length of each leg is the same, and the body posture hardly changes.When the spider jumps sideways, the body has a certain initial roll angle.At the same time, the length of the hind leg is rapidly stretched along the back of the jumping direction and the effective length of the leg increases rapidly.Then, the forelegs stretch in the jumping direction to achieve the sideways jump.In the process of the sideways jump, the length of the hind legs is longer than that of the forelegs of the spider in the jumping direction and, as a result, the rolling angle of the spider is larger.When the spider jumps forward, it has a certain initial pitch angle in the jump direction.At the same time, the hind legs extend rapidly at first; the effective length of the legs increases rapidly, and then the forelegs rapidly extend to achieve the forward jump.In the whole process of the forward jump, the pitch angle of the body posture becomes bigger.In this paper, we study the kinematic of the jumping spider [7] by observing and studying the relationships between the changes of joint angles and effective leg length during the jumping process.Using the experimental data and the curve of the jumping process of the spider, some groups of jumping motion curves are analyzed, and then several sets of the joint angle curve are fitted out in the interval angle of each joint.According to the D-H kinematic model in the last section and the definition of joint angles, the mathematical relationship between the joint angles of the jumping spider and the joint angles of the robot can be obtained.A suitable angle curve and the attitude angle of the foot are used for the spider-inspired robot.Figure 6 shows the angle curve of the femur-patella joint, femur-patella joint, the metatarsus-tarsus joint and the attitude angle of the foot for the robot during the acceleration phase.Similarly, in the robot's acceleration phase during take-off, the angles of each joint change with the extension of the leg, including the angle of the femoral-patella joint which increases gradually by approximately 90 • .The 
tibia-metatarsal angle decreases gradually by approximately 55 • ; the metatarsus-tarsus angle decreases gradually and the angle varies by 35 • ; and the attitude angle of the feet is kept within 70-80 • .Through observing the black solid line, the constraint is obtained Hence, we propose the method based on bionic kinematics with redundant freedom as it offers an excellent approach to solve the inverse kinematics for a bionic structure with redundant freedom.
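To make the geometric solution above concrete, the following is a minimal Python sketch of the 4-DoF leg inverse kinematics with the foot attitude constraint. The link-length names (L1-L4), the sign conventions for θt and the elbow-up branch choice are assumptions made for illustration, since the paper's exact equations are not reproduced here; the structure (algebraic θ1, two-link cosine-theorem solution for θ2 and θ3, and θ4 from the attitude constraint) follows the derivation described above.

```python
import math

def leg_ik(px, py, pz, theta_t, L1, L2, L3, L4):
    """Hypothetical inverse-kinematics sketch for a 4-DoF leg.

    (px, py, pz): foot-tip position in the coxal-joint frame {O}.
    theta_t:      attitude angle of the foot (angle of link BD w.r.t. ground).
    L1..L4:       assumed link lengths: coxa, femur (O-A), tibia (A-B),
                  metatarsus (B-D).
    Returns (theta1, theta2, theta3, theta4) in radians.
    """
    # Coxal joint: algebraic solution in the horizontal plane.
    theta1 = math.atan2(py, px)

    # Project the leg into the X-Z plane of Figure 4; r is the radial reach.
    r = math.hypot(px, py) - L1           # assumed coxa offset along the radial axis

    # The foot attitude constraint fixes point B (metatarsus-tarsus joint):
    # D = B + L4*(cos(theta_t), -sin(theta_t)) in the X-Z plane (assumed sign).
    bx = r - L4 * math.cos(theta_t)
    bz = pz + L4 * math.sin(theta_t)

    # Two-link (femur L2, tibia L3) planar IK from O to B via the cosine theorem.
    L5 = math.hypot(bx, bz)               # length O-B, as in triangle OBE
    c3 = (L2**2 + L3**2 - L5**2) / (2 * L2 * L3)
    theta3 = math.pi - math.acos(max(-1.0, min(1.0, c3)))   # angle at joint A

    alpha = math.atan2(bz, bx)                              # direction of OB
    beta = math.acos(max(-1.0, min(1.0,
                 (L2**2 + L5**2 - L3**2) / (2 * L2 * L5)))) # angle at O in OAB
    theta2 = alpha + beta                                   # elbow-up branch

    # The remaining joint follows from the attitude constraint on link BD.
    theta4 = theta_t - (theta2 + theta3)                    # assumed angle summation
    return theta1, theta2, theta3, theta4
```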
Path Planning
There are three phases in the whole jumping process: take-off, flying and touchdown. In this paper, it is mainly the take-off phase of the robot that is studied. The take-off phase refers to the whole movement process of the robot from a stationary state to jumping away from the ground; in other words, from the moment the robot foot is subjected to the ground force Fg until the moment the robot's feet leave the ground. The take-off process of jumping consists of flexing the legs to store energy and extending them to release energy. In the first phase, the legs flex and the CoG drops. Then, the initial attitude angle is adjusted with stored energy to take off. In the phase of releasing energy, the robot legs extend, the body mass center rises, and the robot adjusts to reach a proper take-off stance. Then, the legs continue to extend, hit the ground violently and cause an impact force, which makes the robot accelerate and realize the jumping movement. Therefore, the trajectory of the mass center first decreases slowly and then rises rapidly. When the robot jumps, the speed of the CoG is an important index of the take-off performance of the robot, as it determines the jumping height and the jumping distance when the robot leaves the ground. Suppose that the moment the robot takes off from the ground is tf; the velocity of the CoG is [Ẋ(t) Ẏ(t) Ż(t)]T and the acceleration is [Ẍ(t) Ÿ(t) Z̈(t)]T. The foot of the robot generates the impact force via continuous contact with the ground so that the robot accumulates acceleration and speed and finally realizes the jumping task.
In this paper, three different types of jumping are discussed: vertical jumping, sideways jumping and forward jumping. The position vector of the robot foot relative to the base coordinate is [X Y Z]T. When the robot jumps vertically, the whole body has an upward acceleration and velocity relative to the ground, i.e., acceleration and velocity in the Z direction only. When the robot jumps sideways, the robot has acceleration and velocity in the X and Z directions. When the robot jumps forward, the robot has acceleration and velocity only in the Y and Z directions. Thus, the constraint conditions on the motion of the CoG in these jumping forms must be discussed. At first, the reacting force of the ground on the foot gradually decreases to zero as the foot leaves the ground; at this instant, the contact force of the foot fulfills the condition Fg = 0, and the acceleration of the CoG fulfills the corresponding constraints.
At the take-off moment tf, the robot has accumulated sufficient velocity to achieve the three types of jumps and leaves the ground. The velocity constraints follow directly from the directional conditions above and are summarized in Table 2, where Vxuf, Vyuf and Vzuf denote the velocity components of the vertical jump, Vxsf, Vysf and Vzsf denote those of the sideways jump, and Vxff, Vyff and Vzff denote those of the forward jump.

Table 2. Velocity constraints.

Upward Jumping:  Vxuf = 0, Vyuf = 0, Vzuf > 0
Sideway Jumping: Vxsf ≠ 0, Vysf = 0, Vzsf > 0
Forward Jumping: Vxff = 0, Vyff ≠ 0, Vzff > 0

During the whole process of the take-off (0 ≤ t ≤ tf), it is necessary to ensure that the robot does not leave the ground in advance and that it gains sufficient take-off speed. axuf, ayuf and azuf indicate the acceleration components when the robot jumps vertically; axsf, aysf and azsf when it jumps sideways; and axff, ayff and azff when it jumps forward. The acceleration constraints follow the same directional conditions and are summarized in Table 3.

Table 3. Acceleration constraints.

Upward Jumping:  axuf = 0, ayuf = 0, azuf ≥ 0
Sideway Jumping: axsf ≠ 0, aysf = 0, azsf ≥ 0
Forward Jumping: axff = 0, ayff ≠ 0, azff ≥ 0
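As a small illustration of how the directional constraints of Tables 2 and 3 can be checked in software, the following hypothetical Python helper validates a planned take-off velocity and acceleration against the jump type; the tolerance handling and sign conventions are assumptions.

```python
import numpy as np

# Hypothetical helper: which lateral axes may carry motion for each jump type.
JUMP_AXES = {"upward": (False, False),   # (X allowed, Y allowed)
             "sideways": (True, False),
             "forward": (False, True)}

def takeoff_state_ok(jump_type, v, a, tol=1e-6):
    """v, a: 3-vectors [X, Y, Z] of CoG velocity/acceleration at t_f."""
    x_ok, y_ok = JUMP_AXES[jump_type]
    if not x_ok and abs(v[0]) > tol:
        return False                      # no lateral component allowed
    if not y_ok and abs(v[1]) > tol:
        return False                      # no longitudinal component allowed
    return v[2] > 0 and a[2] >= 0         # must leave the ground upward

print(takeoff_state_ok("sideways", np.array([0.5, 0.0, 1.2]),
                                   np.array([0.0, 0.0, 0.0])))  # True
```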
When the robot is ready to take off, the motion of the foot relative to the base coordinate {O} should fulfill the corresponding boundary conditions, where [X0 Y0 Z0]T is the position vector of the foot relative to the base coordinate {O} at the initial stage of jumping, and [Xf Yf Zf]T is the position vector of the foot relative to the base coordinate {O} when the robot leaves the ground. The robot then obeys the flying-phase relations, where Hv is the maximum jumping height of the robot, Ly is the maximum distance along the longitudinal body axis, and Lx is the maximum distance along the lateral body axis. During the take-off process, rational trajectory planning of the foot enables the robot to complete the jump task successfully. The acceleration of the robot while jumping is planned. To make sure that there is no impact between the foot and the ground when the robot makes contact and lifts off, the contact force must be smooth. Here, a quadric curve is used to plan the acceleration of the robot. In the acceleration curve equations, the symbols t1, t2 and t3 respectively indicate the initial time t0, the ground departure time tf and the intermediate time tm of the body; the symbols (ax1, ay1, az1), (ax2, ay2, az2) and (ax3, ay3, az3) respectively indicate the acceleration vectors corresponding to these three moments; [T] is the time matrix and [A] is the coordinate matrix, from which the coefficients of each component are determined. According to the planning of the acceleration, the instantaneous velocity with which the robot foot hits the ground and the trajectory of the foot tip can be obtained by integration. According to the position vector of the robot foot relative to the base coordinate [X Y Z]T, we can obtain all joint angles of the robot by inverse kinematics. The acceleration of the robot becomes zero at the exact moment of the foot's take-off, and the velocity of the robot reaches its maximum at the same time. During the flying phase of the robot, the maximum jumping height of the robot is 100 mm, the maximum distance along the longitudinal body axis is 250 mm, and the maximum distance along the lateral body axis is 250 mm.
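A minimal sketch of the quadric acceleration planning described above follows, assuming the boundary conditions a(t0) = a(tf) = 0 (for a smooth contact force) and a peak value at the intermediate time tm; the paper's actual coefficient equations are not reproduced, so the fitted quadratic and the numerical integration are illustrative.

```python
import numpy as np

def plan_quadric_acceleration(t0, tm, tf, a_peak, n=500):
    """Fit a single quadratic a(t) = c2*t^2 + c1*t + c0 through
    (t0, 0), (tm, a_peak), (tf, 0), then integrate for velocity and
    foot trajectory.  The boundary values are assumptions."""
    T = np.array([[t0**2, t0, 1.0],
                  [tm**2, tm, 1.0],
                  [tf**2, tf, 1.0]])          # time matrix [T]
    A = np.array([0.0, a_peak, 0.0])          # acceleration samples at t1, t3, t2
    c2, c1, c0 = np.linalg.solve(T, A)        # coefficients of the quadric curve

    t = np.linspace(t0, tf, n)
    acc = c2 * t**2 + c1 * t + c0
    # Trapezoidal integration: velocity from acceleration, position from velocity.
    vel = np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * np.diff(t))))
    pos = np.concatenate(([0.0], np.cumsum(0.5 * (vel[1:] + vel[:-1]) * np.diff(t))))
    return t, acc, vel, pos

# Example: plan the Z-axis acceleration for a push-off from 0.8 s to 1.0 s.
t, az, vz, z = plan_quadric_acceleration(0.8, 0.9, 1.0, a_peak=30.0)
```

With these boundary conditions, the velocity grows monotonically and reaches its maximum exactly at tf, which is consistent with the statement above that acceleration vanishes at the take-off moment while velocity peaks there.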
Attitude Planning
In the jump process of the robot, a reasonable initial attitude angle can affect the jumping height and distance. The initial jumping posture mainly includes the pitch angle, yaw angle and roll angle of the robot when taking off. By controlling the pitch angle of the robot, the maximum height and maximum distance can be reached when the robot jumps forward. The roll angle affects the sideways jump, while the yaw angle can adjust the jumping direction of the robot, allowing the robot to jump towards the target. During vertical jumps, the yaw angle and pitch angle of the robot are 0°, and the initial joint positions and angles of each leg of the robot are correspondingly the same; the robot jumps vertically by reacting against the ground. Before the robot begins jumping sideways, the body leans to the left due to a slight flexion of the left legs, and the robot has a certain roll angle θr in the jumping direction, while the yaw angle and pitch angle are zero. A simple model of the robot's posture during a sideways jump is shown in Figure 7a. When the robot jumps to the right, the effective length of the three left legs is larger than that of the right legs in the initial state, so as to satisfy a certain proportion, where Zl is the Z coordinate of the three left legs relative to the base coordinate {O}, Zr is that of the right legs, and K1 is a constant determined by the initial attitude of the robot. When the robot jumps forward, the forelegs extend and the hind legs flex during their contact phase, and the middle legs remain in slight extension. Therefore, the robot has a certain pitch angle θp in the jump direction, while the roll angle and yaw angle are 0°. The simple model of the robot posture for the forward jump is shown in Figure 7b. At the beginning of the jump, the effective length of the forelegs is larger than that of the middle legs in the initial state; meanwhile, the effective length of the middle legs is larger than that of the hind legs, and they satisfy two proportions, where Zh is the Z coordinate of the hind legs relative to the base coordinate {O}, Zf is that of the forelegs, Zm is that of the middle legs, and K2 is a constant determined by the initial attitude of the robot.
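For illustration, a hypothetical helper for the initial attitude proportions is sketched below; since the exact proportion equations are elided in the extracted text, the simple forms Zl = K1·Zr (sideways jump) and Zf = K2·Zm, Zm = K2·Zh (forward jump) are assumed.

```python
def initial_leg_heights(z_base, jump_type="vertical", K1=1.2, K2=1.2):
    """Hypothetical helper computing initial leg Z coordinates relative to
    the base frame {O} from the proportion constants K1 and K2.

    The proportional forms used here are assumptions; the paper's actual
    equations may differ in form and sign convention.
    """
    if jump_type == "sideways":           # left legs longer than right legs
        z_r = z_base
        z_l = K1 * z_r
        return {"left": z_l, "right": z_r}
    if jump_type == "forward":            # fore > middle > hind
        z_h = z_base
        z_m = K2 * z_h
        z_f = K2 * z_m
        return {"fore": z_f, "middle": z_m, "hind": z_h}
    return {"all": z_base}                # vertical jump: identical leg postures
```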
Results
To further verify the algorithm of omnidirectional jumping, a series of experiments is conducted using a hexapod robot platform. There are three groups of experiments: the vertical jump, the sideways jump and the forward jump. In these experiments, the joint angles are used as input of the simulation system, and the CoG and the attitude angle are used as output to verify the design requirements and the stability of the robot. To avoid slipping in the jumping process, and to achieve a certain friction between the ground and the robot foot, we use a wooden floor as the ground. We assume that there is no air resistance and no slipping when the robot jumps, since the beginning of the jump is our main concern and the structure is slim with little air resistance. Friction is modeled as a load torque, and the robot links are rigidly connected. In this paper, the simulation results are obtained under Matlab-Adams; the relevant parameters, including the static friction coefficient, dynamic friction coefficient and stiffness, are set in Adams. The experiment parameters are shown in Table 4.
Vertical Jump
In this section, we obtain data on the robot jumping vertically, including the joint angles, attitude angle, joint velocities, foot contact force, take-off velocity, trajectory of the CoG, jumping height and jumping distance. These data are based on the simulation results under Matlab-Adams, and by analyzing them we can prove the reliability and rationality of the vertical jump.
Figure 8a,b shows the simulation platform of the virtual jump prototype. At the beginning of the vertical jump, the body slowly moves downward due to a slight flexion of the robot legs, and the robot begins to store energy. During the acceleration phase of the vertical jump, the effective leg length extends quickly, and the robot starts to take off. Figure 8b shows a sketch of the robot jumping vertically. The initial height of the CoG of the robot is 180 mm and the jumping height is 100 mm. In the vertical jumping experiment, we measure the contact force of the foot and the joint torques; we observe the change of the joint torques to verify whether the motors meet the requirements.
During the vertical jump, each joint of the six legs has the same rotation angle. The set of angle curves of the robot leg is shown in Figure 8c. The coxal joint angles of all legs always remain the same during the vertical jump: they are 0 rad. Before the acceleration phase begins, the femur-patella joint angle, tibia-metatarsus joint angle and metatarsus-tarsus joint angle begin to change slowly, and the robot body slows down and adjusts its attitude. Then, the foot moves down quickly, and the robot starts to accelerate by extending its legs at 0.8 s. At 1.0 s, the height of the CoG is 300 mm, and the robot leaves the ground and takes off. At 1.2 s, the robot rises to its maximum height of approximately 400 mm and completes the task of jumping 100 mm in height. From 0.8 s to 1.2 s, the femur-patella joint angle changes by approximately 1.5 rad, the tibia-metatarsus joint angle changes by approximately 1.2 rad and the metatarsus-tarsus joint angle changes by approximately 2.0 rad.
As mentioned, the effective length of the robot's legs always changes during the vertical jump. To assess the stability of the robot, the change of the attitude angle is analyzed. The initial attitude angle of the robot is not 0 rad, which is due to the initial pose of the robot in Figure 8e. The roll angle holds near 0 rad when the robot jumps. The pitch angle holds at 0.002 rad at approximately 0-1 s, and the roll angle increases gradually to 0.01 rad when the robot lifts off the ground. In the process of jumping, the yaw angle of the robot always holds at approximately −0.025 rad. According to the simulation results, the robot has good stability in the vertical jump process.
Figure 8f shows the contact force of the foot tip for the robot in the vertical jump process. When the robot jumps vertically, it completes the upward jumping task through the contact force in the Z direction; the forces acting on the entire robot in the X and Y directions all cancel each other out. As shown in Figure 8f, the initial contact force is 0 N, which is due to the initial state of the robot. At 0.3-0.8 s, the CoG slowly drops and the contact force remains constant; the contact force in X is approximately 10 N, the contact force in Y is 0 N, and the contact force FZ in Z is approximately 12 N, which is 1/6 of the gravity of the robot. At 0.8-1.0 s, the contact force of the foot tip increases rapidly: the Z-directional force increases to 24 N at 1.0 s and decreases to 12 N after 1.0 s. Overall, the vertical acceleration of the robot does not drop to 0 m/s2 before take-off, so the speed keeps increasing and reaches its maximum at 1 s. After the robot jumps off the ground, the contact force is 0 N until the robot lands.
Figure 8g shows the joint torque of the robot in the vertical jump process. At the initial stage of the simulation, the robot foot does not touch the ground and the joint torque of the robot is 0 Nm. When the robot touches the ground, the joint torques change slowly between 1 Nm and 3 Nm until the robot begins to adjust its posture at 0.5 s. At 0.8-1.2 s, the joint torque of the robot begins to increase. At 1.4 s, the joint torque of the robot fluctuates wildly, and the maximum value is approximately 8 Nm, which is caused by the impact of the robot landing.
According to these curves, we can see that the experimental data are consistent with what we expect from our projections; the robot could complete the task of jumping 100 mm in height while maintaining good stability.
Sideways Jump
The simulation experiment of the robot jumping sideways is carried out to verify whether the joint angles, attitude angle, jumping height and jumping distance are consistent with the motion planning. Unlike in the vertical jump, there is much variability in the joint kinematics and attitude in the sideways jump. At the beginning of the sideways jump, the lateral body axis begins to rotate towards the jumping direction, and the initial attitude is adjusted in preparation to jump. At the same time, the CoG drops and the robot begins to store energy. During the acceleration phase of the sideways jump, the effective leg length of the robot extends quickly and the foot impacts the ground quickly; then the robot starts to take off. Figure 9b shows a sketch of the robot jumping sideways. The initial height of the robot is 180 mm, the jumping height is 100 mm, and the jumping distance is 250 mm.
During the robot's sideways jump, the legs in contact with the ground are first flexed and then extended, and each joint of the six legs has a different rotation angle. Before the acceleration phase begins, the CoG begins to drop and store energy at 0.3 s (Figure 9d); the legs on one side of the robot's body are obviously flexed, while the flexion of those on the other side, in the jumping direction, remains relatively small. From 0.3 s to 0.8 s, the robot body slows down and adjusts its attitude. At this stage, the femur-patella joint angle, tibia-metatarsus joint angle and metatarsus-tarsus joint angle begin to change slowly, and the coxal joint angle remains constant until the jump ends (Figure 9c). Then, the foot impacts the ground quickly, the robot's legs start to extend, and the robot begins to accelerate at approximately 0.8 s. The joint angles begin to change quickly (Figure 9c), while the coxal joint angle of the middle leg remains unchanged; this is due to the robot's structure and the path planning of the sideways jump. The displacement of the robot in the jumping direction begins to change, depending on the time the robot spends in the air. Then, the robot leaves the ground and takes off, with the height of the CoG at 320 mm at this moment. The robot rises to its maximum height of approximately 420 mm and completes the task of jumping 100 mm (Figure 9d). At 1.4 s, the robot touches down; the robot is in the air for approximately 0.4 s, and the jumping distance is 250 mm (Figure 9d). The robot jumps to the right. From 0.8 s to 1.2 s, in the three left legs, the femur-patella joint angle changes by approximately 2 rad, the tibia-metatarsus joint angle changes by approximately 2.5 rad and the metatarsus-tarsus joint angle changes by approximately 0.5 rad (Figure 9c). Concurrently, in the three right legs, the femur-patella joint angle changes by approximately 1.5 rad, the tibia-metatarsus joint angle changes by approximately 2.5 rad and the metatarsus-tarsus joint angle changes by approximately 1.0 rad (Figure 9c). The attitude angle directly influences the stability of the robot when it jumps sideways (Figure 9e). The initial attitude angle of the robot depends on the attitude planning. The roll angle always holds near 0.02 rad when the robot jumps, and it changes by 0.1 rad when the robot lands. The pitch angle holds near 0.2 rad. In the process of jumping, the yaw angle of the robot always holds at 0.02 rad; it increases gradually to 0.03 rad when the robot has landed. According to the simulation results, the attitude angle of the robot is nearly invariable; hence, the robot has good stability in the sideways jump process, which provides a good theoretical basis for the forward jump of the robot. The experimental data are consistent with what we expect from our projections for the sideways jump. Therefore, the robot completes the task of jumping 100 mm in height and 250 mm in distance while maintaining good stability.
Forward Jump
The simulation of the robot jumping forward is carried out to verify whether the joint angles, attitude angle, jumping height and jumping distance are consistent with the motion planning; the simulation results are shown in Figure 10. In the forward jump, the robot's body slowly moves backwards and downwards due to a slight flexion of the hind legs and the middle legs before the acceleration phase begins, and the robot begins to store energy as the CoG drops. At the same time, the initial attitude is adjusted in preparation to jump. Then, the robot starts to accelerate by extending its legs, the effective leg length increases quickly, and eventually the robot takes off. Figure 10b shows a sketch of the robot jumping forward. The initial height of the CoG of the robot is 180 mm, the jumping height is 100 mm, and the jumping distance is 250 mm.
While the robot jumps forward, the legs in contact with the ground are first flexed and then extended, and each joint of the six legs has a different rotation angle (Figure 10c). Before the acceleration phase of the forward jump, the body rotates backwards as the hind legs and the middle legs of the robot slightly flex relative to the jumping direction. Meanwhile, the robot adjusts its attitude by moving backwards and downwards. Then, the foot impacts the ground quickly, the robot's legs start to extend and, at 0.8 s, the robot begins to accelerate. At the same instant, the joint angles begin to change quickly (Figure 10c). At 1.0 s, the robot loses contact with the ground and takes off; due to the accumulation of velocity, the rotation of the body is reversed. At this instant, the height of the CoG is 320 mm. The robot rises to its maximum height of approximately 420 mm at 1.2 s, completing the task of jumping 100 mm in height (Figure 10d). After this instant, none of the joint angles change (Figure 10c).
During the robot's forward jump, the coxal joint angles change by approximately 0.8 rad, the femur-patella joint angle changes by approximately 2 rad, the tibia-metatarsus joint angle changes by approximately 1.5 rad and the metatarsus-tarsus joint angle changes by approximately 1 rad. At 1.4 s, the robot lands on the ground; the robot is in the air for approximately 0.4 s, and the jumping distance is 250 mm (Figure 10d). In Figure 10e, the initial attitude angle is not 0 due to the initial pose of the robot: the initial roll angle is 0.01 rad, the initial yaw angle is 0.025 rad, and the initial pitch angle is 0.02 rad. The roll angle changes by 0.02 rad in the jumping process and decreases by 0.1 rad when the robot lands. The pitch angle holds at 0.01 rad in the jumping process and fluctuates by 0.1 rad when the robot lands. The yaw angle is almost constant until the robot lands, maintaining 0.015 rad. According to the simulation results, the robot has good stability in the forward jump process.
Observing the simulation result, we find that the experimental data are consistent with our projections of the forward jump.The robot could complete the task of jumping 100 mm in height and 250 mm in distance; it maintains good stability in the flying phase.
Discussion
In this paper, research on the omnidirectional jump control of a hexapod robot based on the behavior mechanisms of jumping spiders was undertaken. In the first section, the jumping forms of several typical legged robots [16,25,28,34] were described in detail. Normally, the more jumping force a robot has, the more difficult it is to control its jumping movement; however, a robot with more jumping force has a better capacity for avoiding obstacles. In the past decade, research on jumping robots has mainly focused on single-direction jumps, such as the vertical jump and the forward jump. However, this type of jumping robot must first adjust its posture and jumping direction when trying to avoid an obstacle, so the efficiency of the robot is greatly reduced. We study the change of the effective leg length and joint angles of the jumping spider before proposing an omnidirectional jump control of the hexapod robot based on the behavior mechanisms of jumping spiders. Through a series of simulation experiments, we verify the possibility of an omnidirectional jump by comparing the simulation and experimental data with the data expected from our projections. Finally, the robot realizes rapid jumping in all directions under any circumstances.
There are several forms of DoF for robot legs, including 1 DoF, 3 DoFs and 4 DoFs. A large number of DoFs greatly increases the complexity of motion control, but a smaller number cannot provide the required range of motion. The 1-DoF jumping robot leg is usually driven by a hydraulic component [37] or an elastic component [38]; however, it is difficult to guarantee the stability of motion with such robots. Currently, 3 DoFs is the main form for jumping robot legs, but it does not offer enough flexibility when the robot jumps omnidirectionally. In this paper, the jumping robot adopts 4-DoF mechanical legs, which increases the flexibility of the robot when it jumps omnidirectionally. The attitude angle of the foot tip is obtained based on bionic kinematics, and the constraint on the foot tip's attitude angle is added to solve the inverse kinematics. Since most studies of jumping robots remain at the unidirectional jumping stage, this type of robot cannot realize omnidirectional jumping and, in particular, cannot avoid obstacles quickly. In this paper, omnidirectional jumping has been proposed, as it allows the robot to avoid and cross obstacles in all directions. We proposed a study based on bionic kinematics with redundant freedom, which can solve the inverse kinematics for a bionic structure with redundant freedom and enable the robot to achieve many different types of jumps. Furthermore, we proposed locomotion planning, especially attitude planning, by observing the behavior of the jumping spider.
Conclusions
In this paper, the difficulty of omnidirectional jumping for a bio-mimetic jumping spider robot was addressed. The theoretical contributions and novelty of this paper can be summarized as follows: (1) The path of the robot, the initial attitude and the trajectory of the foot tip must be planned to complete the jumping task; in particular, a reasonable initial attitude angle of the robot can affect the jumping height and distance. (2) To satisfy the diversity of motion forms in robot jumping, each leg has 4 DoFs. However, the 4-DoF mechanical leg is a redundant structure, and a constraint condition must be found; according to the change curve of each joint angle in the process of spider jumping, the attitude angle curve is obtained as the added constraint condition. (3) Three kinds of jumps are verified on the jumping robot prototype: vertical jumps, sideways jumps and forward jumps. The proposed method is verified by a series of simulation experiments.
The results indicate that the jumping robot could maintain stability and complete the planned jumping tasks, and that the proposed spider-inspired jumping strategy can easily achieve an omnidirectional jump, enabling the robot to avoid obstacles quickly.
The results indicate that the robot can perform the omnidirectional jump according to the path planning of the CoG and the initial jumping attitude.The robot also has better stability, as observed by the attitude angle of the jumping robot during its jumps.Therefore, the robot can jump in any direction by providing the trajectory of the CoG and the initial attitude.
Figure 1. (a) The schematic diagram of the whole body; and (b) sketch of the spider's leg joint and leg structure.
Figure 2. (a) The schematic diagram of the robot; and (b) sketch of the joint and structure.
Figure 4. The simplified model of the linkage. O represents the femoral-patella joint, A represents the tibia-metatarsus joint, B represents the metatarsus-tarsus joint, and D represents the foot tip of the robot.
Figure 5. Schematic of the leg at defined instances. Black circles indicate the joints and the red circle indicates the foot. The arrows indicate the movements of the joints. The red line represents the axis of the spider's body, the blue line represents the leg of the spider, and the black line represents the ground.
Figure 6. Angle curves for the duration of the acceleration phase: (a) femur-patella angle; (b) tibia-metatarsus angle; (c) metatarsus-tarsus angle; and (d) attitude angle of the foot. The red dashed line indicates the optimal curve, and the shaded area indicates the range of the joint angle given by the Interquartile Range (IQR) method. The black solid line indicates the optimal curve of the attitude angle of the foot. The other solid lines indicate the angle curves for the duration of the acceleration phase.
Figure 7. Simplified model of the robot posture. (a) Sideways jump: the red solid line indicates the transverse axis of the robot body, the green solid line indicates the left leg of the robot body, the blue solid line indicates the right leg, the purple dashed line indicates the effective length, and the black circles indicate the joints. (b) Forward jump: green solid lines indicate the hind legs of the robot body, the blue solid line indicates the foreleg, and the yellow solid line indicates the middle leg.
Figure 8. (a) Simulation platform of the virtual jump; (b) sketch of the robot vertically jumping; (c) joint angle curves of the robot leg; (d) trajectory of the center of gravity (CoG); (e) attitude angle of the robot; (f) contact force; and (g) joint torque.
Figure 9. (a) Simulation platform of the sideways jump; (b) sketch of the robot jumping sideways; (c) joint angle curves of the robot leg; (d) trajectory of the CoG; and (e) attitude angle of the robot.
Figure 10. (a) Simulation platform of the forward jump; (b) sketch of the robot jumping forward; (c) joint angle curves of the robot leg; (d) trajectory of the CoG; and (e) attitude angle of the robot.
Automatic Shadow Detection for Multispectral Satellite Remote Sensing Images in Invariant Color Spaces
Shadow often results in difficulties for subsequent image applications of multispectral satellite remote sensing images, like object recognition and change detection. With continuous improvement in both spatial and spectral resolutions of satellite remote sensing images, a more serious impact occurs on satellite remote sensing image interpretation due to the existence of shadow. Though various shadow detection methods have been developed, problems of both shadow omission and nonshadow misclassification still exist for detecting shadow well in high-resolution multispectral satellite remote sensing images. These shadow detection problems mainly include high small shadow omission and typical nonshadow misclassification (like bluish and greenish nonshadow misclassification, and large dark nonshadow misclassification). For further resolving these problems, a new shadow index is developed based on the analysis of the property difference between shadow and the corresponding nonshadow with several multispectral band components (i.e., near-infrared, red, green and blue components) and hue and intensity components in various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ), respectively. The shadow mask is further acquired by applying an optimal threshold determined automatically on the shadow index image. The final shadow image is further optimized with a definite morphological operation of opening and closing. The proposed algorithm is verified with many images from WorldView-3 and WorldView-2 acquired at different times and sites. The proposed algorithm performance is particularly evaluated by qualitative visual sense comparison and quantitative assessment of shadow detection results in comparative experiments with two WorldView-3 test images of Tripoli, Libya. Both the better visual sense and the higher overall accuracy (over 92% for the test image Tripoli-1 and approximately 91% for the test image Tripoli-2) of the experimental results together deliver the excellent performance and robustness of the proposed shadow detection approach for shadow detection of high-resolution multispectral satellite remote sensing images. The proposed shadow detection approach is promised to further alleviate typical shadow detection problems of high small shadow omission and typical nonshadow misclassification for high-resolution multispectral images.
Introduction
More complex details of land covers (e.g., buildings, towers, vegetation, farms and roads) are obtained easily from high spatial resolution (HSR) multispectral satellite remote sensing images. Experiments showed that the enhanced shadow detection method improved the shadow omission problem of the original SRI of Tsai in the visual aspect. On the foundation of Tsai's efficient shadow detection algorithm, Chung et al. [23] proposed a modified ratio map by applying an exponential function to the SRI by Tsai, and presented a successive thresholding scheme (STS) rather than only using a global threshold [20]. Experiments in color aerial images revealed that the proposed algorithm by Chung et al. [23] showed an improved performance in detecting shadow in images containing low brightness objects. Inspired by the STS procedure by Chung et al. [23], Silva et al. [24] extended the SRI method by Tsai [20] specifically in the CIELCh color space by applying a natural logarithm function to the original ratio map to compress the original values, resulting in the logarithmic spectral ratio index (LSRI) algorithm. Then, the ratio map was segmented by applying multilevel thresholding. This modified ratio method performed better in color aerial images by accurately detecting shadow and avoiding misclassifying dark areas compared with the original ratio method by Tsai [20] and the STS method by Chung et al. [23]. In addition, Ma et al. [25] presented a similar shadow detection method based on the normalized saturation-value index (NSVDI) in the HSV color space. A rough shadow index image was formed at first with the NSVDI method. Then the rough shadow index image was segmented to obtain the final shadow image with a certain threshold. This NSVDI method performed well in detecting large shadow in IKONOS multispectral images despite omitting some small shadow. Mostafa et al. [26] also presented a shadow detector index (SDI) for shadow detection in HSR multispectral satellite remote sensing images. The SDI algorithm was developed by first analyzing the difference between shadow and typical nonshadow, particularly vegetation, in terms of green and blue components, and subsequently applying the neighborhood valley-emphasis method (NVEM) to binarize the SDI index image for obtaining the shadow image [27]. The SDI approach performed well in classifying shadow from vegetation, and acquired high shadow detection accuracies, except for the shortcomings of some small shadow omission and misclassification of some dull red roofs.
Though an increasing number of shadow detection algorithms have been put forward for detecting shadow in HSR multispectral satellite remote sensing images and color aerial images in recent years, shadow detection problems still require further settlement, mainly including high small shadow omission and typical nonshadow misclassification (like bluish and greenish dark nonshadow misclassification, as well as large dark nonshadow misclassification). Therefore, shadow detection is still challenging for HSR multispectral satellite remote sensing images.
In this paper, we first construct a logarithmic shadow index (LSI) and subsequently develop an LSI shadow detection approach for shadow detection of HSR multispectral satellite remote sensing images, particularly for further settling problems of high small shadow omission and typical nonshadow misclassification (like bluish and greenish dark nonshadow misclassification, as well as large dark nonshadow misclassification). Our presented LSI shadow detection algorithm employs special properties of shadow, namely, the dramatic decrease of the NIR component, the higher hue value and the lower intensity value, by further studying properties of shadow in terms of both multispectral band components (mainly including the visible bands and the NIR band) and invariant color components in various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr, YIQ) compared with the corresponding nonshadow. Based on the proposed LSI, we acquire the shadow image by firstly segmenting the shadow index image automatically with an optimal threshold determined with the NVEM thresholding method [27] and subsequently optimizing the initial shadow image with a certain morphological operation. For verifying the shadow detection performance of our proposed LSI algorithm, comparative experiments are carried out with many images from WorldView-3 and WorldView-2 acquired at different times and sites, and the shadow detection performance is particularly assessed both qualitatively and quantitatively against several standard shadow detection algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) with two WorldView-3 test images of Tripoli, Libya.
The rest of this paper proceeds as follows. The LSI shadow detection approach is developed in detail, step by step, in Section 2. Comparative experiments and performance assessments are conducted both qualitatively and quantitatively in Section 3. The influential elements and sensitivity factors are separately discussed in Section 4. Finally, conclusions are drawn in Section 5.
Method
In accordance with the Phong illumination model [19] and contributions in other studies [14,15,20,28], compared with nonshadow regions, similar ground objects in shadow regions often possess the following properties:
1. Dramatic decrease in the NIR component compared with the R, G and B components.
2. Higher hue value.
3. Lower intensity value.
These shadow properties are easily observed in multispectral images and in several invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ). Taking these properties into consideration, the NIR, H and I components are particularly employed in our proposed shadow detection approach, which is accomplished step by step from Step 1 to Step 4, as depicted in Figure 1 and stated in detail as follows.
Step 1: Color Space Conversion
Chromaticity and luminance are powerful descriptors for color images [28]. The appropriate description of both chromaticity and luminance simplifies image characteristic extraction and image interpretation [29]. Colors for image expression are often regarded as a certain combination of R, G and B stimuli in the RGB color space, in accordance with the provision of the Commission Internationale de l'Eclairage (CIE) [20,29]. Several color spaces, in which chromaticity and luminance components are usually well decoupled, are briefly introduced in terms of the RGB color space as follows.
In particular, the HSV color space consists of value (V), saturation (S) and hue (H) components. Smith described the arithmetic relation between the components of the HSV color space and those of the RGB color space as Equations (1)-(3) [20,29], where θ is an auxiliary angle obtained with Equation (4).
Similarly, the HIS color space describes the color image in terms of intensity (I), saturation (S) and hue (H) components, in which the saturation and hue components together constitute the chromaticity term and the intensity is also known as luminance [29]. The HIS color space is usually computed from the RGB color space with Equations (5)-(7) [20], where H is undefined under the condition V1 = 0. In addition, the YCbCr color space is often employed in JPEG, MPEG and H.263 [20,30]. Equation (8) describes the linear relations between components in the YCbCr color space and those in the RGB color space.
Besides, the YIQ color space is a standard widely utilized by the National Television System Committee (NTSC) [31]. In this color image description, the Y component is proportional to the luminance used in gamma correction, and the I and Q components together represent the chromaticity, namely, the saturation and hue components [20,29]. The YIQ color space is obtained with Equation (9) in terms of the RGB color space.
Additionally, the CIELCh color space is a polar representation of the CIELAB color space defined by the CIE to imitate how human eyes perceive color information. The L and h components are often taken as the luminance and hue components, respectively. For more details about the CIELCh color space, please refer to the work by Gonzalez [29] and Silva [24]. The arithmetic relation between the CIELCh color space and the RGB color space is described with Equations (10)-(16), where Xn = 95.047, Yn = 100.00 and Zn = 108.883 respectively refer to the reference values of XYZ, and atan2 is used as in many standard libraries, coping well with the condition a = 0 [32].
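As a practical illustration of Step 1, the following Python sketch converts an RGB image array to the YIQ and HSV spaces with NumPy; the YIQ matrix is the standard NTSC one, and the HSV conversion follows the common hexcone formulation. The remaining spaces (HIS, YCbCr, CIELCh) can be obtained analogously, e.g., with standard library routines; this sketch is illustrative rather than a reproduction of Equations (1)-(16).

```python
import numpy as np

def rgb_to_yiq(rgb):
    """RGB -> YIQ with the standard NTSC matrix (Y plays the role of the
    intensity-equivalent component, Q of the hue-equivalent component)."""
    m = np.array([[0.299,  0.587,  0.114],
                  [0.596, -0.274, -0.322],
                  [0.211, -0.523,  0.312]])
    return rgb @ m.T

def rgb_to_hsv(rgb):
    """Vectorized RGB -> HSV (hexcone model); rgb in [0, 1], H in [0, 1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)
    c = v - rgb.min(axis=-1)
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)
    # Piecewise hue, guarded against the achromatic case c == 0.
    cc = np.maximum(c, 1e-12)
    h = np.select([v == r, v == g, v == b],
                  [(g - b) / cc, 2.0 + (b - r) / cc, 4.0 + (r - g) / cc])
    h = (h / 6.0) % 1.0
    return np.stack([h, s, v], axis=-1)

# Usage sketch: img is a float array of shape (rows, cols, 3) scaled to [0, 1].
# hsv = rgb_to_hsv(img); H = hsv[..., 0]; I = rgb_to_yiq(img)[..., 0]
```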
Step 2: NIR, H and I Extraction
In addition to the often utilized R, G and B components of the target image, NIR information attracts more attention than ever before along with the spectral resolution improvement of HSR remote sensing images from recently launched optical satellites [6,28,33]. Theoretically, in accordance with Phong's illumination model [19] and Huang's imaging model [15], the diffusion part of the incident light maintains the difference between shadow and nonshadow. Based on the diffusion part expression shown in Equation (17) and the electromagnetic wave theory, in which the surface albedo is positively proportional to the wavelength, the NIR component obtains a bigger surface albedo value than those of the R, G and B components. The decrease values between shadow and nonshadow can hence be described with Inequation (18) in terms of the NIR, R, G and B components [34,35]:

Cd = md ∫ fc(λ) e(λ) cd(λ) dλ (17)

where Cd is the sensor response to the diffusion part of the incident light, md is a parameter only depending on the geometry information, fc(λ) denotes the spectral sensitivity as a function of wavelength λ, e(λ) is the quantity of incident light, and cd(λ) is the surface albedo.
NIRd > Γd, Γd ∈ {Rd, Gd, Bd} (18)

where NIRd is the decrease value between shadow and nonshadow in terms of the NIR component, and Rd, Gd and Bd are the decrease values between shadow and nonshadow in terms of the R, G and B components, respectively. In order to effectively decouple chromaticity and luminance, input images are first converted from the usually utilized R, G and B components of the RGB color space to several typical invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ), in which chromaticity and luminance are usually well decoupled. Note that the Q component in the YIQ color space and the Cr component in the YCbCr color space are often regarded as equivalent to the H component in the HSV, HIS and CIELCh color spaces, and are together denoted as the hue-equivalent (H) component. Similarly, the V component in the HSV color space, the Y components in both the YCbCr and YIQ color spaces, and the L component in the CIELCh color space are usually regarded as equivalent representations of the I component in the HIS color space, and are expressed as intensity-equivalent (I) components [14,20]. H and I components are respectively extracted from these invariant color spaces. Additionally, Huang et al. [15] provide derivations of the hue and intensity components between shadow and nonshadow, as presented in Equations (19) and (20), from which the conclusions are drawn that bigger hue values and lower intensity values are usually obtained for shadow compared with the nearby nonshadow, as shown in Inequations (21) and (22):

Hshw > Hnshw (21)

Ishw < Inshw (22)

where Hshw and Ishw are the hue and intensity components of shadow, and Rnshw, Gnshw and Bnshw (appearing in Equations (19) and (20)) are the R, G and B components of the nearby nonshadow. Consequently, a dramatic decrease often appears in the NIR component compared with the R, G and B components for surface features in shadow regions, relative to the same type of surface features in the nearby nonshadow regions, as illustrated in Figure 2a with samples from typical objects in HSR images (taking WorldView-3 as an example). Accordingly, the NIR component of the input images is additionally extracted to further coordinate with the shadow index construction described below. Likewise, the H and I components of shadow possess the properties above, as illustrated in Figure 2b,c with samples from typical objects in HSR images (taking WorldView-3 as an example). Hence, both H and I in these invariant color spaces are employed in the proposed shadow detection approach presented in the following.
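The NIR-decrease property of Inequation (18) can be checked on paired samples with a short NumPy sketch like the following; the array layout (n samples by four bands in NIR, R, G, B order) is an assumption for illustration.

```python
import numpy as np

def band_decreases(nonshadow_px, shadow_px):
    """Check the shadow property of Inequation (18) on paired samples.

    nonshadow_px, shadow_px: arrays of shape (n, 4) holding (NIR, R, G, B)
    values of the same surface type outside and inside shadow.  Returns the
    per-band decreases and whether the NIR decrease dominates the others.
    """
    d = nonshadow_px.mean(axis=0) - shadow_px.mean(axis=0)   # (NIRd, Rd, Gd, Bd)
    nir_d, rgb_d = d[0], d[1:]
    return d, bool(np.all(nir_d > rgb_d))
```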
Step 3: LSI Construction
Coupled with the NIR component and the H and I components obtained with the various invariant color spaces in Step 2, we construct a logarithmic shadow index (LSI) in this step to further enhance the difference between shadow and the corresponding nonshadow, based on the shadow properties mentioned previously.
In particular, an initial shadow index (ISI) is first constructed with the NIR, H and I components, as given in Equation (23), where NIR indicates the near-infrared component, H implies the equivalent hue component and I refers to the equivalent intensity component.
The developed ISI fully employs the shadow properties of higher hue, lower intensity and a dramatic decrease in the NIR component when compared with the corresponding nearby nonshadow containing the same type of features.
Additionally, an obvious distinction appears between the linear function f(x) = x and the natural logarithm function f(x) = ln(x + 1) in compressing the data scale, as shown in Figure 3; this difference is therefore exploited in the LSI construction. Subsequently, in order to further improve the distinction between shadow and the corresponding nonshadow, a natural logarithmic operation is applied over ISI at the pixel level, compressing ISI to a narrower scale [24]:

LSI = ln(ISI + 1)

where "+1" avoids the calculation of ln(0). Additionally, real-time and near-real-time image processing (shadow detection being one example) is of significant importance for HSR satellites [14,28], so great attention is attached to timesaving shadow detection algorithms for onboard shadow processing. Accordingly, the proposed LSI algorithm promises to be a timesaving one, because its shadow index is simply constructed from the equivalent hue and intensity components together with the NIR component.
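The paper's exact ISI formula is not reproduced above, so the sketch below assumes a plausible ratio form, ISI = (H + 1)/(I + NIR + 1), which realizes the stated shadow properties (the index grows with hue and shrinks with intensity and NIR); only the logarithmic compression LSI = ln(ISI + 1) is taken directly from the text.

import numpy as np

def build_lsi(h, i, nir, eps=1.0):
    # The ISI form is an assumption: any index that grows with H and shrinks
    # with I and NIR matches the shadow properties stated in the text.
    isi = (h + eps) / (i + nir + eps)
    # Logarithmic compression from the text; "+1" avoids ln(0).
    lsi = np.log(isi + 1.0)
    return lsi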
Step 4: Binarization
A shadow mask is often obtained by binarizing the previously acquired shadow index image with a threshold, selected either manually or automatically with a thresholding algorithm [20,23,24]. Several thresholding methods are widely used in the image binarization stage, such as the Otsu method [21], the valley-emphasis method (VEM) [36], and the neighborhood valley-emphasis method (NVEM) [27]. The Otsu method is a typical automatic one, widely used for images whose histogram has a bimodal distribution [21]. However, difficulties occur when the image histogram has a unimodal or approximately unimodal distribution. In order to determine optimal threshold values for both unimodal and bimodal distributions, Ng [36] revised the Otsu method by applying a valley-emphasis weight, resulting in the VEM thresholding method. Building on the studies by Otsu and Ng [21,36], Fan et al. [27] proposed the NVEM thresholding method, in which the between-class variance is further modified with the sum of the neighborhood gray probabilities in an interval of 2m + 1. Following the description in the work by Fan et al. [27], the NVEM thresholding method is briefly introduced as follows.
The gray probability of a certain gray value g is calculated with Equation (25), and the sum of the neighborhood gray probabilities in an interval of 2m + 1 is calculated with Equation (26):

p(g) = f(g)/n (25)

h(g) = Σ p(i), i = g − m, ..., g + m (26)

where f(g) is the number of pixels with gray value g, L is the number of image gray levels, and n is the total number of pixels. The image is first divided into two classes (background and object, or object and background) with a certain threshold t. The probabilities of the two classes are calculated with Equation (27):

ω1(t) = Σ p(g), g = 0, ..., t; ω2(t) = Σ p(g), g = t + 1, ..., L − 1 (27)
Then, the mathematical expectations of the two classes are computed with Equation (28):

μ1(t) = (1/ω1(t)) Σ g·p(g), g = 0, ..., t; μ2(t) = (1/ω2(t)) Σ g·p(g), g = t + 1, ..., L − 1 (28)
With consideration of the sum of the neighborhood gray probabilities in an interval of 2m + 1, the between-class variance is modified by Fan et al. as shown in Equation (29):

σ²(t) = (1 − h(t)) · (ω1(t)·μ1(t)² + ω2(t)·μ2(t)²) (29)
Finally, the optimal threshold T is determined by maximizing the modified between-class variance over t in the range 0 to L − 1, as shown in Equation (30):

T = arg max σ²(t), 0 ≤ t ≤ L − 1 (30)
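A compact implementation of the NVEM procedure of Equations (25)-(30) might look as follows; it is a sketch reconstructed from the description above and Fan et al. [27], assuming an 8-bit gray image, rather than the authors' verbatim code.

import numpy as np

def nvem_threshold(gray, m=2, levels=256):
    """Neighborhood valley-emphasis threshold following Equations (25)-(30)."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / gray.size                         # Eq. (25): gray probability
    # Eq. (26): sum of neighborhood probabilities in an interval of 2m + 1
    h = np.array([p[max(0, g - m):min(levels, g + m + 1)].sum()
                  for g in range(levels)])
    g = np.arange(levels)
    best_t, best_var = 0, -np.inf
    for t in range(levels - 1):                  # Eq. (30): search over t (empty classes skipped)
        w1, w2 = p[:t + 1].sum(), p[t + 1:].sum()    # Eq. (27): class probabilities
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (g[:t + 1] * p[:t + 1]).sum() / w1     # Eq. (28): class expectations
        mu2 = (g[t + 1:] * p[t + 1:]).sum() / w2
        var = (1.0 - h[t]) * (w1 * mu1**2 + w2 * mu2**2)  # Eq. (29)
        if var > best_var:
            best_t, best_var = t, var
    return best_t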
As described above, we employ the NVEM thresholding method for its efficiency and automation. Consequently, a shadow candidate is generated by binarizing the LSI image with the NVEM thresholding method; that is, the LSI index image is segmented with Equation (31):

A(x, y) = 1 if LSI(x, y) > T, and A(x, y) = 0 otherwise (31)

where T is the optimal threshold determined with the NVEM thresholding method for the binarization of the LSI index image, and A is the binarized result obtained with the acquired optimal threshold T. Additionally, we optimize the shadow candidate by applying a series of morphological operations over the binary shadow candidate. In particular, the morphological opening and closing operations are employed with a certain structuring element [29], as presented in Equations (32) and (33):

A_open = (A ⊖ B) ⊕ B (32)

A_close = (A_open ⊕ B) ⊖ B (33)

The morphological operations contribute to the final optimized shadow image.
where B is the morphological structuring element, ⊖ and ⊕ denote erosion and dilation, A_open is the shadow result of the opening operation with the structuring element B, and A_close is the corresponding shadow result of applying the closing operation with B on the opening result of the initial shadow image.
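The binarization of Equation (31) and the morphological clean-up of Equations (32) and (33) can be sketched as below, assuming scikit-image; the rescaling of LSI to the 0-255 gray range and the direction of the threshold comparison are assumptions consistent with shadow taking larger index values. The variable lsi is the index image from the earlier sketch.

import numpy as np
from skimage.morphology import binary_opening, binary_closing, square

def binarize_and_clean(lsi_gray, t, selem=square(3)):
    # Eq. (31): pixels above the NVEM threshold are shadow candidates.
    a = lsi_gray > t
    a_open = binary_opening(a, selem)        # Eq. (32): opening removes specks
    a_close = binary_closing(a_open, selem)  # Eq. (33): closing fills small gaps
    return a_close

# Usage sketch: rescale LSI to 8-bit, threshold with NVEM, then clean up.
lsi_gray = np.uint8(np.round(255 * (lsi - lsi.min()) / (np.ptp(lsi) + 1e-12)))
mask = binarize_and_clean(lsi_gray, nvem_threshold(lsi_gray, m=2))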
Test Images
The proposed LSI shadow detection approach is developed on a Dell personal computer running the 64-bit Windows 7 operating system, equipped with a 3.2 GHz CPU and 4 GB RAM. In order to verify the shadow detection performance of the proposed LSI algorithm, comparative experiments are carried out with many test images from WorldView-3 of Tripoli, Libya and Rio de Janeiro, Brazil, and from WorldView-2 of Washington DC, USA, captured at different times (called WV3-Tripoli, WV3-Rio and WV2-WDC respectively), which are discussed in the next section (Section 4: Discussion). In this section, both qualitative and quantitative assessments are provided to evaluate the shadow detection performance of the proposed LSI method and several standard shadow detection algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) with two WorldView-3 test images of Tripoli, Libya [37], as shown in Figure 4a,b. Additionally, reference images of shadow regions are provided with the corresponding panchromatic versions of the test images in Figure 4a,b, with a spatial resolution of 0.31 m, as shown in Figure 5a,b. In particular, the test image Tripoli-1 in Figure 4a is a 400 × 300 pixel image that covers typical ground objects, such as shadow, urban buildings of various scales, asphalt roads, bare land and grass. The test image Tripoli-2 in Figure 4b is a 260 × 195 pixel image mainly consisting of shadow, buildings, asphalt roads, grass, playgrounds and parks. Specific details are further examined through qualitative visual comparison in the subjective evaluation. Moreover, the shadow detection performance of each algorithm is also quantified with shadow detection accuracy measurements in the objective evaluation. Qualitative and quantitative evaluations are both carried out over the shadow detection results by the proposed LSI approach and the five comparative methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) in the following comparative experiments.
Qualitative Visual Sense Comparison
Figures 6 and 7 respectively present the binary shadow detection results of test images Tripoli-1 and Tripoli-2 by the proposed LSI shadow detection approach and the five comparative methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]). In particular, Figures 6a-e and 7a-e list the shadow detection results by the proposed LSI approach in the various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ), while Figures 6f-j and 7f-j illustrate the shadow detection results by the five comparative methods. Shadow detection results are usually intuitively evaluated through visual comparison [14,20]. In order to evaluate the ability of the different color spaces in decoupling chromaticity and luminance, the shadow detection results produced by the proposed LSI approach in the various invariant color spaces are first compared through qualitative visual comparison, as presented in Figures 6a-e and 7a-e.
In Figure 6a-e, shadow is correctly classified to a great extent. Specifically, most ground objects in nonshadow regions are well distinguished from shadow, such as bluish housetops (region A in Figure 6a-e), dark asphalt roads and bare areas (regions B1 and B2 in Figure 6a-e), and grass and isolated vegetation (regions C1 and C2 in Figure 6a-e). Moreover, continuous shadow (region E in Figure 6a-e) and shadow containing highlighted ground objects (regions F1 and F2 in Figure 6a-e) are also identified properly. Good coherence is observed among the shadow detection results by the proposed LSI approach in these five invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ).
Similarly, the shadow detection results by the LSI algorithm for the test image Tripoli-2 in Figure 7a-e also show good agreement with the corresponding reference image in Figure 5b. In Figure 7a-e, shadow is again well distinguished from typical ground objects, like the greenish parts of the playground (region A in Figure 7a-e). Based on the good coherence among the shadow detection results in these invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ), the shadow detection results by the proposed LSI approach for test images Tripoli-1 and Tripoli-2 in the HSV color space, in Figures 6b and 7b, are selected for comparison with the shadow detection results by the five comparative methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) shown in Figures 6f-j and 7f-j.
As described above, shadow is well distinguished from most typical ground objects in the shadow detection result for the test image Tripoli-1 by the proposed LSI approach, as shown in Figure 6b. In Figure 6f, the shadow detection result by MC3 also shows a good detection effect on grass and large continuous shadow. However, many parts of the bluish housetops (region A in Figure 6f) and some dark asphalt roads (region B1 in Figure 6f) are wrongly classified as shadow. Moreover, in Figure 6g, the shadow detection result by NSVDI shows even more serious misclassification of bluish housetops and dark asphalt roads, although large shadow regions are detected. Similarly, most large shadow regions, such as building shadow, are well detected by SDI and SRI, as shown in Figure 6i,j; however, bluish housetops and dark asphalt roads are still mostly wrongly identified as shadow.
Moreover, parts of the grass and isolated vegetation are also identified as shadow by SDI and SRI (regions C1 and C2 in Figure 6i). Different from the shadow detection results in Figure 6f,g,i,j, the nonshadow misclassification problem is mostly avoided in the shadow detection result by LSRI, as shown in Figure 6h. However, shadow is not always detected completely (region E in Figure 6h), and highlighted parts in shadow regions are partially omitted (regions F1 and F2 in Figure 6h), which reveals that LSRI cannot deliver an excellent shadow detection performance. Compared with the shadow detection results by these five comparative algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) for the test image Tripoli-1, the shadow detection result by the proposed LSI algorithm alleviates the problems of shadow omission and typical nonshadow misclassification to a greater extent. Accordingly, a better visual impression is delivered by LSI.
As shown in Figure 7b, shadow is effectively distinguished from the bluish parts of the artificial playground (region A in Figure 7b), dark asphalt roads and tops of buildings (regions B1 and B2 in Figure 7b) and continuously distributed greenish grass (region C in Figure 7b). Moreover, highlighted parts in shadow areas are also correctly identified (region F in Figure 7b). Shadow is also well separated from grass by MC3, as shown in Figure 7f. However, too many nonshadow regions are still misclassified, such as most bluish parts of the playground (region A in Figure 7f) and dark asphalt roads and tops of buildings (regions B1 and B2 in Figure 7f). Similarly, although most shadow regions are well identified by NSVDI, SDI and SRI, the nonshadow misclassification problem remains obvious in Figure 7g,i,j, for the bluish parts of the playground (region A in Figure 7g,i,j), dark asphalt roads (region B1 in Figure 7g,i,j) and greenish grass (region C in Figure 7g,i,j). By contrast, in Figure 7h, most shadow and nonshadow regions are well separated, such as the bluish parts of the playground (region A in Figure 7h), dark asphalt roads and tops of buildings (regions B1 and B2 in Figure 7h) and continuously distributed greenish grass (region C in Figure 7h), which shows that a relatively good detection effect is achieved by LSRI. A satisfactory overall shadow detection effect is obtained in Figure 7h, even though parts of the highlighted shadow are still omitted. As can be observed in Figure 7b,f-h, the results by LSI and LSRI give a better visual impression.
In general, compared with the shadow detection results for test images Tripoli-1 and Tripoli-2 by the five other shadow detection methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]), the proposed LSI approach effectively distinguishes shadow from several typical kinds of nonshadow (such as bluish, greenish and large dark nonshadow, which are easily misclassified) and detects most highlighted parts of shadow well. It can be concluded that the proposed LSI algorithm further alleviates the problems of shadow omission and typical nonshadow misclassification, and delivers a better visual impression.
Quantitative Evaluation
Different from the qualitative visual comparison above, a quantitative assessment is also performed by calculating the confusion matrix for the shadow detection results of both test images Tripoli-1 and Tripoli-2. Several shadow detection accuracy measurements used in the objective assessment are calculated from the confusion matrix [26,38-40]. These measurements are computed at the pixel level with Equations (34)-(38) [9,14,20], including the producer's accuracy (ρ_s and ρ_n), the user's accuracy (µ_s and µ_n), the committed error (e_c), the omitted error (e_o), and the overall accuracy (τ):

ρ_s = TP/(TP + FN), ρ_n = TN/(TN + FP) (34)

µ_s = TP/(TP + FP), µ_n = TN/(TN + FN) (35)

e_c = FP/(TP + FP) (36)

e_o = FN/(TP + FN) (37)

τ = (TP + TN)/(TP + TN + FP + FN) (38)

where TP (true positive) is the number of true shadow pixels correctly identified, TN (true negative) is the number of true nonshadow pixels correctly classified, FP (false positive) is the number of true nonshadow pixels wrongly identified as shadow, and FN (false negative) is the number of true shadow pixels wrongly classified as nonshadow; TP + FN and TN + FP respectively denote the numbers of true shadow and true nonshadow pixels in the original image, TP + FP and TN + FN respectively denote the numbers of shadow and nonshadow pixels in the classified result, and TP + TN + FP + FN is the total number of pixels in the whole image. Ideal shadow detection methods usually have high values of the producer's accuracy, the user's accuracy and the overall accuracy, together with low values of the committed and omitted errors. In particular, the overall accuracy is the most important of these measurements, as it states the overall shadow detection ability of an algorithm. Accordingly, these measurements are employed for evaluating the performance of the proposed LSI shadow detection approach in the comparative experiments. They are presented in Tables 1 and 2 for the shadow detection results by the LSI algorithm in the five invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) and by the five comparative algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) for test images Tripoli-1 and Tripoli-2. As shown in Table 1, high values are achieved by the LSI algorithm in the various invariant color spaces for the test image Tripoli-1 in terms of the nonshadow producer's accuracy (about 95%), the nonshadow user's accuracy (about 94%) and the overall accuracy (over 92%). Additionally, relatively high and stable values are obtained for the shadow producer's and user's accuracies, and relatively low values for the committed and omitted errors. Generally speaking, ideal accuracy measurements are achieved by the LSI algorithm in these invariant color spaces for the test image Tripoli-1, which not only reveals the good capability of these color spaces in decoupling chromaticity and luminance, but also demonstrates the excellent performance and robustness of the LSI algorithm.
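The accuracy measurements of Equations (34)-(38) reduce to simple ratios of the confusion matrix entries; the sketch below follows the usual remote sensing convention for the committed and omitted errors (the complements of the user's and producer's accuracies), which is an assumption where the exact original formulas are concerned.

def shadow_accuracy(tp, tn, fp, fn):
    """Pixel-level accuracy measurements of Equations (34)-(38)."""
    rho_s = tp / (tp + fn)                 # shadow producer's accuracy
    rho_n = tn / (tn + fp)                 # nonshadow producer's accuracy
    mu_s = tp / (tp + fp)                  # shadow user's accuracy
    mu_n = tn / (tn + fn)                  # nonshadow user's accuracy
    e_c = fp / (tp + fp)                   # committed error = 1 - mu_s (assumed)
    e_o = fn / (tp + fn)                   # omitted error = 1 - rho_s (assumed)
    tau = (tp + tn) / (tp + tn + fp + fn)  # overall accuracy
    return rho_s, rho_n, mu_s, mu_n, e_c, e_o, tau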
Similarly, as presented in Table 2, relatively high and consistent accuracy measurements are also acquired for the test image Tripoli-2 for the shadow detection results by the proposed LSI algorithm in these invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ). In particular, very high values are obtained for the nonshadow producer's accuracy (about 98%), the shadow user's accuracy (about 94%) and the overall accuracy (approximately 91%), and relatively low values (less than 2%) are acquired for the committed error. In general, the proposed LSI approach acquires relatively ideal and stable shadow detection accuracy measurements for the test image Tripoli-2 in these invariant color spaces. Additionally, the time consumption of shadow detection by the LSI algorithm in the various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) and by the five comparative algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) is summarized for test images Tripoli-1 and Tripoli-2 in Table 3. As can be observed in Table 3, the time consumption of the LSI algorithm is relatively small in these invariant color spaces for both test images, owing to the simple computation of these color space conversions; the exception is the CIELCh color space, whose conversion from the RGB color space is more complex. In particular, the least time is consumed for shadow detection in the HSV color space by the proposed LSI algorithm for both test images Tripoli-1 and Tripoli-2. Hence, the proposed LSI shadow detection algorithm is most timesaving in the HSV color space. Considering the excellent and stable performance in these invariant color spaces and the most timesaving performance in the HSV color space, and for the sake of simplicity, the performance comparison is conducted between the shadow detection results by the LSI algorithm in the HSV color space and the five comparative shadow detection algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) for both test images Tripoli-1 and Tripoli-2.
For the test image Tripoli-1, a higher value of the overall accuracy (over 92%) is acquired for the shadow detection result by the proposed LSI algorithm in the HSV color space, compared with the overall accuracies of the results by the five comparative methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]). Although relatively high overall accuracies are also obtained by MC3 and LSRI, an obvious gap (about 3%) remains compared with that of the proposed LSI approach, which indicates that the proposed LSI method performs better for the shadow detection of the test image Tripoli-1. In addition, relatively low values of the committed and omitted errors and high values of the shadow user's accuracy are acquired by MC3, which reveals that the MC3 method performs relatively well for shadow detection of the test image Tripoli-1. However, even though relatively high values of the shadow producer's accuracy and the nonshadow user's accuracy, as well as relatively low omitted errors, are obtained by NSVDI, SDI and SRI, the relatively low overall accuracy and high committed error still hinder effective shadow detection for the test image Tripoli-1, which indicates the poor performance of NSVDI, SDI and SRI in effectively detecting shadow in this image. Therefore, further study is still needed for NSVDI, SDI and SRI in detecting shadows of HSR satellite images. By contrast, relatively high overall accuracy and low committed error are acquired by LSRI for the test image Tripoli-1, revealing that the LSRI method performs well in correctly distinguishing shadow from easily confused nonshadow. In general, the proposed LSI algorithm delivers higher values of the nonshadow producer's accuracy (over 95%), the nonshadow user's accuracy (about 94%) and the overall accuracy (over 92%), stable values of the shadow producer's accuracy (about 83%) and the shadow user's accuracy (over 87%), and a lower committed error (less than 5%), which reveals the excellent shadow detection performance and robustness of the proposed LSI algorithm for the test image Tripoli-1.
Similarly, for the test image Tripoli-2, the proposed LSI approach also achieves a higher overall accuracy value than the five comparative methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]). Additionally, relatively low values of the overall accuracy are obtained by MC3, NSVDI and SDI, although the corresponding omitted error values are relatively low, which indicates the poor performance of MC3, NSVDI and SDI for shadow detection of the test image Tripoli-2.
Relatively low values of the overall accuracy and the user's accuracy are also acquired by SRI, revealing that considerable room for improvement remains for SRI in shadow detection of the test image Tripoli-2. In contrast, better performance is shown by LSRI, with relatively high values of the overall accuracy (close to 89%) and the user's accuracy as well as a low omitted error (about 5%), even though its accuracy measurements are slightly inferior to those of the proposed LSI approach. Consequently, the proposed LSI algorithm presents the better performance for the test image Tripoli-2.
Through comparing the shadow detection results of test images Tripoli-1 and Tripoli-2 by the proposed LSI approach and the five other methods (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) both qualitatively and quantitatively, it can be concluded that the proposed LSI shadow detection approach further resolves the typical problems of shadow omission and nonshadow misclassification (such as bluish, greenish and large dark nonshadow misclassification), and delivers an excellent, robust and timesaving performance for shadow detection of HSR satellite images.
Discussion
The proposed LSI shadow detection algorithm performs well in several invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) in the preceding comparative experiments with test images Tripoli-1 and Tripoli-2. Notably, the performance of the LSI algorithm is mainly affected by the operations in the latter two steps of the workflow (i.e., Step 3 and Step 4). In this section, corresponding discussions are provided to analyze the influence of the logarithmic operation as well as the sensitivity to the threshold parameter m and to the structuring element of the morphological operation. Accordingly, additional experiments are conducted to analyze these influential factors on the shadow detection results with test images Tripoli-1 and Tripoli-2.
Influence Analysis of the Logarithmic Operation
As described in Step 3, the initial shadow index is refined with a logarithmic operation, resulting in the logarithmic shadow index, to further improve the capability of separating shadow from nonshadow. In particular, the logarithmic operation compresses the initial shadow index and expands the discrimination between the pixel values of shadow and nonshadow [24]. In this part, the impact of the logarithmic operation is analyzed by comparing the performance of shadow detection with the initial shadow index and with the logarithmic shadow index in the several invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ), in additional experiments with test images Tripoli-1 and Tripoli-2. Figure 8 illustrates the overall accuracies of the shadow detection results by the initial shadow index and the logarithmic shadow index for test images Tripoli-1 and Tripoli-2, respectively. As illustrated in Figure 8a,b, higher overall accuracies are acquired with the LSI shadow index in these invariant color spaces for both test images than with the ISI shadow index. At the same time, relatively high overall accuracies are obtained with both the ISI and LSI indices in most of the invariant color spaces mentioned above, which indicates that the employed shadow properties (i.e., higher hue, lower intensity and a dramatic decrease in the NIR component) already expand the distinction between shadow and the corresponding nonshadow substantially. Furthermore, the obvious gap between the overall accuracy with the LSI shadow index and that with the ISI shadow index reveals that the applied logarithmic operation further reinforces the difference between shadow and nonshadow in the LSI construction of Step 3, which contributes to the good performance of the LSI shadow detection algorithm. Therefore, we finally accomplish the shadow detection of the test images in the various invariant color spaces based on the LSI shadow index.
Sensitivity Analysis of the Neighborhood Parameter
In this study, the shadow detection result is initially acquired by binarizing the shadow index image with the optimal threshold determined by the NVEM thresholding algorithm, as presented in Step 4 of the workflow. However, according to the thresholding solution of Equations (26)-(30), the optimal threshold is sensitive to the neighborhood parameter m. As noted in related studies, uncertainties appear in the binarization of natural images when determining the optimal threshold with different neighborhood parameter values [27]. Hence, in order to further explore the impact of the neighborhood parameter m on the shadow detection performance for HSR multispectral satellite remote sensing images, we run additional experiments in the various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) with the neighborhood parameter m set from 1 to 40 with an interval of 1 for test images Tripoli-1 and Tripoli-2. Figure 9 depicts the sensitivity of the LSI algorithm performance to the neighborhood parameter m of the NVEM thresholding method for test images Tripoli-1 and Tripoli-2, respectively. As illustrated in Figure 9a,b, the overall accuracies keep relatively high values and a stable trend in the various invariant color spaces with m from 1 to 28 for Tripoli-1 and with m from 1 to 20 for Tripoli-2, which indicates that excellent performance and robustness are acquired with a moderate neighborhood parameter m for the test images in these invariant color spaces. The difference between Figure 9a and Figure 9b also shows that the optimal neighborhood parameter m depends on the target image. Accordingly, we process Tripoli-1 with an optimal neighborhood parameter m = 25, and Tripoli-2 with m = 2, respectively.
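A sensitivity sweep of this kind can be expressed in a few lines, reusing the nvem_threshold and shadow_accuracy sketches given earlier; lsi_gray (the rescaled LSI image) and ref (a boolean reference shadow mask) are assumed inputs, not quantities defined by the paper.

import numpy as np

results = {}
for m in range(1, 41):                       # m = 1..40, interval of 1
    t = nvem_threshold(lsi_gray, m=m)        # optimal threshold for this m
    pred = lsi_gray > t                      # binary shadow candidate
    tp = int(np.sum(pred & ref)); fn = int(np.sum(~pred & ref))
    fp = int(np.sum(pred & ~ref)); tn = int(np.sum(~pred & ~ref))
    results[m] = shadow_accuracy(tp, tn, fp, fn)[-1]  # overall accuracy τ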
Sensitivity Analysis of the Morphological Operation
The shadow detection results are usually post-processed with a denoising algorithm, such as the morphological operation [29] or the box filtering process [22]. In our study, the final shadow detection results are achieved by optimizing the shadow candidates with a morphological operation. However, the structuring element is a significant influential factor for the effective use of the morphological operation. Therefore, both the type of the morphological structuring element and its scale α should be taken into consideration. In this part, we analyze the sensitivity of the LSI shadow detection algorithm to the morphological structuring element through additional experiments in the several invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ) with various structuring element types (i.e., cube, diamond, disk, sphere and square) and with structuring element scales α set from 1 to 20 with an interval of 1 for test images Tripoli-1 and Tripoli-2. Figure 10 presents the sensitivity of the LSI algorithm to the morphological structuring element for test images Tripoli-1 and Tripoli-2, respectively. As depicted in Figure 10a,c,e,g,i for Tripoli-1, the higher overall accuracies of the shadow detection results obtained with the cube and square structuring element types indicate the better performance of the LSI algorithm when optimized with these types, and the decreasing trend of the overall accuracy with increasing structuring element scale α reveals that more effective information is treated as noise with a bigger structuring element scale. Additionally, the similarity of the overall accuracies in Figure 10a,c,e,g,i for Tripoli-1 confirms the excellent performance and good stability of the LSI algorithm in these invariant color spaces. The same phenomenon appears for Tripoli-2, as presented in Figure 10b,d,f,h,j. In accordance with the decreasing trend of the overall accuracy with increasing structuring element scale for the various structuring element types in these invariant color spaces, we optimize the binary shadow detection results by applying the morphological operation with a cube structuring element and a structuring element scale α = 1, which yields the final shadow detection image.
LSI Method Generalization Analysis
As described in Section 2, many test images (i.e., WV3-Tripoli, WV3-Rio and WV2-WDC) are employed to explore the validity of the proposed LSI method. The generalization of the LSI method is analyzed with the overall accuracy of the shadow detection results for these test images, since the overall accuracy is the most powerful evidence of shadow detection performance. Figure 11a-c respectively depict the overall accuracies of the shadow detection results of 16 test images of WV3-Tripoli, 16 test images of WV3-Rio and 16 test images of WV2-WDC by the proposed LSI method in the various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ). As can be observed in Figure 11a-c, relatively high values of the overall accuracy are acquired for most test images of WV3-Tripoli, WV3-Rio and WV2-WDC, which shows the good shadow detection ability of the proposed LSI method for most of these test images. Additionally, stable and high overall accuracies are obtained for the test images in the HIS, HSV, CIELCh and YIQ spaces, although the LSI method fails to detect shadow in six test images of WV3-Rio in the YCbCr space. Through comparing the overall accuracies of the shadow detection results for the test images of WV3-Tripoli, WV3-Rio and WV2-WDC, it can be concluded that the proposed LSI method completes the shadow detection tasks and delivers an excellent shadow detection performance for HSR multispectral satellite remote sensing images. Given this, two test images of WV3-Tripoli are employed in this paper to specifically evaluate the shadow detection performance of the proposed LSI method against the comparative shadow detection algorithms, as discussed previously.
Conclusions
In this paper, we develop and validate a logarithmic shadow index (LSI)-based shadow detection approach that mainly employs the properties of typical invariant color components in various invariant color spaces (i.e., HIS, HSV, CIELCh, YCbCr and YIQ), namely higher hue and lower intensity components, as well as the dramatic decrease of the near-infrared component relative to the visible band components (i.e., red, green and blue). A better visual impression and higher overall accuracies (over 92% for the test image Tripoli-1 and approximately 91% for the test image Tripoli-2) are acquired by the proposed LSI shadow detection approach against the comparative algorithms (i.e., MC3 [18], NSVDI [25], LSRI [24], SDI [26] and SRI [20]) in the comparative experiments, which reveals the excellent performance and robustness of the proposed LSI shadow detection approach for high-resolution satellite images. Therefore, the proposed LSI shadow detection approach is a promising one, further settling the typical shadow detection problems of small shadow omission and typical nonshadow misclassification for high-resolution satellite images. In the future, we will further research shadow detection techniques considering the interference of water, snow and desert on the basis of our current study.
"year": 2020,
"sha1": "7f6a5a07025a43afb8aa0fa4cbf142308a57205d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/18/6467/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "11678a71f46fcc528edfc0e2541143f5cee5d039",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": []
} |
Childbirth or termination of pregnancy: does paid employment matter? A population study of women in reproductive age in Norway
Abstract Introduction We studied whether female paid employment is associated with pregnancy outcome: childbirth or pregnancy termination. Material and methods All women in Norway, 16–54 years of age, during the years 2007–10 were included. Data sources were the Norwegian Central Person Registry, the Medical Birth Registry of Norway, and the Registry of Pregnancy Termination. We compared the proportion without paid employment among all women, among women who gave birth, and among women who requested termination of pregnancy. Thereafter, among pregnant women, we estimated the odds ratio for a pregnancy termination request for women without paid employment by applying logistic regression analyses, using women with paid employment as the reference. Results Among all women 16–54 years of age, 23.5% were without paid employment. Among women who gave birth, 15.8% were without paid employment, whereas this proportion was 46.4% among women who requested pregnancy termination (p < 0.05). Among the 307 512 women who were pregnant, 60 734 (19.4%) requested pregnancy termination. The odds ratio for a pregnancy termination request was 3.18 (95% CI 3.11–3.25) for women without paid employment. Adjustments were made for age, number of children, and region of residence in Norway. Conclusion Being without paid employment was more common among women in the general population and among women requesting pregnancy termination than among women who gave birth. Hence, women seem to have children when they are in paid employment. The role of women's paid employment in reproductive choices should be further investigated.
Introduction
Reproduction is essential for population maintenance. Hence, knowledge about factors that influence reproduction is important. The demographic transition to fewer children per woman has been explained by altered preferences for women, from childbearing and childrearing to education and paid employment. Hence, high educational and employment levels among women have been linked to low fertility rate (1). The fertility patterns in many European countries today support the hypothesis of altered preferences for women. The educational and employment levels are high, whereas the fertility rate is low and below replacement level (2).
However, the inverse associations of educational and employment levels with fertility rate have been questioned, because the decrease in fertility rate occurred before the increase in women's level of education and employment (3). In the Western world, well-educated women and women with high income seem to have more children than women with lower education (4). Pregnancy termination has been associated with no or low family income (5), which may also suggest that childbirth occurs while being in paid employment.
Parental leave benefits after childbirth have been assumed to increase the fertility rate. The relatively high fertility rates in Scandinavian countries, compared with many other European countries, have been used to illustrate this. The parental leave benefits in Norway are generous and the fertility rate is among the highest in Europe (2). A woman with paid employment receives full economic compensation from the National Insurance for 49 weeks [46 weeks in 2009 (6)] while taking care of her infant (7). However, the right to parental leave benefits in Norway is closely linked to the woman's employment. Without paid employment 6 months before childbirth, a woman receives a tax-free lump sum transfer only (8). Hence, the right to parental leave benefits may further encourage childbirth while having paid employment. It may therefore be proposed that women without paid employment choose not to become pregnant, and that they request pregnancy termination if they become pregnant.
We aimed to study whether female paid employment is associated with reproductive patterns. We compared the proportion of women in Norway without paid employment; among all women in reproductive age, among women with childbirth, and among women who requested pregnancy termination. Additionally, and among the pregnant women, we estimated odds ratio (OR) for pregnancy termination request associated with having no paid employment.
Material and methods
The study included all women of reproductive age, 16-54 years old, in Norway during the period 2007-10.
To obtain information about the proportion of all women, 16-54 years of age, who were without paid employment, we used population statistics from the Norwegian Central Person Registry (9). Paid employment was defined as having a yearly personal income above the tax-free income (more than 39 950 Norwegian kroner/€4327.90) (10). The Central Person Registry is administered by the Norwegian Taxation Authorities. We present the mean proportion of women without paid employment across the years 2007-10.
To obtain information about the proportion of women without paid employment among women with childbirth, we used data from the Medical Birth Registry of Norway with individual linkage to the Norwegian Central Person Registry. All births in Norway are reported to the Medical Birth Registry by law (11). The unique person identification number given to all individuals living in Norway enabled the link to be made between the Medical Birth Registry and the Norwegian Central Person Registry (12) and made it possible to obtain information about individual income. Information about income was available through the year 2010. Paid employment was defined as having personal income above the tax-free income (10) during the year of childbirth. Parental leave benefits (above the tax-free lump sum transfer) may be included in the income during the year of childbirth.
For women who requested pregnancy termination, we obtained information about paid employment from the Registry of Pregnancy Termination, to which all requests for pregnancy termination are reported by law. The Norwegian Institute of Public Health administers this registry, and since 2007 the reporting has been performed electronically (13,14). In Norway, pregnancy termination is performed on the woman's request within pregnancy week 12. In our study, only requests within pregnancy week 12 were included, representing 95% of all requests for pregnancy termination (13). Pregnancy termination is performed or initiated in hospitals, by law. The Registry of Pregnancy Termination includes individual but anonymous data that are obtained through a standardized patient journal (15) completed by the doctor at the clinical examination, typically 1-5 days before the pregnancy termination. In the standardized patient journal, the answer categories regarding paid employment were mutually exclusive and included: currently having full-time work, part-time work, being a student, housewife, working without payment, disabled, or seeking employment. We defined paid employment as having full-time or part-time work (coded: yes or no). We had no information about level of income.
To link individual data from the Medical Birth Registry with data from the Central Person Registry, we obtained approvals from the Norwegian Data Inspectorate. The data files that we used did not include personal identification numbers. Hence, all women included in our study were anonymous to the researchers. We present the number and proportion (%) of women without paid employment among all women in Norway 16-54 years of age, women who gave birth, and women who requested pregnancy termination. We tested for differences in the proportion without paid employment between groups by applying a chi-squared test. Additionally, by applying logistic regression analysis, we estimated crude and adjusted ORs for a pregnancy termination request associated with having no paid employment. In these analyses, we included all pregnant women (women with childbirth and women who requested termination of pregnancy) with available information on all of the study factors. Our main exposure variable was paid employment (yes/no). We made adjustments for the following factors that could be associated with both paid employment and pregnancy termination and so be confounding factors: age [<20, 20-24, 25-29, 30-34, 35-39, and ≥40 years], number of children, year of reproductive event, and region of residence.
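As a hedged illustration of the adjusted odds-ratio estimation, the following sketch fits a logistic regression with statsmodels; all variable names (termination, employed, age_group, parity, region, year) and the data frame pregnancies are illustrative assumptions, not the registries' actual field names.

import numpy as np
import statsmodels.formula.api as smf

# Outcome: 1 = termination request, 0 = childbirth. Setting the reference
# level of 'employed' to 'yes' makes the reported OR that of women
# without paid employment, as in the paper.
model = smf.logit(
    "termination ~ C(employed, Treatment('yes')) + C(age_group, Treatment('25-29'))"
    " + C(parity) + C(region) + C(year)",
    data=pregnancies,                 # one row per pregnant woman (assumed)
).fit()
print(np.exp(model.params))           # adjusted odds ratios
print(np.exp(model.conf_int()))       # 95% confidence intervals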
Results
In total, there are approximately 1.3 million women 16-54 years of age in Norway. During the years 2007-10, 23.5% (mean per cent across the years 2007-10, range 22.0-24.8%) of these women were without paid employment.
Among all women of reproductive age, 312 773 women either gave birth or requested pregnancy termination during the years 2007-10. Among these, 5261 women were excluded from further data analyses due to lack of values for one or more study factors.
Among the 307 512 women in our study, 246 778 gave birth. Their mean age was 29.8 years [standard deviation (SD) 5.3 years], and 15.8% were without paid employment.
There were 60 734 requests for pregnancy termination, and the mean age of the women with pregnancy termination request was 27.5 years (SD 7.1 years). A total of 46.4% of these women were without paid employment ( Figure 1). The difference between the groups in the proportion of women without paid employment was statistically significant (p < 0.05, chi-squared test).
Among pregnant women, the crude OR for pregnancy termination request was 4.60 (95% CI 4.51-4.69), for women without paid employment using women with paid employment as the reference ( Table 1). The corresponding adjusted OR was 3.18 (95% CI 3.11-3.25). There was a U-shaped association of age with pregnancy termination, so 57.3% of pregnant women <20 years of age, and 33.9% of pregnant women ≥40 years of age requested pregnancy termination. Only 11.4% of the pregnant women 30-34 years of age requested pregnancy termination. The adjusted ORs for pregnancy termination request were 5.04 (95% CI 4.83-5.26) for women <20 years of age and 2.14 (95% CI 2.04-2.24) for women ≥40 years old, using 25-29 years of age as the reference. Having two children or more was also associated with increased OR for pregnancy termination request.
Discussion
Among pregnant women in Norway during the years 2007-10, women without paid employment had a more than threefold increase in OR for pregnancy termination compared with women with paid employment, after adjustment for age, number of children, and region of residence. Being without paid employment was also more common among women of reproductive age in general, than among women with childbirth.
Our study included all women of reproductive age, all women with childbirth, and all women with a request for pregnancy termination in Norway. Hence, biased estimates due to skewed selection are unlikely. Among all women of reproductive age and women who gave birth, the definition of paid employment was a yearly income above the tax-free income, as recorded by the Norwegian Taxation Authorities (more than 39 950 Norwegian kroner/€4327.90). For women who requested termination of pregnancy, the definition of paid employment was having full-time or part-time work as reported to the Registry of Pregnancy Termination. This difference in the definition of paid employment may have biased our estimates. However, it is likely that the women who requested pregnancy termination and had full-time or part-time work also had an income above the tax-free income. Hence, they fulfilled the definition of paid employment used for the other women in our study, and misclassification of paid employment is unlikely to have occurred. We had no information about level of income among women with a request for pregnancy termination, so we could not compare income between groups. Some of the women with a request for pregnancy termination may not have had the termination performed. Also, some women may have had more than one reproductive event during our study period. As our data were anonymous, we could not identify these women. We used information about employment status at the time of the reproductive event in our analyses. There is little reason to believe that recurrent pregnancy terminations were more common than recurrent childbirths according to employment status, or that the extent of such a possible bias would have altered the direction of our estimates.
We could only include in our analyses study factors that were available in the registries. We made adjustment for age, number of children, year of reproductive event, and region of residence. Unfortunately, we had no information about the country of birth for the women who requested pregnancy termination. Hence, we do not know whether the association of paid employment with reproductive outcome is valid across ethnic background. In Norway, non-Western women are over-represented among women with childbirth (16) and among women with pregnancy termination, suggesting a higher pregnancy rate in non-Western women (17). Non-Western women living in Norway are also more often unemployed than women who are born in Norway (18). We had no information about mental health, partner status, or social network, factors that also may be associated with both reproductive pattern and paid employment. It could also be argued that no adjustment should be performed, because the relations of different factors with paid employment and with reproduction are not well known. Recent studies suggest that termination of pregnancy may be linked to low education (19), low social status (20), age (19), and foreign origin (17,21). These factors are also closely linked to low or no income. We are not aware of any previous population studies of the association of paid employment with pregnancy termination among women who are pregnant. A study in Oslo, Norway during 2000-02, suggested that Pakistani women with low education had more children and fewer pregnancy terminations than women with high education (19). Norwegian born women displayed an opposite pattern. It is likely that high education is related to having paid employment. If that is true, the association of paid employment with childbirth may differ across ethnic groups within one country.
Generally, women in Norway have high educational level and high level of employment (18). In our study, 84% of women with childbirth were in paid employment, and this proportion was higher than in the general population of women of reproductive age. Our findings therefore suggest that women choose to give birth when they are in paid employment. The fertility rate in women <30 years of age has declined since the beginning of the 1970s, and the fertility rate in women >30 years has increased (22). These observations strongly suggest that women have delayed childbearing and choose to have children when they are in paid employment. Hence, women's income may have become increasingly important for the economic support of children.
In our study, women with childbirth were particularly likely to be in paid employment. This finding could not be explained by age differences between women with and without childbirth. Our findings therefore suggest that there may be a selection into childbirth among women who are in paid employment. One mechanism behind such selection could be that women without paid employment more often terminate their pregnancy than women with paid employment.
In addition to having a stable income, the right to parental leave benefits may encourage women to have their childbirths while in paid employment. In Norway, only women with paid employment have the rights to the generous parental leave benefits. The mother or the father may receive full economic compensation for 49 weeks (7) [46 weeks in 2009 (6)] from National Insurance. The National Insurance compensates for a yearly income up to 530 220 Norwegian kroner [€57 440.31 in 2014; €4696.14 in 2009 (6)] (23). If the mother is without paid employment, she receives a tax-free lump sum transfer only, 38 750 Norwegian kroner [€4197.90 in 2014, €3765.60 in 2009 (6)] (8).
Parental leave benefits vary widely across the world (24). The parental leave benefits in Norway and the other Scandinavian countries have been used to explain the relatively high fertility rates in the Scandinavian countries compared with many other European countries. Comparisons of national fertility rates according to parental leave benefits do not provide sufficient evidence for understanding the effects of parental leave benefits. Our findings could suggest that parental leave benefits discourage childbirth in women without paid employment. Our findings should encourage further studies on whether parental leave benefits influence reproductive choices independent of income level.
In our study of all women in Norway during 2007-10, 84% of the women with childbirth were in paid employment. Pregnant women without paid employment had more than a threefold increased OR for pregnancy termination. The role of maternal employment and parental leave benefits on reproductive choices should be further investigated.
Funding
This work was supported by South-Eastern Norway Regional Health Authority (research grant number 2709002). South-Eastern Norway Regional Health Authority has no part in this study except funding.
"year": 2016,
"sha1": "f59d9f6046ac8ca514c2164b178fd80a66ab588a",
"oa_license": "CCBYNC",
"oa_url": "https://obgyn.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/aogs.12867",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f59d9f6046ac8ca514c2164b178fd80a66ab588a",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Diversity of Zooplankton in Seagrass Ecosystem of Mandapam Coast in Gulf of Mannar
Abstract The present investigation was carried out to assess the distribution of zooplankton in a seagrass ecosystem in comparison with that of coastal waters without seagrasses in the Gulf of Mannar. Water and plankton samples were collected from the seagrass ecosystem (Station 1) and a control station without seagrasses (Station 2) from September 2016 to May 2017. The physico chemical parameters were analysed, and the mean values of surface water temperature, salinity, pH, dissolved oxygen, nitrite, nitrate, phosphate, silicate, gross primary productivity and chlorophyll-a were 27.72° C, 34.17 ppt, 7.99, 3.85 ml.l-1, 0.25 µM, 0.01 µM, 0.63 µM, 1.03 µM, 0.22 mg.C.m-3.h-1 and 0.24 mg.m-3 respectively. In total, 59 species of zooplankton were recorded from each of the two stations, with maximum densities of 667400 and 935300 nos. m-3 in stations 1 and 2 respectively. The higher density of zooplankton was observed during the summer months. The species richness and diversity indices showed maximum values of 7.13 and 1.58 bits.ind-1 respectively in the seagrass ecosystem, which indicates that the diversity of plankton is higher in the seagrass ecosystem than in the coastal waters without seagrasses.
Introduction
Among the marine ecosystems, coastal areas of the seas are more fertile and productive regions than the offshore regions of the sea. Worldwide approximately half of the population live in coastal zones and about a billion people rely on the coastal environment for fish as their main source of protein.
Coastal environment is very dynamic, with many cyclic and random processes owing to a variety of resources and habitats, and coastal ecosystems are the most productive ecosystems on earth. Plankton are one of the important components of any aquatic ecosystem, as these organisms form the base of the food chain. The distribution and growth of plankton depend on the availability of inorganic nutrients and the physico chemical characteristics of the coastal waters. Zooplankton are small heterotrophic animals that play a key role in the coastal as well as oceanic food web. They form the intermediate link between phytoplankton and fishes of the higher trophic levels, and are the important link in the transfer of energy from primary producers to organisms of the higher trophic levels. Zooplankton also include the early life history stages of commercially important fin fishes (ichthyoplankton) and shell fishes. Zooplankton are used as indicators of the overall health of the ecosystem, since they respond quickly to aquatic environmental changes. Temperature, salinity, and food supply are some of the important factors that are known to cause spatial changes in zooplankton populations (Fernandes and Ramaiah, 2009).
The GoM, located between Rameswaram and Kanyakumari, has a chain of 21 islands (each 0.95 to 130 ha in area) along the 140 km stretch between Tuticorin and Rameswaram, at 08°55'-09°15' N lat. and 78°0'-79°16' E long. (Gopakumar et al., 2009). The GoM is unique for its heterogeneous biological resources and is commonly known as the 'Paradise of Marine Biologists'; it is a legally protected Marine Biosphere Reserve (Jyothibabu et al., 2013). The islands have fringing and patchy coral reefs, seaweeds, seagrasses and mangroves rising from shallow areas of the sea shore. They are biologically diverse, ecologically productive and economically valuable ecosystems which import and export considerable amounts of nutrients and organic matter between the terrestrial and marine ecosystems. The presence of multiple habitats like seagrass beds, coral reefs and mangroves not only supports a rich variety of fauna but also provides natural protection from storms and waves.
Materials and Methods
The present investigation was carried out to study the zooplankton diversity in the seagrass ecosystem at Chinnapaalam (Station 1, lat 9° 15'55''N; long 79° 12'23''E) in comparison with a reference site without seagrasses at Kundhukal (Station 2, lat 9° 15'13''N; long 79° 13'8''E) (Fig. 1), which is situated 2 km away from Station 1. Surface water and plankton samples were collected once a month from the two stations (1 and 2) from September 2016 to May 2017 to analyse the physico chemical parameters of the water and to assess the diversity and density of zooplankton. The water samples were collected in the morning, between 7.00 a.m. and 9.00 a.m., at both stations. Surface water temperature was measured using a standard mercury-filled centigrade thermometer with an accuracy of 0.1° C at the sample collection site itself. The surface water samples were collected in pre-cleaned polypropylene bottles at both stations and transported to the laboratory for further analysis. The physico chemical parameters, viz., salinity, pH, dissolved oxygen, nutrients (nitrite, nitrate, phosphate and silicate), primary productivity and chlorophyll-a, were analysed for all the water samples by following the standard procedures of Strickland and Parsons (1972).
Plankton samples were collected from the surface water by filtering 500 l of seawater through a hand plankton net (bolting silk no. 30). The collected plankton samples were preserved on site in plastic bottles with 5% formalin for further analysis at the laboratory. Plankton samples were analysed for species composition and plankton density using a Nikon inverted microscope (Eclipse TS 100). Zooplankton were identified using the keys of the standard publications of Kasturirangan (1963) and Santhanam and Srinivasan (1994). For the quantitative estimation of zooplankton, a 1 ml subsample was taken from the plankton concentrate in a Sedgewick-Rafter counting cell, which was then placed on the microscope stage for counting. The density of zooplankton was expressed as numbers per m³. For each sample, two counts were made and the average was recorded. The species richness (D) of the plankton samples was determined following Gleason (1922), and species diversity was calculated as per Shannon and Wiener (1949), as sketched below.
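For clarity, the following minimal sketch illustrates how these two indices are commonly computed from species counts. Since the paper reports diversity in bits ind⁻¹, a base-2 logarithm is assumed for the Shannon-Wiener index, and the Gleason richness index is taken in its usual form D = (S − 1)/ln N; the exact variants used by the authors are not spelled out in the text, and the example counts are hypothetical.

```python
import math

def gleason_richness(counts):
    """Gleason (1922) species richness: D = (S - 1) / ln(N),
    where S is the number of species and N the total count."""
    present = [c for c in counts if c > 0]
    s = len(present)        # number of species observed
    n = sum(present)        # total number of individuals
    return (s - 1) / math.log(n)

def shannon_diversity(counts, base=2.0):
    """Shannon-Wiener diversity: H' = -sum(p_i * log(p_i)).
    base=2 expresses H' in bits per individual."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n, base) for c in counts if c > 0)

# Hypothetical counts (individuals per sample) for five species:
sample = [120, 45, 30, 8, 2]
print(f"D  = {gleason_richness(sample):.2f}")
print(f"H' = {shannon_diversity(sample):.2f} bits per individual")
```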
Physico-chemical parameters
The physico-chemical parameters, viz. surface water temperature, salinity, pH, dissolved oxygen, nutrients (nitrite, nitrate, phosphate and silicate), gross primary productivity and chlorophyll-a, of stations 1 and 2 are presented in Tables 1 and 2.
Zooplankton
In the present investigation, a total of 65 species of zooplankton were recorded from the two stations. At station 1, 59 species of zooplankton were recorded. The percentage composition and number of species were: tintinnids (11.87%, 7 species), foraminifers (1.69%, 1 species), copepods (47.46%, 28 species), cladocerans (3.39%, 2 species), chaetognaths (1.69%, 1 species), chordates (1.69%, 1 species), decapods (1.69%, 1 species), molluscs (1.69%, 1 species) and meroplanktonic forms (28.83%, 17 species). The number of species recorded in different months ranged from 18 to 40, with the minimum in September 2016 and the maximum in April 2017. At station 2, a total of 59 species of zooplankton were recorded. The percentage composition and number of species were: tintinnids (8.47%, 5 species), foraminifers (5.08%, 3 species), copepods (44.07%, 26 species), cladocerans (3.39%, 2 species) and chaetognaths (1.69%, 1 species), among other groups. The overall zooplankton density at station 1 ranged between 126,900 and 667,400 nos. m⁻³ (Figure 2); the minimum and maximum densities occurred in September 2016 and May 2017, respectively. The maximum density during May 2017 was contributed mainly by crustacean nauplii (33.10%), followed by copepod nauplii (21.13%), Oithona brevicornis (19.01%) and Acrocalanus gracilis (8.45%). At station 2, the overall density of zooplankton ranged from 89,300 to 935,300 nos. m⁻³ (Figure 2); the minimum and maximum densities occurred in October 2016 and May 2017, respectively.
At station 1, the species richness index for zooplankton varied from 3.33 to 7.13 (Figure 3); the minimum and maximum values were observed in September 2016 and April 2017, respectively. At station 2, the species richness index ranged from 3.64 to 6.29 (Figure 3); the minimum and maximum values were observed in October 2016 and April 2017, respectively.
At station 1, the species diversity index for zooplankton ranged between 0.88 and 1.58 bits ind⁻¹ (Figure 4); the minimum and maximum values were observed in May and April 2017, respectively. At station 2, the species diversity index ranged from 1.16 to 1.51 bits ind⁻¹ (Figure 4); the minimum and maximum values were observed in May 2017 and November 2016, respectively.
Temperature is one of the most important factors controlling the physiological activities of animals. In the present study, the surface water temperature variation was found to be uniform at both stations, with values ranging from 25 to 30 °C. The optimum temperature range required for the growth of seagrass has been reported as 23-32 °C (Short et al., 2016), and the observed values were within this range. The pH values ranged from 7.68 to 8.23; Arumugam et al. (2013) reported similar pH values, in the range of 7.7 to 8.5, in the seagrass meadows of the Gulf of Mannar. Salinity acts as a limiting factor in the distribution of living organisms (Anand et al., 2015). The salinity values of the two stations varied between 31 and 36 ppt; Kannapiran et al. (2008) likewise observed salinity values between 30.6 and 34.5 ppt in the Gulf of Mannar region. Dissolved oxygen is a very important parameter, as it serves as an indicator of the physical, chemical and biological state of the water body. In the present study, the dissolved oxygen values of the two stations varied between 3.13 and 4.69 ml l⁻¹; a maximum value of 5.9 ml l⁻¹ was recorded by Sulochanan et al. (2011) in the seagrass ecosystem of Palk Bay. Nutrients are the major factor controlling plankton growth in aquatic ecosystems; however, the seagrass ecosystem can be affected by persistently higher nutrient levels over longer periods (Sridhar et al., 2008). Nitrogen and phosphorus are the major limiting factors for seagrass growth (Arumugam et al., 2013). Sulochanan et al. (2011) recorded a maximum nitrite level of 0.63 µM in the seagrass beds of the Gulf of Mannar, and Sridhar et al. (2008) observed nitrite values ranging between 0.03 and 2.91 µM in the seagrass ecosystem. The values of nitrate ranged from 0 to 0.03 µM at both stations; Anandakumar and Tajuddin (2013) reported a minimum value of 0.2 µM at selected locations in the Gulf of Mannar region. The concentration of phosphate at both stations ranged from 0.11 to 0.95 µM; such higher concentrations can occur in seagrass ecosystems due to terrestrial run-off and the release of organic phosphorus from the bottom (Kannapiran et al., 2008). Silicate is not an essential nutrient for seagrasses; however, it is required by the associated organisms, especially diatoms. The concentration of silicate at the two stations ranged from 0.08 to 1.98 µM. The values of silicate were higher than those of the other nutrients at both stations, in agreement with Sridhar et al. (2008) and Anandakumar and Tajuddin (2013). In the present study, the gross primary productivity ranged from 0.10 to 1.20 mg C m⁻³ h⁻¹ across the two stations; the highest value, recorded in the seagrass ecosystem, indicates that it is more productive than the areas without seagrass. Prasath et al. (2011) suggested that stable salinity and other physico-chemical conditions could promote plankton production.
Chlorophyll-a values observed during the study period at the two stations varied from 0.01 to 2.11 mg m⁻³. The overall maximum value (2.11 mg m⁻³) was recorded in the seagrass ecosystem. Similar studies were conducted by Anand et al. (2015) and Mahesh et al. (2015), and their results corroborate the present study.
In the present study, a total of 65 species of zooplankton were identified from the two stations, with more species in the seagrass ecosystem than at the control station. Anandakumar and Tajuddin (2013), Pitchaikani and Lipton (2015), and Jeyaraj et al. (2016) documented 72, 49 and 114 species of zooplankton, respectively, in the Gulf of Mannar region. Among the zooplankton, copepods were found to be the dominant group, followed by meroplanktonic forms. Many researchers (Fernandes and Ramaiah, 2009; Prasath et al., 2011; Mahesh et al., 2015) have documented copepods as the dominant group in the coastal waters of India. Similarly, calanoid copepods and crab larvae were registered as the dominant taxa in the Gulf of Mannar by Pitchaikani and Lipton (2015).
The comparison of zooplankton density between the seagrass ecosystem and the control station revealed that the former was higher than the latter. Jeyaraj et al. (2016) observed a maximum density of 11,733 nos. m⁻³ in the Gulf of Mannar region. The species richness values of zooplankton at the two stations varied from 2.97 to 7.13, with the maximum recorded in the seagrass ecosystem, followed by the control station. The species diversity (H') of zooplankton at the two stations varied from 0.88 to 1.58 bits ind⁻¹. Prasath et al. (2011) reported H' values ranging from 0 to 2.88 bits ind⁻¹ along the east coast of India, and Pitchaikani and Lipton (2015) recorded species diversity in the range of 3.29 to 3.77 bits ind⁻¹ along the Tiruchendur coast of the Gulf of Mannar region.
From the present investigation, it can be concluded that the seagrass ecosystem harbours a higher diversity of zooplankton than the open waters, providing suitable nursery grounds for the juveniles of finfishes and shellfishes. | 2019-09-10T20:24:05.593Z | 2019-07-20T00:00:00.000 | {
"year": 2019,
"sha1": "d55532d8b98a9dca61bdbb4fb4ba4896fad9aa04",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/8-7-2019/S.%20Deepika,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d1c89abd7246b92ec9640b9c3c9ad4e1c73df6a8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
221760065 | pes2o/s2orc | v3-fos-license | Visegrad Group and Relations with Russia
In this article, 'Central European countries' refers to the Visegrad Group countries (V4): Hungary, the Czech Republic, Poland, and Slovakia. The development of the Visegrad Group, aimed at integration into the Euro-Atlantic structures, fulfilled its promise; nevertheless, membership in Western structures does not necessarily mean the loss of Russian influence in the region of Central Europe. On the contrary, the region's historically developed connections to Russia have remained to some extent even after the political transitions in the individual countries. Such connections shape a foreign policy discourse full of questions and misunderstandings about the political attitudes of the Visegrad members towards Russia, as well as some contradictory stances of the V4 countries among themselves and with respect to Brussels. The EU's sanctions policy towards Russia is having a direct, counterproductive effect in the Visegrad region, resulting in undermined relations and weakened coherence inside the EU and in the emergence of anti-Western, pro-Russian political parties, which creates space for Russian foreign policy to gain more influence in the region. This article analyzes the background of this discourse and some of the reasons behind the pro-Russian sentiment and the discrepancies and non-coherence of EU members' opinions on Russia. At the same time, awareness of the outcomes of this article can be relevant for analyzing how to avoid a deepening of the conflictual foreign policy between the EU and Russia, or between the Visegrad Group and Russia. The research builds on both primary and secondary sources, related mainly to the evolution of relations between the two sides in specific areas. This historical perspective forms the basis of the analysis and is then placed into contemporary discourse to answer the question: what are the reasons for the non-coherence of the EU and the Visegrad Group in their policy towards Russia? To this end, the analysis proceeds in chronological perspective, using mixed methods and exploring official documents, scholarly articles published on the topic, and public opinion polls.
Historical Perspective on Russia-Visegrad Relations
To understand the nature of relations between Russia and the Visegrad countries, it is necessary to look at the historical background of their relationship. The common experience of the Visegrad Group countries, namely their communist past and their existence under the influence of the USSR from the end of the Second World War until late 1989, left a huge "heritage" of interconnections with the post-Soviet region, including the leading successor of the USSR, the Russian Federation. To overcome the negatives and to adapt more smoothly to a new, democratic political system in association with Western Europe [Marušiak 2013b: 31], Czechoslovakia, Hungary, and Poland decided in 1991 to establish the Visegrad Group; its formation, however, was "particularly influenced by Austria's lack of interest in developing of a partnership with the democratizing post-Communist states of Central Europe" [Cabada 2018: 170]. This coalition, known since the split of Czechoslovakia as the V4, was important not only for the transition of the political systems in the member countries; its foundation was also intended to facilitate integration into Euro-Atlantic structures such as NATO and the European Union 1 . The restoration of a democratic system, the building of economic and political relations with the West, and the diversification of energy sources away from Russia can be seen as a political "restart" aimed at beginning a new chapter without Russian influence; nevertheless, matters were not so straightforward. These processes were not equally smooth in every V4 country, mainly due to the differing ideas of political representatives on further development, particularly in regard to foreign policy and relations with Russia. This applies primarily to the case of Slovakia and the ideas of its prime ministers until 1998. During the short period of the federative state of Czechs and Slovaks (1990-1992), the Prime Minister of Slovakia, Ján Čarnogurský (1991-1992), promoted an idea of political cooperation with "Slavic Europe" and the Russian Federation [Marušiak 2015: 32]. Such an identity-oriented idea of political cooperation with Russia was later partly followed by Čarnogurský's successor Vladimír Mečiar, the Slovak head of government until 1998. Although EU and NATO accession was among the key points of Slovakia's foreign policy during Mečiar's rule [Marušiak 2013a: 45], the prime minister was not eager to lead Slovakia out of its (mainly economic) dependence on Russia; furthermore, he was drawn to non-transparent privatization and undemocratic practices such as strong state control of the mass media and the use of power structures for his political aims, authoritarian tendencies prevalent in the post-communist space under Russian influence [Cameron, Orenstein 2013: 2]. Thus, thanks to Mečiar's government, Slovakia earned the status of a "deviant country in Central Europe" [Szomolányi 2004: 149], and his foreign policy orientation was partly responsible for shifting Slovakia closer to Russia [Dangerfield 2012: 961].
The Soviet and Russian Foreign Policy towards Visegrad Group Countries after 1989
In fact, the "development" in Slovakia until the end of Mečiar's government in 1998 was in part a success of the foreign policy of the USSR and, later, of Russia as its successor state after the fall of the Soviet Union in 1991. This is partially connected to the "Kvitsinsky doctrine" 2 of the early 1990s, which targeted the foreign and security policy of the former Soviet satellite states of Central Europe in order to prevent their membership in the Western security alliance after the dissolution of the Warsaw Treaty, so that they would form a buffer zone between NATO and the USSR (later Russia) [Duleba 1998: 24].
The Visegrad countries (Czechoslovakia, Hungary, and Poland) signed new treaties with Russia as the successor of the USSR in 1992, but they refused to include such security provisions in the new bilateral treaties, preventing the implementation of the "Kvitsinsky doctrine" in practice. Nevertheless, the new political situation after the split of Czechoslovakia in 1993 allowed Russian foreign policy to follow similar patterns, mainly in Slovakia. More precisely, these patterns refer to the "Kozyrev doctrine" 3 adopted by Russia in 1992-1993, which addressed the same security issues of the Central European countries with the intention of preventing Russia's interests from being expelled from the region; in contrast to the Kvitsinsky doctrine, the Kozyrev doctrine sought to avoid the creation of a buffer zone in Central Europe that would isolate Russia from the West.
The new treaty that Slovakia signed with Russia lacked coordination with its Visegrad partners, which compelled Slovakia to "accept the Russian ideas on the way of building up the European security architecture" and made it more difficult for Slovakia to accede to Western security structures [Duleba 1998: 30-31].
Moreover, Russia's foreign policy also had an economic instrument for influencing developments in Central Europe. This concerned unresolved economic issues such as the Soviet financial debt to the V4 countries stemming from cooperation within the Council for Mutual Economic Assistance (COMECON), which was transferred to Russia after the dissolution of the USSR. The debt amounted to around 3.5 billion USD to the Czech Republic, 1.7 billion USD to Hungary, and 1.6 billion USD to Slovakia, and Russia offered to pay it off through deliveries of military components [Duleba 1998: 92]. The Czech Republic refused such compensation of the Soviet debt, keeping in mind its future in NATO, while Hungary agreed, its dual armament supplies from both the West and Russia being acceptable for NATO membership. By contrast, Slovakia under Mečiar's government accepted Russia's offer to repay its debt through military deliveries, albeit under very obscure circumstances, as Duleba states: "The debt is paying off by the Russian government to Russian business companies in the Slovak Republic" [Duleba 1998: 93]. Nevertheless, the remaining Soviet-era debts owed to the Visegrad countries were repaid by Russia by the end of 2013 4 ; as outlined above, part of the debt to Slovakia was repaid in various commodities, including military supplies and upgrades of the MiG-29 jet fighters 5 .
The government of Vladimír Mečiar, together with its acceptance of the Kozyrev doctrine, led to Slovakia not joining NATO in 1999, in contrast to its Visegrad partners, which had been more cautious in signing bilateral agreements with Russia. Foreign policy under the Kozyrev doctrine clearly illustrates Russia's security concerns and its opposition to the possibility of NATO enlargement in Central Europe [Racz 2014: 65], and it also indicates the patterns of Russia's European policy [Póti 2006: 117].
Visegrad Towards the Joining of the EU and NATO
The Slovak parliamentary elections of 1998 brought a new government composed of democratically oriented political parties [Szomolányi 2000: 77] more inclined towards EU and NATO membership; hence its foreign policy was oriented primarily towards the West. Slovakia's aspiration for membership in the Euro-Atlantic structures was also supported by its V4 partners (mainly Poland), whose own membership was at that time just a question of a formal act and whose support of Slovakia expressed the wish to revitalize Visegrad cooperation [Marušiak 2015: 33]. However, the 1999 Washington Summit granted NATO membership only to the rest of the Visegrad Group; Slovakia obtained only aspirant status, as it had had very little time to change its politics after Mečiar's government.
After the NATO enlargement to the Czech Republic, Hungary and Poland in 1999, Russian foreign policy experts understood that this process was irreversible and that a new cooperation strategy in the region of Central Europe was needed. Although the quasi-member status created by the 1997 NATO-Russia Founding Act on Mutual Relations, Cooperation and Security renewed Russia's place in the European security constellation, it did not grant Russia any veto power [Blank 1998: 118], as demonstrated by the 1999 Kosovo crisis. Furthermore, even Russia's veto power in the UN Security Council did not prevent NATO's military action in Kosovo, which meant Russia's deeper isolation from the development of European security. NATO's eastward expansion brought direct opposition from the Russian side, as it existentially concerned Russia's security interests. This is grounded in the Basic Provisions of the Military Doctrine of the Russian Federation, approved by Boris Yeltsin in 1993, which lists among the "key external military dangers to Russia, the expansion of military blocks and alliances" [Fedorov 2013: 319].
The period until 2000 was marked by a struggle for foreign policy dominance between Russia and the Euro-Atlantic structures, each trying to establish its ideas on security policy in the European post-Soviet region [Gerasymchuk 2014: 44]. The success of NATO enlargement indicates the loss of Russian dominance in the region; accordingly, relations between Russia and the V4 after 2000 can be understood more or less only in pragmatic, economic terms, with Russia intent on attaining more influence on the energy market of the Visegrad countries.
Russia and Visegrad's Energy Market
Since it was by then clear that Slovakia would join its Visegrad partners in NATO in the next enlargement in 2004, Russia had, in this perspective, lost even more of its leverage on security issues and military exports, particularly given that the V4 armed forces would sooner or later re-equip with Western, NATO-compatible systems. Therefore, enhancing its influence on the energy market and the economic sector of Central Europe remained the most vital objective for Russia in order to maintain its presence in the Central European region.
Thanks to the Soviet development of energy infrastructure in the Central European countries during the communist period, it was not a very hard task for Russia to retain influence on the V4 countries' energy markets even after the post-communist political transition. The existence of the "Yamal" gas pipeline in Poland and the "Brotherhood" gas pipeline in the Czech Republic and Slovakia, both originating in Russia, is crucial for gas deliveries not only to the Central European region but to other European countries as well. Together with the "Druzhba" (Friendship) oil pipeline crossing all the Visegrad countries, this constitutes a potent tool of Russian foreign policy in the region, with effects on the whole EU; thus, energy security is a major theme of the Visegrad Group [Fawn 2014: 12]. Although each of the V4 countries depends on these deliveries to a different extent 6 , the existence of such energy interconnections creates space for Russian foreign policy to influence the countries of the region through bargaining by Russian energy companies.
This was demonstrated, for instance, during the 2009 gas crisis, when Russia stopped deliveries of natural gas to Ukraine [Mišík 2012: 69]. The disruption meant that no Russian gas was delivered to Europe via Ukraine for 11 days, as a result of a disagreement between Russia and Ukraine over gas prices. Among the Visegrad countries, the crisis hurt the Slovak economy most: it lost around 1 billion EUR as a consequence of limited or halted factory production [Tarnawski 2015: 132], while the Slovak Prime Minister, Robert Fico, blamed the Ukrainian side for the situation and called for political consequences regarding Slovak support of Ukraine's Euro-Atlantic ambitions [Duleba 2009: 5]. This allowed Russian companies to dictate the conditions of gas deliveries and thus to shape and influence politics in Central Europe, with consequences, to some extent, for the whole EU.
Nevertheless, the gas crisis forced the V4 countries to look for ways of reshaping their energy security policy and to develop alternative sources of energy deliveries, less dependent on Russia. Despite the establishment of various policies for this purpose, such as the Energy Infrastructure Priorities for 2020 and Beyond or the Central and South-Eastern Europe Energy Connectivity (CESEC), and the development of several projects with varying degrees of success, such as the Nabucco pipeline, the Eastring pipeline, the Trans Anatolian Gas Pipeline (TANAP) and the Trans Adriatic Pipeline (TAP), this has not allowed the Visegrad Group, or the EU as a whole, to become sufficiently independent of Russian energy sources. While crude oil deliveries from Russia have decreased, though not very significantly 7 , the effectiveness of these policies remains doubtful, as some Visegrad countries have become even more dependent on Russian natural gas 8 . The period after the gas crisis and the EU's extensive search for alternative sources of energy also drew attention away from a significant issue in relations between the EU/Visegrad and Russia: the development of the EU's Eastern Partnership, described by Russia as an unfriendly gesture [Shishelina 2015: 72].
7 For comparison, match these numbers with those on the previous page. Imports of crude oil from Russia according to EUROSTAT 2009: Slovakia - 81.9%; Hungary - 78.0%; Poland - 74.6%; Czech Republic - 49.8% // EUROSTAT. URL: https://ec.europa.eu/eurostat/cache/infographs/energy/bloc-2c.html (accessed: 25.11.2019).
8 Natural gas imports from Russia according to EUROSTAT 2009: Slovakia - 99.3%; Hungary - 82.7%; Poland - 82.0%; Czech Republic - 65.4% // EUROSTAT. URL: https://ec.europa.eu/eurostat/cache/infographs/energy/bloc-2c.html (accessed: 25.11.2019).
Russian Political Discourse in Visegrad Countries after the Ukrainian Crisis
The crisis in EU-Russia relations after Ukrainian President Yanukovych refused to sign the Association Agreement with the EU in 2013 raised questions about the level of the EU's cohesion. The representatives of some EU member states took contradictory stances on the Ukrainian crisis and showed different levels of support for the anti-Russian sanctions; the same applies to the Visegrad Group, which was, among the EU's regional factions, the most skeptical of the political solution to the crisis adopted by the EU. Except for Poland, the V4 members were reluctant to agree with Brussels on policy towards Russia, which led to the polarization of society [Stojarová 2018: 42] and to the misuse of political campaigns by various domestic political parties and movements to spread pro-Russian, anti-Western ideologies [Gressel 2017: 3], with the aim of winning the sympathies of potential voters and legitimizing their actions [Sydoruk, Tyshchenko 2016: 25].
Finally, the governments of the Visegrad countries have agreed on sanctions since 2014; however, each round of new sanctions has brought more objections from the representatives of the Czech Republic, Hungary, and Slovakia [Kucharczyk, Mesežnikov 2015: 12]. For instance, the Czech President, Miloš Zeman, stated that "sanctions are an expression of helplessness" 9 , the Hungarian Prime Minister Viktor Orbán described the sanctions in 2014 with the words "In politics, this is called shooting oneself in the foot" 10 , while the Slovak Prime Minister (until 2018), Róbert Fico, repeatedly called for an end to the sanctions, describing them, for example, as "nonsensical and harmful" 11 . The main reasons behind such statements by three of the Visegrad countries certainly lie in economic and energy issues, and they illustrate that some Visegrad representatives are keen on pragmatic and efficient cooperation with Russia [Dangerfield 2012: 971]. After all, in the long-term historical perspective of cooperation, such opinions of Central European leaders on anti-Russian sanctions should be viewed as a natural and predictable outcome [Dangerfield 2015: 3]. Nonetheless, the next section illustrates another interesting phenomenon, related to civil society, that affects the distinct affiliation of the Central European countries towards Russia, an aspect that cannot be overlooked by the politicians of particular governments and that thus shapes their foreign policies as well.
9 The V4 Will Never Agree on Russia // EURACTIV. URL: https://www.euractiv.com/section/central-europe/news/the-v4-will-never-agree-on-russia/ (accessed: 02.12.2019).
10 Hungary PM Orban condemns EU sanctions on Russia // BBC News. URL: https://www.bbc.com/news/world-europe-28801353 (accessed: 02.12.2019).
11 Fico: If USA Scraps Russia Sanctions, EU Might Pluck Up Courage // NewsNow, The News Agency of the Slovak Republic. URL: https://newsnow.tasr.sk/foreign/fico-if-usa-scraps-russia-sanctions-eu-might-pluck-up-courage/ (accessed: 02.12.2019).
Public Opinion on Russia among the Visegrad Member States
The most comprehensive public opinion poll to date, carried out by the Visegrad countries to mark twenty-five years of cooperation 12 , helps us to understand some foreign policy trends of the Central European countries towards Russia. Supplemented with the poll "Trends of Visegrad Foreign Policy" 13 , conducted in 2015, the poll "25 Years of the V4 as Seen by the Public" questioned citizens of the four countries not only about their awareness of domestic and inter-Visegrad issues but also about foreign policy issues of the V4 with respect to organizations like NATO and the EU and to other partners and allies outside the Visegrad and Euro-Atlantic structures. However, for the purposes of this analysis, we use only the data related to Russia.
12 25 Years of the V4 as Seen by the Public - Project coordinated by the Institute for Public Affairs in Bratislava; it analyzed data from a representative sample of the adult population of the four countries, gathered by the following research agencies: STEM (Czech Republic), Tárki (Hungary), Stratega Market Research (Poland) and Focus (Slovakia). URL: http://www.visegradgroup.eu/documents/essays-articles/25-years-of-the-v4-as (accessed: 04.12.2019).
13 Trends of Visegrad Foreign Policy - Project supported by the Konrad Adenauer Foundation, the Ministry of Foreign Affairs of the Czech Republic and the Open Society Foundations. It was carried out in cooperation with the Center for EU Enlargement Studies (CENS, Hungary), the Central European Policy Institute (CEPI, Slovakia) and the Institute of Public Affairs (IPA, Poland). Via questionnaire, the project approached civil servants, experts, researchers, journalists, business and political representatives from the Visegrad Group countries. URL: https://trendyv4.amo.cz/ (accessed: 04.12.2019).
According to the research, citizens of Slovakia (the Visegrad country most strongly integrated with the EU in the institutional dimension [Pakulski 2016: 80]) expressed the highest level of trust towards Russia among the Visegrad countries. Answering the question "To what extent can we trust and rely on the following nations?" (responses "definitely trust + rather trust" and "rather distrust + definitely distrust" merged, without the neutral responses "neither trust nor distrust" and "don't know", in %, expressing trust towards the V4 countries plus Austria, Croatia, England, France, Germany, Russia, Slovenia, and Ukraine [Gyárfášová, Mesežnikov 2016: 20]), as much as 31% of Slovaks expressed trust towards Russia, placing it 8th in their rankings, with the Czech Republic in first place at 78%. What is more interesting about these results is that Russia earned more trust than the V4 member Hungary (30%), more than one of the initiators of NATO, the USA (27%), and more than Slovakia's neighbour Ukraine (17%). The evaluation of trust towards Russia by the other V4 countries was quite different: in Poland, Russia took last place with only 9% of trust, behind Ukraine (29%), whereas in Hungary (16%) and the Czech Republic (17%) Russia achieved 11th place, in both cases ahead of Ukraine (14% and 13%, respectively).
With respect to the poll "Trends of Visegrad Foreign Policy", conducted among civil servants, political representatives and other experts, the findings are more remarkable. For example, on the question "Which countries are the 5 most important partners for your country's foreign policy?" [Dostal 2015: 22], Russia achieved sixth place for the Visegrad Group in general with 39.1%, while for Hungary itself it occupied a significant third place with 73.3%, behind Germany and the USA. When respondents were asked to evaluate the importance of the countries on a list (the V4 plus Austria, China, France, Germany, Israel, Lithuania, Romania, Russia, Serbia, Sweden, Turkey, Ukraine, and the United Kingdom) for particular Visegrad members, the V4 in general ranked Russia 4th in importance with 67.1% on average, with the greatest significance in Hungary (81.4%). However, when asked to evaluate the quality of the V4 countries' relations with the countries on the list, Russia received the worst mark on a scale of 1 to 5 (1 for very good and 5 for very bad), 3.3 on average, with the best result in Slovakia (2.7).
Keeping in mind the Visegrad Group's energy dependence on Russia, together with the possible eagerness of the V4 representatives (except Poland) to cooperate with Russia in a pragmatic, efficient way, as indicated in the previous paragraph, it is no surprise that energy security is expected to be the number-one issue for their countries over the next five years. This is demonstrated in the poll, where energy security achieved first place (on the question "How important will the following issue be for your country's foreign policy in the next 5 years?") with 86.3% for the Visegrad Group in general, reaching its highest value in Poland (90.4%) [Dostal 2015: 28].
Conclusion
There is no common political stance or integrated foreign policy, shared unanimously with Brussels, that reflects the substantive relations of the Visegrad Group with Russia in the way that, for instance, the EU's policy of anti-Russian sanctions does. The analysis has shown that individual Visegrad countries hold specific relations and opinions that differ from the official EU-Russia discourse. These are built in most cases on pragmatic political considerations and developed through historical interconnections with contemporary effects, mainly in economic terms. The analysis also showed positive tendencies among the citizens and representatives of the Visegrad countries calling for cooperation with Russia, which can be assumed to be a source of the EU's non-coherence; however, the lack of consensus on Russia, the absence of a common V4 foreign policy, and the prioritization of Brussels' decisions above national foreign policies in the individual Visegrad countries make it difficult to realize the full potential of this cooperation.
With the exception of Poland, the only Visegrad country which probably (and most certainly) has not overcome its historical animosities with Russia, the polls reveal the reasons (and potential) behind the "struggle" between domestic political parties of the particular states: a leftist, nationalistic, anti-Western and conservative political spectrum that manifests more sympathy towards Russia. These parties, stimulated by dissatisfaction with the foreign policy of Brussels and nourished on anti-migration and pro-Russian discourse, stand against a centre-right, (neo)liberal, West-oriented political spectrum that is more or less anti-Russian. Therefore, the context and course of the foreign policies of both the West and the Russian Federation will be instrumental in shaping the political discourse in Central Europe, which in turn will influence public opinion, the campaigns of political parties, potential voters and, last but not least, governments. Positive stances on Russia by the Hungarian Prime Minister Viktor Orbán, the pro-Russian sentiment of the Czech President Miloš Zeman, and the negative attitudes towards anti-Russian sanctions of leading Slovak political parties such as Direction-Social Democracy (SMER-SD) and the Slovak National Party (SNS), together with the rising popularity of a populist, anti-Western party like "Kotleba-People's Party Our Slovakia" (Kotleba-ĽSNS), could lead Poland to exit cooperation within the Visegrad; at the same time, the course of national, anti-Western and pro-Russian politics, or of anti-Russian, pro-Western campaigning, will have a crucial effect on the future development of relations between the Visegrad and the EU, the Visegrad and Russia, and the EU and Russia. Thus, preventing a conflictual EU foreign policy discourse with Russia would be essential to alleviate tensions inside the EU and to achieve more pragmatic relations with Russia, enabling the prospects for a win-win scenario.
Nevertheless, Russian foreign policy still has its specific instruments to deploy in Central Europe and, more importantly, in the region of the EU's Eastern Partnership (EaP), which is not yet fully integrated into the Euro-Atlantic structures. In shaping its foreign policy in the EaP region, Russia can overcome its past faults; however, its space for manoeuvring and cooperation is shrinking as its isolation deepens in the context of sanctions. It thus depends on the calculations of Russian foreign policy experts how the country will use its inventory, and whether there are other possibilities to prevent the unwanted scenario: Russia's loss of influence in the region and even deeper isolation from the West. | 2020-06-25T09:07:14.584Z | 2020-12-15T00:00:00.000 | {
"year": 2020,
"sha1": "f0c549c9879ab08f9a795d140861e07d66a910d7",
"oa_license": "CCBY",
"oa_url": "http://journals.rudn.ru/international-relations/article/download/23975/18330",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b28dfdc5b97c91ec04386a8207ed19112cc8f15f",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
235750875 | pes2o/s2orc | v3-fos-license | Safety and efficacy of left bundle branch pacing in comparison with conventional right ventricular pacing
Abstract Background: Right ventricular pacing (RVP) has been widely accepted as the traditional pacing strategy, but long-term RVP has a detrimental impact on ventricular synchrony. Left bundle branch pacing (LBBP), which evolved from His bundle pacing, can maintain ventricular synchrony while overcoming the latter's clinical deficiencies, such as difficult lead implantation, His bundle damage, and high and unstable thresholds. This analysis aimed to appraise the clinical safety and efficacy of LBBP. Methods: The Medline, PubMed, Embase, and Cochrane Library databases were searched from inception to November 2020 for studies comparing LBBP and RVP. Results: Seven trials with 451 patients (221 who underwent LBBP and 230 who underwent RVP) were included in the analysis. Pooled analyses verified that the paced QRS duration (QRSd) and left ventricular mechanical synchronization parameters during LBBP capture were similar to those in the native-conduction mode (P > .7), but LBBP showed a shorter QRSd (weighted mean difference [WMD]: −33.32; 95% confidence interval [CI], −40.44 to −26.19, P < .001) and better left ventricular mechanical synchrony (standardized mean difference: −1.5; 95% CI: −1.85 to −1.14, P < .001) compared with RVP. No significant differences in pacing threshold (WMD: 0.01; 95% CI: −0.08 to 0.09, P < .001) or R wave amplitude (WMD: 0.04; 95% CI: −1.12 to 1.19, P = .95) were noted between LBBP and RVP. The ventricular impedance of LBBP was higher than that of RVP at implantation (WMD: 19.34; 95% CI: 3.13–35.56, P = .02), with no difference between the 2 groups after follow-up (WMD: 11.78; 95% CI: −24.48 to 48.04, P = .52), and the pacing threshold of LBBP remained stable at follow-up (WMD: 0.08; 95% CI: −0.09 to 0.25, P = .36). However, there was no statistical difference in ejection fraction between the 2 groups (WMD: 1.41; 95% CI: −1.72 to 4.54, P = .38). Conclusions: This is the first meta-analysis to verify the safety and efficacy of LBBP. LBBP markedly preserves ventricular electrical and mechanical synchrony compared with RVP and shows stable and excellent pacing parameters. However, no significant difference in ejection fraction between LBBP and RVP was observed during short-term follow-up.
Introduction
Pacemaker therapy has been used for more than half a century as a treatment for patients with bradycardia arrhythmias. Conventional right ventricular pacing (RVP), including right ventricular apical pacing (RVAP), right ventricular septal pacing (RVSP), and right ventricular outflow tract pacing, is widely accepted, having the advantages of convenient implantation, good pacing parameters, and little lead dislodgement. However, RVP causes cardiac electromechanical asynchrony, which is related to an increased risk of hospitalization for heart failure and atrial fibrillation. [1][2][3] Cardiac resynchronization therapy (CRT) can shorten the left-right ventricular delay and improve ventricular systolic function, and is especially suitable for patients with heart failure with reduced ejection fraction combined with complete left bundle branch block. [4] However, 30% to 40% of patients implanted with biventricular pacing derive no clinical benefit or do not respond to CRT [5] ; moreover, there is no significant improvement in cardiac function in patients with right bundle branch block, [6] and cardiac function may even deteriorate in patients with narrow QRS duration. [7] His bundle pacing (HBP) ensures rapid activation of the left and right ventricles and synchronized contraction by pacing the His-Purkinje system directly, emerging as a viable alternative to CRT with physiological restoration of electrical synchrony. [8] However, HBP still has some limitations, including difficult implantation, high capture thresholds, and lower success rates, particularly in patients with bundle branch block (BBB) or infranodal block. [9,10] Thus, alternative pacing sites have been sought. Left bundle branch pacing (LBBP) is defined as capture of the left bundle trunk or its proximal fascicles, usually with septal myocardium capture, [11] and overcomes the above-mentioned clinical deficiencies of HBP. Previous studies reported that procedure time was significantly longer for LBBP than for RVP, [12,13] but a recent study revealed that a surgical method in which the ventricular RAO fluoroscopic image is divided into 9 parts ("nine partition method"), applied without the guidance of intracardiac electrograms, could save operation time. [14] To date, however, no study has systematically summarized and comprehensively evaluated the effects of LBBP. Therefore, this study represents the first systematic review and meta-analysis of the safety and efficacy of LBBP in comparison with RVP.
Search strategy
A comprehensive search of the Medline, PubMed, Embase, and Cochrane Library databases, from inception up to November 2020, was performed independently by 2 reviewers. Only articles in English were included. The search strategy combined relevant keywords, including "left bundle branch pacing". All analyses were based on previously published studies; thus, no ethical approval or patient consent was required for this study.
Inclusion and exclusion criteria
Two investigators screened and identified studies that fulfilled the following inclusion criteria: full-text controlled studies comparing LBBP with RVP; an RVP group consisting of RVSP, RVAP, or right ventricular outflow tract pacing; randomized controlled trials, case-control, cohort, or observational studies; and studies providing reliable data on QRS duration (QRSd), mechanical synchronization parameters, pacing parameters, left ventricular ejection fraction (LVEF), and complications in both groups. The exclusion criteria were as follows: studies that did not offer sufficient data to analyze procedural efficacy and safety; and animal studies, conference abstracts, case reports, review articles, editorials, or non-English language articles.
Data extraction
Data were extracted using a standardized protocol and reporting forms, including the name of the first author, year of publication, country of origin, sample size, baseline characteristics (age, sex, LVEF, QRSd), selection of patients, and pacing parameters. Where studies reported only quantiles, the sample mean and standard deviation were estimated from the commonly reported quantiles. [15] This data extraction process was performed independently by 2 investigators, and discrepancies between them were resolved by a third reviewer.
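Since the exact estimators of reference [15] are not reproduced here, the snippet below only sketches one widely used large-sample approximation for recovering the mean and SD from a reported median and interquartile range; the formulas actually applied by the authors may differ, and the example numbers are hypothetical.

```python
def mean_sd_from_quartiles(q1, median, q3):
    """Approximate mean and SD from the median and IQR.
    Large-sample approximations assuming roughly normal data:
    mean ~ (q1 + median + q3) / 3, and IQR of a normal ~ 1.35 * SD."""
    mean = (q1 + median + q3) / 3.0
    sd = (q3 - q1) / 1.35
    return mean, sd

# Hypothetical study reporting LVEF as median 65% (IQR 58-71):
m, s = mean_sd_from_quartiles(58, 65, 71)
print(f"mean ~= {m:.1f}, SD ~= {s:.1f}")
```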
Quality assessment
The study quality was evaluated by 2 investigators using the Newcastle-Ottawa Scale (NOS) for nonrandomized studies. The NOS uses a star system (0-9) to rate studies; a study with an NOS score ≥7 was judged to be of good quality. [16]
Statistical analysis
Dichotomous variables and outcome endpoints were reported as risk ratios (RR) with 95% confidence intervals (CIs). Continuous variables were analyzed using weighted mean differences (WMD) or standardized mean differences (SMD). Between-study heterogeneity was considered substantial when I² > 50%, with P < .05 deemed statistically significant. In cases of heterogeneity (I² > 50%), random-effects models were used; otherwise (I² ≤ 50%), fixed-effects models were used. In cases of statistical heterogeneity, subgroup or sensitivity analyses were performed; sensitivity analysis was used to check the consistency of the overall effect estimate. When the pooled analysis still yielded significant heterogeneity, descriptive analysis was used. All statistical tests were 2-tailed, with statistical significance set at P < .05. The presence of publication bias was to be evaluated using funnel plots. The statistical analysis was performed using RevMan 5.4 software.
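As a concrete illustration of the pooling described above, the sketch below implements inverse-variance fixed-effect pooling together with Cochran's Q and the I² statistic; RevMan performs the equivalent computation internally, and for the random-effects case a DerSimonian-Laird τ² would additionally be folded into the weights. The input numbers are placeholders, not data from the included trials.

```python
import numpy as np

def pool_fixed_effect(effects, ses):
    """Inverse-variance fixed-effect meta-analysis.
    effects: per-study effect sizes (e.g., WMDs); ses: their standard errors.
    Returns the pooled effect, its 95% CI, and I^2 (%)."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (effects - pooled) ** 2)       # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, i2

# Placeholder per-study QRSd differences (ms) and standard errors:
pooled, ci, i2 = pool_fixed_effect([-30.0, -35.0, -33.0], [4.0, 5.0, 3.5])
print(f"WMD = {pooled:.1f} ms, 95% CI ({ci[0]:.1f}, {ci[1]:.1f}), I^2 = {i2:.0f}%")
```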
Study and data selection
Our search strategy yielded 177 potentially relevant articles (21 from PubMed, 25 from EMBASE, 91 from the Cochrane Library, and 40 from Medline). The results of the search and selection process are illustrated in Figure 1. After the exclusion of 50 duplicate articles, 92 articles underwent title and abstract review. Of these, 5 studies were excluded because they were conducted in animals or reported only as conference material, leaving a total of 30 articles for full-text reading. Next, 23 studies were excluded for the following reasons: 6 were uncontrolled studies, 8 lacked study endpoints, and 7 reported duplicate data. One trial by Hou et al [17] was then excluded because RVSP acted only as backup pacing in the HBP group, and another trial by Li et al [18] was excluded because patients with failed LBBP implantation received RVSP. No additional articles were added through manual search. Thus, 7 articles were finally selected for this meta-analysis. [12,13,19-23]
Study characteristics and quality assessment of included studies
Baseline and procedural characteristics of the included studies are shown in Table 1. A total of 451 patients were enrolled in these trials (221 in the LBBP group and 230 in the RVP group). The mean ages of the study participants ranged from 61.64 ± 5.40 to 73.6 ± 8.9 years, and the mean follow-up duration ranged from 0 to 6 months. In this meta-analysis, 2 studies [12,19] included RVSP in the RVP group, only 1 study [21] included RVAP, and the remaining studies [13,20,22,23] included RVAP or RVSP. Only 1 study [19] selected patients with sick sinus syndrome and narrow QRSd; the rest [12,13,20-23] included patients with sick sinus syndrome or atrioventricular block. The mean success rate of LBBP in the included studies was 94.0%, and the average probability of recording an LBB potential was 64.7%. Six of the seven were prospective studies, [12,13,20-23] and 1 was an observational study. [19] It is worth noting that left ventricular (LV) mechanical synchrony was measured in different ways: in the study by Cai et al, [19] it was measured by SD-Tmsv-16; in the study by Das et al, [21] by standard pulsed-wave Doppler echocardiography as the interval between the onset of the QRS and the onset of aortic and pulmonary ejection; and in the study by Sun et al, [23] by the standard deviation of 18-segment systolic times to peak 2-D strain.
The Newcastle-Ottawa Scale (NOS) scores of the included studies are given in Table 2.
LV mechanical synchrony
The baseline LV mechanical synchrony, measured in different ways, was summarized from 3 studies [19,21,23]; heterogeneity was low (I² = 45%), so a fixed-effect model was used and the continuous variables were analyzed using SMD. In the LBBP group, the LV mechanical synchronization parameter in LBBP capture mode was similar to that in the native-conduction mode (WMD: −0.01; 95% CI, −0.33 to 0.31, P = .95; Fig. 4A). However, the LV mechanical synchronization parameter in LBBP capture mode was superior to that of the RVP group (SMD: −1.5; 95% CI: −1.85 to −1.14, P < .001; Fig. 4B), again with low statistical heterogeneity (I² = 41%, fixed-effect model). The results of the sensitivity analysis were not changed by removing any individual study from the analysis.
LVEF assessment
Baseline LVEF was reported in most of the included studies, whereas postoperative LVEF was assessed in only 3 studies. [19,21,23] LVEF in the LBBP group was similar to that in the RVP group during short-term follow-up under a random-effects model (WMD: 1.41; 95% CI: −1.72 to 4.54, I² = 77%, P = .38; Fig. 5).
Publication bias
We intended to investigate potential publication bias via a funnel plot. However, as only up to 7 studies were included in our analysis, the number was insufficient to reject the assumption of no funnel plot asymmetry; thus, we did not construct a funnel plot. [24,25]
Discussion
This study represents the first systematic review and meta-analysis comparing LBBP and RVP. The main findings were as follows: the paced QRSd in LBBP capture showed no significant difference from the native-conduction mode, whereas it was markedly shorter than the QRSd induced by RVP; regarding QRSd and Stim-LVAT, there were no statistically significant differences between the potential-positive and potential-negative subgroups; the LV mechanical synchronization parameter of LBBP capture was similar to that of the native-conduction mode but superior to that of the RVP group; neither LBBP nor RVP capture mode produced a significant change in ejection fraction during short-term follow-up; LBBP showed a stable low pacing threshold and high R wave amplitude, with no significant difference compared to RVP; LBBP showed a higher ventricular impedance at implantation than the RVP group, but no difference from the RVP group at short-term follow-up; and complications of LBBP were few and similar to those of RVP. HBP, as a physiological pacing modality, utilizes the intrinsic His bundle-Purkinje conduction system, resulting in synchronized ventricular contraction, whereas LBBP can produce true conduction system pacing by bypassing pathological or disease-vulnerable regions of the conduction system. [26] LV synchrony with HBP has been demonstrated, and some studies have even suggested that HBP may serve as first-line treatment for patients with heart failure combined with LV asynchrony. [8,27,28] Meanwhile, recent research has shown that LV synchrony in the LBBP group is similar to that in the HBP group, [17,29,30] and LBBP has also been shown to be effective in the treatment of heart failure combined with bundle branch block. [31] However, LBBP is easier to perform than HBP because the fascicles of the LBB spread widely in the subendocardium of the left side of the septum, whereas the His bundle is anatomically confined. [32] Moreover, LBBP exhibits stable parameters, with higher R-wave amplitudes and lower capture thresholds than those of HBP. [17] Importantly, LBBP can correct left bundle branch block (LBBB) and right bundle branch block at a low capture threshold, [20] whereas HBP requires a high pacing output to correct LBBB, [10] meaning that the electrical current must penetrate the pathological region to reach the normal left bundle branch for LBBB correction. LBBP can also theoretically achieve cardiac resynchronization in patients with block within the His bundle. [26] Therefore, LBBP can effectively produce better ventricular synchronization and may be superior to CRT based on biventricular pacing. Nonetheless, the safety and efficacy of LBBP need to be further verified by randomized clinical studies directly comparing HBP and LBBP with CRT in patients.
LBBP, by contrast, provides good LV electrical and mechanical synchrony, similar to that of native conduction. It is well known that the QRSd is an accepted indicator for the evaluation of electrical synchrony. Our analysis showed that the paced QRSd and the LV mechanical synchronization parameter measured by echocardiography in LBBP capture were similar to those in the native-conduction mode, which indicates that LBBP can bring about synchronized ventricular contraction. Conversely, LV electrical and mechanical synchrony was significantly better in the LBBP group than in the RVP group, since pacing from the RV causes abnormally late activation of the LV free and lateral walls and consequent electromechanical dyssynchrony. [3] This also explains the clinical adverse events associated with RVP, such as heart failure, atrial fibrillation, and pacemaker cardiomyopathy. Interestingly, an LBB potential can be recorded during the implantation procedure as an indication of direct LBBP, but an LBB potential is not observed in all LBBP procedures. Studies have shown that an LBB potential can be recorded in approximately 50% to 80% of implants, [18,33] similar to our result of 64.7%. In the study by Hou et al, [17] patients with LBBP with LBB potentials had shorter Stim-LVAT and better LV mechanical synchrony than those without potentials. However, our analysis found that Stim-LVAT and paced QRSd in LBBP capture were unrelated to the presence of LBB potentials. The mechanism may be that stimulation initially activates the LV septal subendocardium and then propagates to nearby conductive tissue or directly to the conduction system; large, randomized, multicenter studies with longer-term follow-up are needed for conclusive evidence. However, since pacing is intended to correct conduction disease or to stimulate the bundle branch to produce rapid conduction with a normal or near-normal electrocardiogram, it may not be necessary to record an LBB potential. Consistently, the surgical method of LBBP reported by Zhang et al, [14] performed without the guidance of intracardiac electrograms, proved to be effective.
Theoretically, LV function in patients with LBBP should be superior to that with RVP, because LV synchrony in the LBBP group was significantly better than in the RVP group. However, no statistical difference in ejection fraction existed between the 2 groups during short-term follow-up in our meta-analysis. One of our included studies showed that LBBP was associated with better LV function than RVP (higher LVEF, 64.00 ± 3.03 vs 59.73 ± 6.73, P = 0.01) during 6 months of follow-up. [21] A major difference from the other included studies was that up to 64% of its patients had concomitant BBB, and the BBB was corrected in 84% of these patients in the LBBP group. Moreover, the paced QRSd in LBBP capture was significantly shorter than at baseline (112.27 ± 8.57 vs 131.64 ± 17.8), [21] indicating significant improvement in postoperative LV synchronization, so the LVEF of the LBBP group increased compared with RVP. In addition, more and more studies have reported that patients with HF and BBB can benefit significantly from LBBP. [26,31,33] There are therefore 2 possible explanations for our result: first, the small sample size and short follow-up time; second, LBBP may mainly improve LVEF in patients with HF combined with BBB, whereas LBBP and RVP have little effect on LVEF in patients with normal cardiac function and narrow QRSd during short-term follow-up.
The cathode of the LBBP lead is embedded in the myocardium, as in RVP. Accordingly, LBBP showed a stable low pacing threshold and high R wave amplitude in our analysis, and a higher ventricular impedance at implantation compared with the RVP group, which was no longer different from the RVP group at short-term follow-up. The electrode tip of LBBP may cause more myocardial injury, and the resulting excessive myocardial edema in the early stage makes the electrode impedance high at implantation; as the edema subsides, the impedance gradually decreases and stabilizes. Other studies have also confirmed good pacing parameters for LBBP. [17,18,33] The complication rate of LBBP was low and not different from that of RVP in our analysis. In one [12] of the included studies, one lead perforation was observed in the LBBP group, manifested mainly as a rapid decline of impedance during the operation. It is therefore necessary to monitor changes in electrode impedance in a timely manner to avoid acute or delayed ventricular septal perforation and to ensure capture of the LBB. Beyond this method, recent reports have proposed several ways to monitor lead depth: the fulcrum sign, sheath angiography, changes in the QRS notch in lead V1, pacing from the ring electrode, and observing fixation beats (ectopic beats of qR/rsR' morphology in lead V1). [34] In addition, myocardial damage deserves attention in LBBP. A recent study [35] showed that the number of attempts at lead positioning was an independent risk factor for myocardial damage, so an excessive number of attempts should be avoided. It is worth noting that patients with intraventricular block, hypertrophic cardiomyopathy, or ventricular septal infarction should not be treated with LBBP.
Limitations
This meta-analysis has some limitations. First, several indicators showed high heterogeneity, although the sensitivity analysis indicated that this did not affect the reliability of the results. The heterogeneity may be attributed to differing patient diagnoses, multiple right ventricular pacing sites, and differences in operator experience and methodological quality; in particular, the inconsistency of the RVP location across the included studies may have contributed. Second, the small sample sizes of the included studies may affect the stability of the outcome indicators, reduce the detection power, and possibly bias the study results. Third, the included studies had short follow-up, with only 2 studies followed for 6 months. Fourth, only 7 studies were included in our meta-analysis, and no randomized controlled trials were included. Thus, more well-designed, large-scale RCTs with longer-term follow-up are needed to validate the results.
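For context, the degree of heterogeneity flagged here is conventionally quantified with Cochran's Q and the I² statistic. A minimal sketch of that computation, using hypothetical effect sizes and variances (not values from the included studies), might look like:

```python
import numpy as np

effects   = np.array([0.30, 0.45, 0.10, 0.55, 0.25])  # hypothetical study effect sizes
variances = np.array([0.02, 0.05, 0.03, 0.04, 0.02])  # hypothetical within-study variances
w = 1.0 / variances                                    # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)               # fixed-effect pooled estimate
Q = np.sum(w * (effects - pooled) ** 2)                # Cochran's Q statistic
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100                      # % of variability beyond chance
print(f"Q = {Q:.2f}, I^2 = {I2:.1f}%")
```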
Conclusions
Our systematic review and meta-analysis confirmed that LBBP is a safe and effective method for bradycardia arrhythmias. Compared with RVP, LBBP markedly preserves ventricular electrical and mechanical synchrony. In addition, LBBP showed a stable low pacing threshold and high R wave amplitude, with a complication rate not significantly different from that of RVP. However, LBBP and RVP have little effect on LVEF in patients during short-term follow-up. | 2021-07-07T13:10:26.352Z | 2021-07-09T00:00:00.000 | {
"year": 2021,
"sha1": "ffe3b75267d87f2a7bfb9473ebb4f21682b5f12e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1097/md.0000000000026560",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ffe3b75267d87f2a7bfb9473ebb4f21682b5f12e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56472568 | pes2o/s2orc | v3-fos-license | Sea quark QED effects and twisted mass fermions
We show that maximally twisted mass fermions can be employed to regularize on the lattice the fully unquenched QCD+QED theory with vanishing $\theta$-term. We discuss how the critical mass of the up and down quarks can be conveniently determined beyond the electroquenched approximation by imposing that certain symmetries of continuum QCD+QED, which are broken by Wilson terms, get restored in the continuum limit. A mixed action setup is outlined that allows one to extend beyond the electroquenched approximation the computation (with only O($a^2$) artifacts) of the leading isospin breaking corrections to physical observables using the RM123 method and (pure QCD) ETMC gauge ensembles with $N_f=2+1+1$ dynamical quark flavours.
Introduction
In refs. [1,2] a strategy for evaluating the leading isospin breaking (LIB) corrections to hadronic quantities has been proposed. It is based on expanding the full Q(C+E)D observables to first order in powers of the small quantities (m_d − m_u)/Λ_QCD and α_em (the so-called RM123 approach). In this way the computational task is reduced to that of evaluating hadronic correlators with insertions of electromagnetic (e.m.) currents or quark scalar densities in the theory with no electromagnetism and no u-d mass splitting (isosymmetric theory).
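Schematically, and with illustrative notation rather than the exact symbols of refs. [1,2], the RM123 expansion of an observable reads:

```latex
\langle O \rangle^{\mathrm{QCD+QED}} \simeq \langle O \rangle^{\mathrm{iso}}
+ (m_d - m_u)\,\left.\frac{\partial \langle O \rangle}{\partial (m_d - m_u)}\right|_{\mathrm{iso}}
+ \alpha_{\mathrm{em}}\,\left.\frac{\partial \langle O \rangle}{\partial \alpha_{\mathrm{em}}}\right|_{\mathrm{iso}}
+ \ldots
```

where, at the level of the path integral, each derivative corresponds to correlators with insertions of the scalar densities or e.m. currents mentioned above.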
In the same papers the viability of the RM123 approach was tested by evaluating the LIB corrections to hadron masses as well as the Dashen's theorem breaking parameter ε_γ using the N_f = 2 ensembles generated by ETMC [3,4]. The study of LIB effects in leptonic meson decay rates, started in ref. [2] for the correction ∝ (m_d − m_u), has recently been extended to the LIB e.m. corrections (see the talks by Tantalo [5] and Simula [6]) following the general strategy of refs. [7,8], which allows one to keep under control the infrared divergences arising in the intermediate steps of the calculation.
All these investigations have so far been carried out in the electro-quenched approximation, i.e. by treating sea quarks as if they were electrically neutral (diagrammatically this means neglecting all contributions with photons attached to sea quark loops). Though this is a reasonable first approximation, it appears difficult to reliably control the systematic error it induces on the computed LIB effects. Here we discuss how twisted mass LQCD [9,10,11] can be conveniently combined with the RM123 approach in order to evaluate¹ LIB corrections to hadronic observables beyond the electro-quenched approximation in the theory with dynamical u, d as well as s and c quarks.
A lattice regularization of Q(C+E)D with maximally twisted Wilson fermions
The use of maximally twisted Wilson fermions allows one to avoid O(a) lattice artifacts in physical observables, at the price of introducing parity and isospin breakings at finite lattice spacing, which come on top of the physical isospin violations. One may thus worry that in the presence of e.m. interactions a delicate tuning of bare mass parameters is needed in order to obtain a continuum effective action with no strong and e.m. θ-terms. We show here that this is not the case if one works at maximal twist, i.e. with M_0 = M_cr for each quark pair.
We discuss the conceptual point in LQ(C+E)D with u and d (non-degenerate) quarks. The quark lattice action of Q(C+E)D for a maximally twisted isospin doublet ψ = (ψ_u, ψ_d) reads [12] where U_μ(x) and E_μ(x) are the strong and e.m. (non-compact QED) gauge links. The flavour structure of the latter is due to the unequal electric charges of the up and down quarks. In flavour space the charge operator reads Q = e(𝟙/6 + τ_3/2), where e is the electric charge. Consequently the critical mass counterterm, M_cr, also takes a diagonal matrix form. We stress that in eq. (2.1) the critical Wilson term, −ψ̄ iγ_5 τ_3 W_cr ψ, is chirally twisted in the τ_3 "isospin direction" to comply with e.m. gauge invariance, which leads to a complex quark determinant (see sect. 4 on how the resulting problem can be circumvented in the RM123 approach).
For convenience we separate out the quark flavours and rewrite eq. (2.1) in the case of generic twist angles, called θ_u and θ_d (maximal twist is recovered e.g. for θ_u = −θ_d = π/2). Since lattice singlet axial rotations are not anomalous, twist phases can be freely moved from the Wilson to the mass terms by means of axial rotations, by which the lattice action (2.4) can be brought into a form with untwisted Wilson terms W^{u,d}_cr, where we have introduced the lattice bare quark mass parameters and density operators. Symmetries of the lattice action (2.7) and power counting arguments imply [13] that the local effective action of the corresponding continuum theory can be written in the form (2.11), where χ_f, χ̄_f, A (photon) and A (gluon) now stand for suitably normalized continuum fields, and the multiplicatively renormalizable mass parameters take (see e.g. ref. [12]) the corresponding renormalized form. Noting that the twisted mass terms i(μ̄_u P^χ_u + μ̄_d P^χ_d) and the action terms tr(GG̃) and FF̃ (which can only appear with coefficients ∝ θ̄_f, odd functions of μ̄_f/m̄_f, f = u, d) are not independent operators in the continuum theory, as they are related by anomalous axial U(1) rotations of the quark fields (see below), one checks that the form (2.10) of the continuum action is indeed the most general one. Trading m̄_f and μ̄_f for the alternative renormalized parameters M̄_f and θ̄_f (f = u, d), the continuum effective action (2.10) can be rewritten as eq. (2.15). In the formal continuum theory the analogs of the U(1)-axial rotations (2.5) and (2.6) are anomalous, so the effective action (2.15) can be equivalently cast into the form (2.16). Eq. (2.16) shows that the lattice action (2.4) leads to a continuum effective theory with vacuum angle θ̄_u + θ̄_d. In general, θ̄_u + θ̄_d is not simply ∝ θ_u + θ_d due to e.m. interactions. However, from eq. (2.14) one checks that at maximal twist (e.g. θ_u = −θ_d = π/2) the continuum effective theory has θ̄_u = −θ̄_d = π/2, hence a vanishing vacuum angle and no undesired parity violation. In other words, once M_cr is determined we get Q(C+E)D for two non-degenerate quarks.
Fixing the critical mass in the twisted lattice Q(C+E)D theory
In our setting with two non-degenerate flavours we first have to non-perturbatively determine the critical mass M_cr (see eq. (2.3)) appearing in eq. (2.1). As we shall work at first order in α_em, we need to accordingly expand the parameters m_cr and m̄_cr, where m^LQCD_cr = w(g²)/a is the critical mass of the isosymmetric theory. From now on a suitable e.m. gauge fixing and a procedure (e.g. QED_L) for removal of the photon zero mode are assumed. Following the strategy outlined in [14,9,11] one can determine M_cr by enforcing the chiral WTIs of Q(C+E)D. For instance, with obvious notation for quark bilinears in the ψ-basis (e.g. P_{1,2,3} = ψ̄ γ_5 (τ_{1,2,3}/2) ψ) and δq ≡ q_u − q_d = 1, a way to fix M_cr might be to enforce the formal continuum relations (3.2) and (3.3). This procedure is rather practical as the flavour structure of the two relations is such that, once m^LQCD_cr is known, it is possible to separately determine δ_em (from eq. (3.2)) and δ̄_em (from eq. (3.3))². A look at eqs. (3.2) and (3.3) shows that only parity violating correlators come into play. This fact, together with a reconsideration of the condition usually employed in LQCD to determine the critical mass, suggests a numerically simpler way to fix m_cr and m̄_cr.
To explain the idea we first discuss the situation one meets in twisted mass LQCD. In this case the LQCD Symanzik effective action [15] for m_0 away from its critical value takes the form (3.4). The undesired term ∝ [ψ̄ iγ_5 τ_3 ψ] in (3.4) can be eliminated from Γ_LQCD by enforcing the appropriate condition³. Indeed from the Symanzik expansion we obtain, with m̄_u = m̄_d = 0 (but μ̄_u > 0, μ̄_d < 0 and α_em = 0), the effective action (3.6). Our claim is that m_cr and m̄_cr can be determined to first order in α_em by enforcing the corresponding conditions. In fact, based on the effective action (3.6) and using parity invariance of L_{Q(C+E)D}⁴, we can write the relevant relations. Up to O(a²) lattice artifacts, precisely the same set of renormalized correlation functions can also be evaluated by using the isosymmetric lattice action (mixed action approach), where S_YM(U, E) denotes the gluon and photon lattice action and S^ℓh_33 is given in eq. (4.1). The virtual "sea" effects of the quark fields ψ_h, ψ̄_h are canceled by those of the complex ghost field Φ_h but reintroduced through the fields ψ^sea_h, ψ̄^sea_h. Of course the mixed action theory S_mix must be simulated at (fixed as a → 0) renormalized parameters (ĝ^i_0) matching those of the theory [S_YM(U, E) + S^ℓh_33(μ_ℓ, 0, μ_h, ε_h)]|_{e=0} one should have studied in principle, and in particular with equal values of the valence and sea renormalized masses of each quark flavour. In the isosymmetric theory this implies working with equal values of the valence and sea bare mass parameters, except for ε_h = Z^0_P Z^{−1}_S ε^sea_h. Only e.m. corrections give rise to the more complicated pattern of eq. (2.13). The proof of the statements above is straightforward [13] and can be given along the lines of ref. [10]. We note that the form of the valence c-s sector in the action (4.3) is such that, within the RM123 approach for LIB corrections to hadronic observables, one has to evaluate precisely the same set of Wick contractions that would be computed if the action S^ℓh_33 were adopted, but employing gauge ensembles that are generated with no "complex phase problems". This is so because ε_ℓ = 0 and the c-s sea quark effects stem from the action term in the second line of eq. (4.3). | 2016-12-07T14:30:09.000Z | 2016-12-07T00:00:00.000 | {
"year": 2016,
"sha1": "7c4653a18a36cdad06d6901527778b4da08f42a8",
"oa_license": "CCBYNCND",
"oa_url": "https://pos.sissa.it/256/320/pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "7c4653a18a36cdad06d6901527778b4da08f42a8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
849210 | pes2o/s2orc | v3-fos-license | Treatment of ventilator-associated pneumonia and ventilator-associated tracheobronchitis in the intensive care unit
Objectives: To assess current practices of different healthcare providers for treating extensively drug-resistant (XDR) Acinetobacter baumannii (AB) infections in tertiary-care centers in Saudi Arabia. Methods: This cross-sectional study was performed in tertiary-care centers of Saudi Arabia between March and June 2014. A questionnaire consisting of 3 parts (respondent characteristics; case scenarios on ventilator-associated pneumonia [VAP] and tracheobronchitis [VAT], and antibiotic choices in each scenario) was developed and sent electronically to participants in 34 centers across Saudi Arabia. Results: One-hundred and eighty-three respondents completed the survey. Most of the respondents (54.6%) preferred to use colistin-based combination therapy to treat VAP caused by XDR AB, and 62.8% chose to continue treatment for 2 weeks. Most of the participants (80%) chose to treat VAT caused by XDR AB with intravenous antibiotics. A significant percentage of intensive care unit (ICU) fellows (41.3%) and clinical pharmacists (35%) opted for 2 million units (mu) of colistin every 8 hours without a loading dose, whereas 60% of infectious disease consultants, 45.8% of ICU consultants, and 44.4% of infectious disease fellows preferred a 9 mu loading dose followed by 9 mu daily in divided doses. The responses for the scenarios were different among healthcare providers (p<0.0001). Conclusion: Most of the respondents in our survey preferred to use colistin-based combination therapy and intravenous antibiotics to treat VAP and VAT caused by XDR AB. However, colistin dose and duration varied among the healthcare providers.
Mechanical ventilation is commonly used as a therapeutic option when caring for critically ill patients in the intensive care unit (ICU). Although mechanical ventilation may be lifesaving, it is associated with an increased risk of infections, including ventilator-associated pneumonia (VAP) and ventilator-associated tracheobronchitis (VAT). VAP has an estimated incidence of 10-25% and VAT of 1.4-11%, with an estimated all-cause mortality of 25-50% for VAP and 39% for VAT. [1][2][3] Late-onset VAP (occurring after 5 days) is usually caused by multidrug-resistant (MDR) organisms and is associated with increased morbidity and mortality. 1,2 The increased incidence of infections with these MDR pathogens is a major concern to healthcare providers worldwide, in particular Acinetobacter species, which were recognized as a cause of infection in critically ill patients in the past decade. With the increased use of broad-spectrum antibiotics, MDR and extensively drug-resistant (XDR) Acinetobacter baumannii (AB) have emerged as common pathogens causing late-onset VAP in the Middle East and Europe. [4][5][6] The attributable mortality of AB infection was 10-43% for ICU patients and 8-23% for in-hospital patients (those who did not require ICU). 7 These high rates are likely related to the limited number of drugs available to treat XDR strains, as AB has exceptional pathogenicity and the capability to develop inherent and acquired resistance. Current knowledge on its treatment is insufficient, as high-quality evidence and clinical practice guidelines are lacking. There are many controversies regarding different treatment options, such as the superiority of combination therapy over monotherapy and the optimal dose of colistin, which is usually the only antibiotic to which XDR AB is susceptible. 8,9 These controversies have led to variation in practice. The objective of this study was to investigate the current practices of clinicians and clinical pharmacists (CPs) caring for patients with XDR AB infections in Saudi Arabia. A survey was conducted to answer the following questions: Is combination therapy superior to monotherapy? Is there any role for colistin nebulization? What is the optimal dose of colistin?
Methods. This cross-sectional study was performed in tertiary-care centers of Saudi Arabia between March and June 2014. The PubMed database was used to find prior related research.
The study subjects were physicians who specialized in infectious disease (ID), critical care medicine, and clinical pharmacology in all major hospitals in different regions of Saudi Arabia. Informed consent was not required as the survey consisted of voluntary anonymous responses to a web-based questionnaire with no risk of breaching the participants' confidentiality.
The sample size calculation was not performed a priori as we aimed at surveying all the specified healthcare providers in Saudi Arabia that were accessible via emails or social networks. After personal communication with acquaintances in the target hospitals, snowball sampling was used to reach the target healthcare providers. Assuming that the number of healthcare providers who were the target for this survey was 300, a reasonable estimate, the calculated sample size would be 169 at 95% confidence interval and 5% margin of error.
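As a check, the reported figure of 169 is what Cochran's formula with a finite-population correction yields under standard assumptions (p = 0.5, z = 1.96 for a 95% CI, margin of error d = 0.05); a minimal sketch:

```python
import math

N, z, p, d = 300, 1.96, 0.5, 0.05
n0 = z**2 * p * (1 - p) / d**2   # infinite-population sample size (Cochran)
n = n0 / (1 + (n0 - 1) / N)      # finite-population correction for N = 300
print(math.ceil(n))              # -> 169, matching the reported value
```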
Item generation and development of questions. Five experts (3 ICU and 2 ID physicians) generated items through group discussion. Multiple meetings and discussions were held to shorten the list of items, reducing the burden on respondents and minimizing redundancy while retaining the important items. Questions were developed based on the items of interest and were structured as multiple-choice questions that allowed a single answer for each question. The questionnaire was piloted with 10 participants before it was finalized. Feedback was obtained on the clarity and terminology of the questions, the questions were adjusted accordingly, and the questionnaire was then retested by the same 10 participants.
The final questionnaire was administered in English and investigated 3 components: respondent characteristics, antibiotic choice for 3 different clinical scenarios (VAP, VAT, and septic shock), and the duration of therapy for each of the clinical scenarios. Respondent characteristics included the medical specialty, job title (consultant, or fellow/registrar), and number of years practicing in that specialty. Questions regarding antibiotics included the antibiotic preferred to treat VAP caused by XDR AB, monotherapy versus combination therapy, duration, dose (in patients with normal kidney function and in those with acute kidney injury [AKI]), treatment of VAT caused by XDR AB, and the route (intravenous versus nebulization) used to treat VAT. The questionnaire also contained questions to assess the physician's specialty title, years of experience, level of care, and perceptions of current clinical practice.
Disclosure.
Authors have no conflict of interests, and the work was not supported or funded by any drug company. Questionnaire administration. The questionnaire was distributed to all centers (N=41) that provided ICU care in all major geographic regions of Saudi Arabia. SurveyMonkey (www.surveymonkey.com) was used to design and distribute the questionnaire and to collect the responses. Emails and smartphone applications (Facebook Messenger and WhatsApp) were also used to distribute the questionnaire link and to send reminders. Three reminders were sent to all recipients one week apart, and the data were collected between March and June 2014. The research coordinators were responsible for reminding non-respondents by phone or email. No financial or other incentives were provided to respondents.
Statistical analysis. Data were analyzed using MedCalc Statistical Software version 13.2.2 (MedCalc Software bvba, Ostend, Belgium). The categorical study variables were presented as frequencies with percentages. Responses to the survey items were compared according to the respondents' specialty, job title, and length of clinical experience using Pearson's Chi-square test. Using SPSS for Windows, Version 16.0 (SPSS Inc., Chicago, IL, USA), we performed multivariate binary logistic regression analysis to determine the predictors of the most prevalent management options, defined as those options chosen by the highest proportion of respondents. The independent variables in the model were the following: specialty (clinical pharmacists and intensivists, with ID physicians being the reference group), position (consultants versus other healthcare practitioners), length of clinical experience (>10 years versus <10 years), and type of hospital (tertiary-care versus other hospitals). The results were presented as odds ratios (ORs) with the corresponding 95% confidence intervals (CIs). A p-value of <0.05 was used to indicate statistical significance.
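To make the regression output concrete, here is a minimal, self-contained sketch of this kind of analysis on synthetic (randomly generated, purely illustrative) respondent-level data; variable names are hypothetical, and statsmodels is used in place of SPSS:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

np.random.seed(0)
# Purely illustrative binary data for N = 183 respondents
df = pd.DataFrame({
    "chose_iv_colistin": np.random.binomial(1, 0.5, 183),
    "consultant":        np.random.binomial(1, 0.4, 183),
    "experience_ge10y":  np.random.binomial(1, 0.3, 183),
    "tertiary_care":     np.random.binomial(1, 0.6, 183),
})
X = sm.add_constant(df[["consultant", "experience_ge10y", "tertiary_care"]])
fit = sm.Logit(df["chose_iv_colistin"], X).fit(disp=0)

or_table = np.exp(fit.conf_int())    # 95% CI bounds on the odds-ratio scale
or_table["OR"] = np.exp(fit.params)  # exponentiated coefficients = odds ratios
print(or_table)
```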
Results. A total of 204 healthcare practitioners (174 physicians and 30 CPs) from 29 government hospitals and 12 private hospitals responded (68% response rate), and 183 completed the survey. The main characteristics of the respondents are summarized in Tables 1 & 2. Most of the respondents were specialized in intensive care medicine (70.6%), followed by ID and clinical pharmacology (14.7% each).
Responses of all study subjects.
Most of the physicians (54.6%) preferred colistin-based combination therapy as the first-choice treatment for VAP caused by XDR AB. The remaining 45.4% believed that IV colistin alone was sufficient. Carbapenems were the most frequently preferred antibiotics in combination with colistin (26.8%), followed by tigecycline (16.4%). Sixty-three percent of respondents believed that 2 weeks of therapy were sufficient for treating VAP caused by XDR AB (Table 3). For VAT caused by XDR AB, 80% of respondents agreed that patients should be treated with antibiotics (p<0.0001). Approximately 38% of respondents preferred to use 2 million units (mu) (160 mg) of colistin every 8 hours without a loading dose, whereas 35.8% chose a 9 mu (720 mg) loading dose of colistin followed by 3 mu (240 mg) every 8 hours or 4.5 mu (360 mg) every 12 hours to treat VAP caused by XDR AB (p<0.0001). In case of AKI, 89.8% of respondents preferred to modify colistin dosing according to the creatinine clearance. When septic shock was present in patients with late-onset VAP on broad-spectrum antibiotics, most of the respondents (58.7%) preferred to add IV colistin empirically (Table 3). Responses according to position. Table 4 describes responses according to position. Most fellows (61% of ID and 58% of ICU) and ID consultants (55%) preferred combination therapy to treat VAP caused by XDR AB. Half of the ICU consultants and 52.4% of the CPs opted for IV colistin alone. Almost 41% of ICU fellows and 35% of CPs preferred to treat XDR AB VAP patients with normal renal function using 2 mu of colistin every 8 hours without a bolus dose. Most consultants (60% of ID and 45.8% of ICU) and ID fellows (44.4%) preferred to use a colistin loading dose of 9 mu followed by 3 mu every 8 hours or 4.5 mu every 12 hours. Most CPs (66.7%), ICU physicians (64.3%), and ID physicians (57.8%) believed that a 2-week regimen was appropriate. When treating patients with late-onset VAP and shock despite broad-spectrum antibiotics, 80% of ID consultants recommended adding colistin empirically compared with 36.8% of CPs. Responses according to clinical experience. As described in Table 5, more than 50% of the subjects at all experience levels preferred to treat patients with colistin for 14 days. Among the subjects who had 5-10 years of experience, 38.3% favored using IV colistin alone to treat VAT, while the respondents with more experience chose not to treat (33.3%) or to treat with inhaled colistin alone (34.5%). More than two-thirds of the subjects preferred using either 2 mu (160 mg) every 8 hours without a loading dose or a loading dose with a higher maintenance dose of intravenous colistin to treat patients with XDR AB VAP, irrespective of the length of experience. For late-onset VAP patients in shock despite broad-spectrum antibiotics, the empirical coverage recommendations were similar regardless of the length of experience. Predictors of the prevalent management choices. As represented in Table 6, IV colistin alone was the prevalent choice for VAP management. Healthcare practitioners at tertiary-care hospitals were more likely to choose this response compared with those working at other hospitals (OR=1.96; 95% CI: 1.07-5.58). Specialty, job title, and the length of experience did not predict this response.
For VAP treatment duration, 14 days was the prevalent choice. None of the independent variables entered in the multivariate model predicted this response. Intravenous colistin alone was the prevalent choice for the treatment of VAT. Being a consultant (OR=2.38; 95% CI: 1.03-5.49) and a length of experience of 10 years or more (OR=0.07; 95% CI: 0.02-0.26) were predictors of this response compared with healthcare practitioners of a different professional status and less experience.
Two million units (160 mg) of IV colistin every 8 hours without a loading dose was the prevalent dose for a creatinine clearance of 70 ml/min or more. None of the independent variables entered in the multivariate model predicted this response. For AKI after IV colistin, modifying the colistin dose according to creatinine clearance was the prevalent choice. None of the independent variables entered in the multivariate model predicted this response. For septic shock not improving after 3 days of meropenem and vancomycin, "adding IV colistin" was the prevalent choice. The CPs were less likely to choose this response compared with ID physicians (OR=0.19; 95% CI: 0.05-0.66). The other factors did not predict this response.
Discussion. The purpose of this survey was to assess current knowledge and practice among physicians and CPs in tertiary centers of Saudi Arabia. The main finding was that most respondents recommended treating VAP caused by XDR AB with a combination of IV colistin and a carbapenem for 2 weeks. Most respondents recommended using 2 mu of colistin every 8 hours, preceded by a loading dose, and 50% of participants opted for empirical colistin therapy when treating patients who had VAP and persistent septic shock while on broad-spectrum antibiotics. Most consultants preferred to treat VAT with IV colistin. In contrast, 38.1% of CPs and 33.3% of ID fellows preferred to use inhaled colistin alone. Most respondents (54.6%) recommended colistin-based combination therapy for the treatment of VAP due to XDR AB. Although we did not address the downsides of using monotherapy in our study, heteroresistance, rapid selection for resistance, toxicity, and lower efficacy are often cited barriers to this treatment. 10 Although synergism with combination therapy has been shown in in vitro laboratory studies, clinical data have revealed conflicting results. A retrospective study of 27 tertiary-care centers in Turkey that included 250 patients with XDR AB infection showed that microbiological eradication and in-hospital survival rates were significantly higher with colistin-based combination therapy. 11 In another retrospective study, Kalin et al 12 showed a non-significant improvement in cure and bacteriological clearance rates in patients treated with a combination of colistin and sulbactam compared with those treated with IV colistin monotherapy. Although carbapenems were the preferred add-on agents in our survey, a previous study 11 showed these agents did not improve 14-day mortality or clinical and microbiological clearance when used in combination therapy. Furthermore, in a recent retrospective study by Khawcharoenporn et al, 13 28-day mortality and hospital length of stay were not significantly different among colistin-based regimens in a cohort of 236 patients with XDR AB pneumonia. The optimal duration of VAP treatment caused by AB is controversial. In our study, more than 60% of respondents preferred to administer a 14-day course. However, a systematic review favored a shorter antibiotic course (7 days) for treating VAP compared with a longer course (10 days). 14 The recurrence rate, however, was higher in the group of patients who were infected with non-fermenting Gram-negative bacilli and received the shorter course. 14 These data demonstrate variation in practice and reflect the need for studies examining the effects of treatment duration on outcomes in patients with VAP due to MDR Gram-negative bacteria. A recent study 15 examining the optimal dosing of colistin showed that giving a 9 mu loading dose of colistin methanesulfonate followed by 4.5 mu every 12 hours resulted in early achievement of therapeutic concentrations of colistin in the blood of patients with normal kidney function. 15 In our study, there were significant differences in the preferred doses. Most of the respondents opted for 2 mu every 8 hours; however, most of the consultants believed that a loading dose was essential.
The rationale for this approach is that, without a loading dose, a steady-state therapeutic concentration of colistin is reached only after several days, delaying clinical cure; a recent study 16 suggested that a loading dose with a high-dose, extended-interval colistin methanesulfonate regimen achieved a high clinical cure rate.
More than 50% of respondents preferred to add IV colistin empirically for the treatment of late-onset VAP with septic shock despite broad-spectrum antibiotics. The study by Kumar et al 17 revealed that delayed antibiotic therapy was associated with increased mortality in patients with septic shock. The choice of antibiotics in patients with VAP and septic shock depends on local epidemiology, risk factors, recent hospitalization, antibiotic use, and previous colonization or infection with resistant strains. 18 Rello et al 19 demonstrated that the empirical use of colistin and meropenem decreased length of stay in the ICU for pneumonia patients with a baseline AB prevalence of more than 10%. Given the high prevalence of XDR AB infection or colonization in most ICUs in Saudi Arabia, adding IV colistin is appropriate for late-onset VAP with septic shock. 20 Most clinicians (80%) in our survey recommended treating ICU patients with VAT, but the administration route and the agents to be used varied among respondents. Treatment of VAT remains a topic of debate in the current literature, 21 and 2 randomized controlled trials (RCTs) have addressed this issue. The first RCT 22 demonstrated that treated patients had lower mortality and fewer days on mechanical ventilation compared with untreated patients, and the second RCT also showed a shorter duration of mechanical ventilation in the treatment group. 23 Interestingly, 36 participants (19.7%) in our study supported the use of nebulized colistin alone, which has been shown in a small study to reduce the rate of progression to VAP, the development of resistance, and the need for systemic antibiotic use. 23 However, the effect of nebulized colistin, either alone or in combination with other antibiotics, for the treatment of VAP or VAT is controversial. Korbila et al 24 described that a combination of nebulized and IV colistin resulted in a better cure rate compared with IV colistin alone in 121 patients with VAP, without a significant change in mortality, and similar findings have been shown in a recent case-control study of 208 patients with VAP. 25 A recent RCT 26 revealed that patients with VAP caused by Gram-negative bacteria showed no clinical improvement in response to adjunctive nebulized colistin therapy compared with patients who were only given systemic antibiotics. In addition to the local side effects of colistin nebulization, such as pneumonitis, bronchospasm, and respiratory failure, the development of drug resistance is another concern among clinicians. A recent study 27 demonstrated that 4 out of 12 patients receiving colistin nebulization developed resistance to the antibiotic. However, more research in a wider range of patients is essential to elucidate this phenomenon of drug resistance. Since there are no clear data on the treatment of VAP and VAT caused by XDR AB, we hypothesized that the workplace, specialty, job title, and clinical experience of the healthcare practitioner would influence the dose, duration, and type of antibiotics. However, we found that these factors had little effect on most responses, likely due to the absence of clear recommendations and guidelines.
Study limitations. First, most of the participants were from Riyadh, limiting the generalizability of our results. Second, most respondents were in the ICU field and few ID specialists responded, indicating that we were unable to survey all physician stakeholders. Third, physicians were not asked which guidelines they based their choices on. Fourth, the self-reported practices in our survey may not reflect the actual practice of respondents (a limitation inherent in all surveys). We adopted a systematic approach in the development of this survey and had a good response rate of 68%.
In conclusion, this study revealed that clinicians of different specialties and lengths of experience varied widely in their treatment of VAP and VAT caused by XDR AB. These differences are likely related to the lack of high-quality evidence and clinical practice guidelines, as well as the conflicting results of the available studies. Given the increasing incidence of VAP and VAT caused by XDR Gram-negative bacteria and the high mortality rate associated with these infections, large multicenter RCTs on the benefits of colistin-based combination therapy versus colistin monotherapy, the value of a colistin loading dose versus no loading dose, and the value of lower versus higher maintenance colistin doses are warranted to help guide clinical practice and effectively treat these infections. | 2017-06-18T16:03:07.514Z | 2015-12-01T00:00:00.000 | {
"year": 2015,
"sha1": "ae19e8a96148fa1c05472f51905df88bcad73a0b",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.15537/smj.2015.12.12345",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae19e8a96148fa1c05472f51905df88bcad73a0b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
105201358 | pes2o/s2orc | v3-fos-license | Effect of Kupffer cells depletion on ABC phenomenon induced by Kupffer cells-targeted liposomes
Accelerated blood clearance (ABC) phenomenon is common in many PEGylated nanocarriers, and its mechanism has not been completely elucidated yet. In this study, the correlation between Kupffer cells (KCs) and the ABC phenomenon was investigated by inducing the ABC phenomenon with KCs-targeted liposomes and by depleting KCs. Specifically, the 4-aminophenyl-α-D-mannopyranoside (APM) lipid derivative DSPE-PEG2000-APM (DPM) and the 4-aminophenyl-β-L-fucopyranoside (APF) lipid derivative DSPE-PEG2000-APF (DPF) were conjugated and used to modify alendronate sodium (AD) liposomes to specifically target and deplete KCs. The dual-ligand modified PEGylated liposomes (MFPL) showed a stronger ability to damage KCs in vitro and in vivo, which also indirectly indicates that dual-ligand modification targets KCs more effectively. Moreover, the hepatic biodistribution and pharmacokinetics directly proved that MFPL had a stronger targeting ability to KCs. In addition, in depleted rats, the plasma concentrations and splenic biodistribution of MFPL and PEGylated liposomes (PL) were significantly elevated and the hepatic biodistribution was significantly reduced, demonstrating that KCs play an important role in the elimination of nanoparticles. Furthermore, the ABC phenomenon upon the second injection of PL was stronger in KCs-depleted rats than in normal rats, indicating that depletion of KCs prolonged the circulation of the first PL injection, which repeatedly stimulated B-cells in the marginal zone of the spleen and caused them to secrete more IgM antibodies. This also illustrates that anti-PEG IgM plays the major role compared with KCs. Most importantly, KCs-targeted liposomes induced a stronger ABC phenomenon than PL in normal rats, showing that, at the same IgM concentration, the more the KCs were stimulated, the stronger the induced ABC phenomenon. However, in depleted rats, this difference in the ABC phenomenon between PL and MFPL no longer existed, further demonstrating that KCs participate in and play a certain role in the ABC phenomenon.
Introduction
Accelerated blood clearance (ABC) phenomenon is common among many PEGylated pharmaceutical preparations and changes the pharmacokinetics and biodistribution of subsequently injected PEGylated nanoparticles. The generally accepted mechanism of the ABC phenomenon is as follows [1-3]: the initially administered PEGylated nanoparticles serve as TI-2 antigens, which stimulate B cells in the marginal zone of the spleen in the induction phase. Anti-PEG IgM is then secreted and is thought to accelerate the elimination of the secondarily injected nanoparticles from the blood. However, the cause of the ABC phenomenon induced by PEGylated nanoparticles has not been fully elucidated. Ishida et al. reported that splenectomy failed to completely eliminate the rapid clearance and enhanced hepatic accumulation of PEGylated liposomes [4]. Wang et al. also reported that the ABC phenomenon existed even in rats pretreated with conventional liposomes, which are not TI-2 antigens [5]. Wang et al. further reported that in rats whose complement had been depleted, the ABC phenomenon was not entirely eradicated [6]. Hence, in addition to acquired immunity, other contributors can induce the ABC phenomenon. Kupffer cells (KCs), which are important cellular components of the innate immune system, are considered to be largely responsible for the cellular uptake of nanoparticles in the liver [7]. KCs, as a type of antigen-presenting cell (APC), provide a bridge between the innate and adaptive immune systems. In previous studies, we found that KCs-targeted liposomes could induce a stronger ABC phenomenon than PL in normal rats, which preliminarily demonstrated the role of KCs in the ABC phenomenon [8]. In this study, we further explored the effects of KCs depletion on the ABC phenomenon.
Bisphosphonate liposomes have been reported to deplete macrophages through liposome-mediated intracellular delivery of the bisphosphonate clodronate [9,10]. Liposomes serve as Trojan horses to introduce the small bisphosphonate molecules into cells. Once taken up by cells, the phospholipid bilayers of the liposomes are destroyed by lysosomal phospholipases. The hydrophilic bisphosphonate molecules released inside the cells cannot escape, because they cannot easily traverse the cell membrane. As more liposomes are taken up and digested, the intracellular bisphosphonate concentration rapidly increases. As a result, at a certain bisphosphonate concentration, irreversible damage occurs and promotes macrophage apoptosis [11,12]. Alendronate sodium (AD) liposomes were demonstrated to be a better preparation for depleting macrophages than clodronate liposomes in previous studies [13][14][15].
Mannose/fucose derivatives have been widely applied to target mannose/fucose receptors [16][17][18], which are widely expressed on the membrane of KCs [19,20]. In our previous study, the 4-aminophenyl-α-D-mannopyranoside (APM) lipid derivative DSPE-PEG2000-APM (DPM) and the 4-aminophenyl-β-L-fucopyranoside (APF) lipid derivative DSPE-PEG2000-APF (DPF) were synthesized and used to modify liposomes, which proved to be an efficient carrier for targeting KCs [8]. In this study, DPM/DPF was modified on AD-loaded liposomes to obtain AD-MFPL, which showed a stronger ability to deplete KCs both in vitro and in vivo. The effect of Kupffer cell depletion on the ABC phenomenon induced by PEGylated liposomes (PL) and dual-ligand modified liposomes (MFPL) was evaluated by examining the pharmacokinetics and biodistribution of the liposomes. The results indicated that KCs play an important role in the elimination of nanoparticles and a certain role in the ABC phenomenon, although anti-PEG IgM plays the major role compared with KCs. This study addresses the role of innate immune cells (KCs) in the ABC phenomenon and offers a complement to its classical mechanism.
Cells and animals
KCs were obtained from Guangzhou Jennio Biotech Co., Ltd.
Synthesis of DPM and DPF
DPM and DPF were synthesized as described in Fig. 1. In brief, APM/APF and DSPE-PEG2000-NHS (molar ratio 10:1) were dissolved in freshly distilled N,N-dimethylformamide (DMF). The mixture was agitated for 24 h at room temperature. The resulting DPM and DPF were purified by dialysis against purified water using a dialysis bag of 1 kDa MWCO to remove unreacted APM/APF, and subsequently analyzed by 1H NMR (Bruker, 600 MHz).
Preparation of liposomal AD
Liposomes were prepared by the lyophilization-hydration method [21]. In brief, the lipids (HSPC, CH, DPM and DPF at a molar ratio of 6:2:1:1) were dissolved in t-butanol and lyophilized overnight. The lyophilized cake was hydrated with an aqueous solution containing AD at 65 °C for 30 min with rapid stirring. The suspension was then extruded three times through polycarbonate membranes (Nucleopore, CA, USA) of 0.8, 0.4 and 0.2 μm pore sizes using a thermobarrel extruder (Northern Lipids, Inc., Vancouver, Canada). The obtained liposomes were purified by dialysis: they were placed into a dialysis bag with a cut-off molecular weight (MW) of 8000 Da and dialyzed against 5% glucose solution for 24 h to remove free AD.
Detection of encapsulation efficiency (EE%)
After AD loading, the AD-loaded liposomes were sampled and the free AD was eliminated by Sephadex G-50 chromatography. The EE% was then calculated as the ratio of liposomal AD to total AD content. In brief, 100 μl of each sample was loaded onto a Sephadex G-50 microcolumn and subsequently eluted with purified water. AD content was assessed by spectrophotometric assay of its complex with Cu2+ at λ = 240 nm [22].
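A minimal sketch of this calculation (the masses are hypothetical; concentrations are assumed to come from the Cu2+-complex absorbance assay via a standard curve):

```python
def encapsulation_efficiency(liposomal_ad_mg, total_ad_mg):
    """EE% = (AD recovered in the liposomal fraction / total AD) * 100."""
    return 100.0 * liposomal_ad_mg / total_ad_mg

# Hypothetical masses from the column fractions of one batch
print(encapsulation_efficiency(0.21, 1.00))  # -> 21.0, i.e. roughly the ~20% EE reported
```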
Particle size distribution and zeta-potential
The particle size distribution and zeta potential of the AD-loaded liposomes were determined by a NICOMP™ 380 submicron particle analyzer (Particle Sizing System, CA, USA).
Morphology of liposomes
The morphology of the AD-loaded liposomes was observed by transmission electron microscopy (TEM) [23]. Formulations were diluted with purified water and placed on a formvar-coated copper grid (300-mesh, hexagonal fields). The samples were air-dried at 25 °C and the excess preparation was removed. Afterwards, the liposomes adhering to the copper grid were negatively stained with phosphotungstic acid. Finally, the sample was allowed to air-dry overnight at room temperature before measurement.
In vitro release of AD from liposomes
The rate at which AD was released from the liposomes was measured as a function of time: the liposomes were placed into dialysis bags (10 kDa MWCO) and dialyzed against 5% glucose solution while incubated in an orbital shaker at 37 °C [24]. At time points of 30, 60, 120, 240, 360, 480, 720, 1440, 2160 and 2880 min, 0.5 ml of release medium was removed and replaced with fresh 5% glucose solution. The AD content of the release media was assessed by spectrophotometric assay of its complex with Cu2+ at λ = 240 nm [22].
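Because 0.5 ml of medium is withdrawn and replaced at every time point, the cumulative release should be corrected for the AD removed by earlier sampling. A minimal sketch of that bookkeeping (the total volume, sample volume and dose below are assumptions for illustration):

```python
import numpy as np

V, v, dose_mg = 50.0, 0.5, 2.0                      # medium (ml), sample (ml), AD dose (mg): assumed
conc = np.array([0.004, 0.007, 0.011, 0.015])       # measured AD conc. (mg/ml), hypothetical

released_pct = []
for n in range(len(conc)):
    corrected = conc[n] + (v / V) * conc[:n].sum()  # add back AD lost to earlier withdrawals
    released_pct.append(100.0 * corrected * V / dose_mg)
print(np.round(released_pct, 1))                    # cumulative % of the dose released
```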
In vitro cellular toxicity of liposomal AD
In vitro cytotoxicity of the AD liposomes was evaluated by CCK8 assay. In brief, KC cells were plated at a density of 5 × 10^4 cells/well in 96-well plates. After 12 h of adherence, the KCs were incubated for 24 h with 10 μl of the different AD liposomes at AD concentrations ranging from 1 to 100 μM. The cells were then incubated with 10 μl of CCK8 solution for an additional 2 h. The absorbance was measured at 450 nm using a Microplate Reader-SpectraMax M3 (Molecular Devices Instrument Co., Ltd., US). The half-maximal inhibitory concentrations (IC50) were calculated to compare the cytotoxicity of the various AD-loaded preparations.
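IC50 values of this kind are commonly obtained by fitting a four-parameter logistic curve to the viability data; the source does not state which fitting method was used, so the sketch below, with hypothetical data, is only one reasonable way to do it:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

conc = np.array([1, 3, 10, 30, 100], dtype=float)   # μM, hypothetical dilution series
viab = np.array([95, 80, 55, 30, 12], dtype=float)  # % viability, hypothetical

params, _ = curve_fit(four_pl, conc, viab, p0=[100, 0, 10, 1], maxfev=10000)
print(f"IC50 ~ {params[2]:.1f} μM")                 # concentration at half-maximal inhibition
```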
Amounts of KCs in liver after depletion
Rats were administered AD liposomes intravenously (via the tail vein) at 10 mg/kg to deplete KCs [25,26]. After 48 h, the amount of KCs remaining in the liver was determined by flow cytometry. In brief, liver cell suspensions were prepared by the in situ liver perfusion method [27]. Rats were anesthetized with 10% chloral hydrate and sterilized with 70% ethanol solution.
The abdominal cavity was opened, and the portal vein and inferior vena cava were exposed. A 23 G butterfly needle (wings cut) was cannulated into the portal vein. The lower part of the inferior vena cava was incised after 2 ml of D-Hank's solution had been perfused. D-Hank's solution was then perfused at a flow rate of 7 ml/min. After 15 min of D-Hank's perfusion, 0.05% collagenase IV solution was perfused for 10 min at a flow rate of 10 ml/min. The liver was then removed from the abdominal cavity. Liver cell suspensions were flushed from the liver using PBS solution and filtered through a 100-μm cell strainer. The suspensions were centrifuged at 1000 rpm for 5 min and then diluted to a concentration of 1 × 10^6 cells/ml. KCs in the liver cell suspensions were stained for 30 min at 4 °C in the dark with anti-CD163 PE. The suspensions were washed three times with PBS, and the fluorescence of the cells was detected by flow cytometry.
Pharmacokinetic studies of a single intravenous injection of liposomes
To determine the effect of KCs depletion on the clearance of liposomes, a pharmacokinetic study of a single intravenous injection of PEGylated liposomes was performed on male Wistar depleted rats and normal rats. Depleted rats were established by administering AD liposomes intravenously (via the tail vein) at 10 mg/kg. Pharmacokinetic studies were performed on the depleted rats 48 h after the AD-MFPL injection.
Briefly, depleted rats and normal rats (as controls) were each divided into two groups (3 rats per group) and intravenously administered DiR-PL or DiR-MFPL at a dose of 0.65 mg DiR/kg. At time points of 0.017, 0.083, 0.25, 0.5, 1, 4, 8, 12 and 24 h after injection, 0.5 ml of blood was obtained through the orbital sinus and placed into microcentrifuge tubes pretreated with the anticoagulant sodium heparin. Blood was centrifuged (4500 rpm, 10 min) to prepare plasma. The plasma was mixed with ethanol and then centrifuged at 10 000 rpm for 10 min. Next, the supernatant (200 μl/well) was added to a 96-well plate and measured photometrically on the Microplate Reader-SpectraMax M3 with excitation/emission wavelengths of 750/790 nm, respectively.
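Converting these fluorescence readings into plasma DiR concentrations requires a standard curve; the paper does not show this step explicitly, so the following is a minimal sketch with hypothetical calibration values:

```python
import numpy as np

std_conc   = np.array([0.05, 0.1, 0.2, 0.4, 0.8])       # μg/ml DiR standards (hypothetical)
std_signal = np.array([120, 235, 480, 950, 1900])        # fluorescence units (hypothetical)
slope, intercept = np.polyfit(std_conc, std_signal, 1)   # linear calibration fit

sample_signal = np.array([640, 410, 150])                # plasma supernatant readings
sample_conc = (sample_signal - intercept) / slope        # invert the standard curve
print(np.round(sample_conc, 3))                          # μg/ml DiR in plasma
```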
Pharmacokinetics of subsequently injected PEGylated liposomes
To study the effect of KCs depletion on the ABC phenomenon induced by KCs-targeted liposomes, the KCs-targeted liposomes MFPL were used for a single intravenous injection in depleted rats and normal rats. In other words, the depleted rats were intravenously injected with blank MFPL or PL at 15 μmol phospholipids/kg as the first administration (normal rats, as controls, were treated with the same procedure). After 7 d, all groups were injected intravenously with DiR-PL at the same dosage. At specified time points after administration, blood samples (0.5 ml) were collected and processed using the same procedure described in 2.9.1.
Bio-distribution of liposomes in KCs depletion rats
After all blood samples had been collected at the specified time points, the rats of all groups were euthanized, and the livers and spleens were dissected and washed in 0.9% NaCl solution. Tissue samples were treated as follows: 200 μl of homogenate (equivalent to 0.1 g tissue) was mixed with ethanol (800 μl). The mixture was vortexed for 5 min and centrifuged at 10 000 rpm for 10 min. The supernatant (200 μl/well) was added to a 96-well plate and measured photometrically on the Microplate Reader-SpectraMax M3 at excitation 750 nm/emission 790 nm to determine the biodistribution of the liposomes in the different organs.
Detection of anti-PEG IgM antibodies [2]
mPEG2000-DSPE ethanol solution (0.56 mg/ml) was added to a 96-well plate (50 μl/well). After drying at 25 °C, the plate was blocked with 100 μl of 1% BSA solution (dissolved in Tris buffer). Each well was washed three times with 0.1% BSA Tris buffer after 1 h of blocking. The serum collected from rats was then diluted 100-fold with Tris buffer containing 1% BSA, added to the 96-well plate, and incubated for 1 h, after which each well was washed five times. Horseradish peroxidase-conjugated goat anti-rat IgM (Bethyl Laboratories Inc., TX, USA) was then added to the 96-well plate (100 μl/well) at a concentration of 1 μg/ml. The plate was subsequently developed and the absorbance (OD) was read to quantify anti-PEG IgM.
Statistical analysis
Statistical differences were evaluated by Student's t-test with SPSS software. A P value of <0.05 was considered statistically significant, and P < 0.01 was considered highly significant.
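For illustration, the kind of two-group comparison described here can be reproduced as follows (the AUC values are hypothetical, and scipy is used in place of SPSS):

```python
import numpy as np
from scipy import stats

group_a = np.array([12.1, 13.4, 11.8])  # hypothetical AUC values, 3 rats
group_b = np.array([8.2, 7.9, 9.1])     # hypothetical AUC values, 3 rats
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}")      # p < 0.05 significant; p < 0.01 highly significant
```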
Synthesis and characterization of DPM and DPF
DPM and DPF were synthesized as described [8]. The synthesis routes of DPM and DPF are shown in Fig. 1. 1H NMR was used to characterize the structures of DPM and DPF (Fig. 2). The characteristic peaks of DPM/DPF (the benzene ring, PEG and DSPE structures) appeared simultaneously in Fig. 2, proving that DPM/DPF were successfully synthesized.
Characterizations of the liposomal-AD
Studies have shown that numerous factors can greatly influence the in vivo behavior of liposomes; hence, characterization of liposomes is essential [28]. Table 1 summarizes the key physicochemical properties of the AD-loaded liposomes. The results indicated that the particle sizes of the prepared liposomes were about 200 nm and that the zeta potentials varied between -20 and -30 mV. The encapsulation efficiencies of AD in the prepared liposomes were about 20%. The morphology of the formulations was further characterized by TEM (Fig. 3). As shown in Fig. 3, AD-PL and AD-MFPL displayed a nearly spherical shape with the representative structure of a phospholipid bilayer, consistent with previous studies [29,30]. The liposomes were homogeneous in size and slightly aggregated because of the evaporation of moisture during preparation of the sample.
In vitro release assay
The in vitro AD retention characteristics of the different formulations were investigated using the dialysis method. As shown in Fig. 4, the release of AD from the nanoparticles was biphasic; a typical burst release phase was followed by a slower release phase. AD solution completely permeated the dialysis bag within 8 h, whereas all liposomal formulations showed comparatively little AD leakage through the dialysis bag. Therefore, the liposome can be used as a drug reservoir. There was no obvious difference in AD retention properties between AD-PL and AD-MFPL over 48 h. This result suggests that the DPM and DPF co-modified liposomes (AD-MFPL) possess the same stable drug retention characteristics as AD-PL.
In vitro cytotoxicity (CCK8) assay
The cytotoxicity of the AD-loaded liposomes was tested in KC cells. For all AD preparations, cell viability decreased as the AD dose increased. The IC50 values of the various preparations are presented in Table 2. AD-MFPL showed a lower IC50 than AD-PL, indicating a stronger inhibitory effect (Fig. 5) and suggesting that the modification of DPM and DPF on the phospholipid bilayer improved cell inhibition compared with the unmodified formulation.
Fig. 5 - The inhibition ratio of different AD liposomes on KCs. Each value represents mean ± SD, n = 6.
Depletion of KCs
Two different AD liposomes (10 mg/kg) were used to deplete KCs. Anti-CD163 antibody was used as the fluorescence marker; therefore, PE+ denotes KCs in the liver and PE- denotes other liver cells. The amount of KCs in the liver cell suspension was evaluated by flow cytometry. The relative ratio of KCs in the liver cell suspension decreased from 19.4% in controls to 8.19% and 3.76% when AD-PL and AD-MFPL were administered, respectively (Fig. 6). When AD-MFPL was applied, 80% of the KCs in the liver were depleted (Fig. 6 D). These results directly show that the modification of DPM/DPF on the lipid bilayer can increase the specific uptake of liposomes by KCs, which may be attributed to the high expression of mannose receptors on the surface of KCs [19,20]. The established depletion rat model was used to determine the influence of KCs depletion on the ABC phenomenon in the following experiments.
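A quick arithmetic check of the depletion efficiencies implied by these flow-cytometry ratios:

```python
baseline, ad_pl, ad_mfpl = 19.4, 8.19, 3.76  # % KCs in the liver cell suspension
for name, value in [("AD-PL", ad_pl), ("AD-MFPL", ad_mfpl)]:
    print(f"{name}: {(1 - value / baseline) * 100:.1f}% of KCs depleted")
# AD-PL -> ~57.8%; AD-MFPL -> ~80.6%, consistent with the reported 80%
```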
Effect of KCs depletion on the pharmacokinetics and biodistribution of a single intravenous injection of liposomes
To study the targeting ability of MFPL and the effect of KCs depletion on a single intravenous injection of liposomes, DiR was used as a fluorescent probe to label the liposomes for biodistribution and pharmacokinetic studies. In other words, biodistribution and pharmacokinetic studies of PL and MFPL were performed on Wistar rats before and after depletion, respectively.
Fig. 6 - The relative ratio of KCs in the liver cell suspension in vivo at 48 h after AD injection. (A) Rat administered normal saline intravenously; liver cell suspension incubated with isotype control as negative control. (B) Rat administered normal saline intravenously; liver cell suspension incubated with anti-CD163 PE. (C) Rat administered AD-PL intravenously; liver cell suspension incubated with anti-CD163 PE. (D) Rat administered AD-MFPL intravenously; liver cell suspension incubated with anti-CD163 PE.
In normal rats, MFPL showed a higher accumulation in the liver than PL (P < 0.01), which directly proved that MFPL has a stronger targeting ability to KCs (Fig. 7 B). When KCs were not depleted, a significant difference between the DiR concentration-time curves of PL and MFPL was evident (Fig. 7 A). This is because DiR-MFPL targets KCs in the liver more strongly and is therefore cleared more quickly from the blood.
However, in the rats depleted of KCs, the clearance of both liposomes was slower in vivo, further illustrating that KCs play an important role in the clearance of nanoparticles by the liver [7]. In another study, Tsoi et al. [31] showed that the flow rate of nanomaterials slows by a factor of 1000 in the liver, increasing interaction with and uptake by KCs. In addition, in depleted rats there was no significant difference between the concentration-time curves of the dual-ligand modified liposomes and the ordinary liposomes (Fig. 7 A). Likewise, as shown in Fig. 7 B, there was no statistical difference between the hepatic distribution of MFPL and PL in depleted rats (P > 0.05). This indicates that when KCs are depleted, the targeting role of the dual-ligand modification is lost.
In addition, the concentration of anti-PEG IgM was measured 7 d after the first injection of PL and MFPL in depleted and normal rats. When the IgM concentrations of the experimental groups were calculated, the OD value of the blank control (data not shown) was subtracted as background. After depletion, the concentration of IgM was significantly elevated (P < 0.01).
Effect of depletion on ABC phenomenon induced by PL
The influence of KCs depletion on the ABC phenomenon was clearly observed 7 days after the first injection of PL. The ABC index, introduced by Ishihara et al. [32] to assess the intensity of the ABC phenomenon, is calculated as the AUC(0-t) of the second injection divided by that of the first injection. In this experiment, the ABC index was calculated as the AUC(0-1 h) of the subsequently injected PL divided by the AUC(0-1 h) of the singly injected PL. A lower index indicates faster elimination of the liposomes and a stronger ABC phenomenon. As shown in Fig. 8 and Table 3, whether in depleted rats or in normal rats, the first injection of PL decreased the circulation time of the second injection of PL compared with the control group, confirming that the ABC phenomenon still exists in KCs-depleted rats. After depletion of KCs, the first injection of PL resulted in faster blood clearance of the second injection of PL (Table 3). Moreover, the distribution of the secondarily injected PL in the liver and spleen was significantly increased in KCs-depleted rats (P < 0.01).
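A minimal sketch of the ABC index computation, using trapezoidal integration over the 0-1 h window with hypothetical plasma concentration data:

```python
import numpy as np

def auc_trapezoid(c, t):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))

t = np.array([1, 5, 15, 30, 60]) / 60.0              # sampling times (h)
c_first  = np.array([20.0, 18.5, 16.0, 13.2, 10.5])  # single injection, hypothetical (μg/ml)
c_second = np.array([19.0, 12.0, 6.5, 3.1, 1.2])     # second injection after pretreatment, hypothetical

abc_index = auc_trapezoid(c_second, t) / auc_trapezoid(c_first, t)
print(f"ABC index = {abc_index:.2f}")  # < 1 -> faster clearance, i.e. a stronger ABC phenomenon
```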
It has been reported that anti-PEG IgM secreted by splenic marginal zone B cells is responsible for the ABC phenomenon [33,34] . Therefore, the results described above might be attributed to the fact that depletion of KCs prolonged the circulation of PL after the first injection, repeatedly stimulating B cells in the marginal zone of the spleen and causing them to secrete more anti-PEG IgM antibodies ( Fig. 7 C), which in turn enhances the ABC phenomenon. This also illustrates that anti-PEG IgM plays a major role compared with KCs.
Effect of depletion on ABC phenomenon induced by MFPL
As shown in Fig. 9 , in both normal rats and KCs-depleted rats, pretreatment with either PL or MFPL induced the ABC phenomenon and enhanced hepatic and splenic accumulation, which is consistent with the conclusion in Section 3.6.2 .
Interestingly, in normal rats, despite the same IgM concentration ( Fig. 7 C), pretreatment with MFPL led to faster blood clearance ( Fig. 9 A) and a higher hepatic accumulation ( Fig. 9 B) of the second injection of PL than pretreatment with PL ( P < 0.05). The IgM concentration suggested that splenic marginal zone B cells were not responsible for the faster clearance of the subsequent PL injection ( Fig. 7 C). Pretreatment with MFPL induced a stronger ABC phenomenon than pretreatment with PL when the ABC index was calculated ( Table 3 ), which illustrates that, at the same IgM concentration, the liposomes of the first MFPL injection targeted the KCs, stimulated the liver more, and thus induced a stronger ABC phenomenon.
However, in KCs-depleted rats, this difference in the ABC phenomenon between MFPL and PL pretreatment no longer existed. The pharmacokinetics ( Fig. 9 C), biodistribution ( Fig. 9 D), and anti-PEG IgM concentration ( Fig. 7 C) became almost identical. In KCs-depleted rats, the PL-PL group and the MFPL-PL group induced the same intensity of the ABC phenomenon when the ABC index was calculated ( Table 3 ). This may be because the KCs were almost completely depleted, so MFPL lost its original targeting role and behaved the same as PL in the ABC phenomenon.
In summary, the "difference" in the ABC phenomenon between MFPL-PL and PL-PL in normal rats and its "similarity" between MFPL-PL and PL-PL in KCs-depleted rats demonstrated that KCs participate and play a certain role in the ABC phenomenon.
Conclusion
Effects on the ABC phenomenon beyond the adaptive immune system and anti-PEG IgM objectively exist but have received little attention. In this report, the study of KCs depletion with singly injected PEGylated liposomes demonstrated the clearance function of KCs toward PEGylated liposomes in the body. The effect of KCs depletion on the ABC phenomenon induced by MFPL indicated that KCs participate and play a certain role in the ABC phenomenon. This study offers a complement to the classical mechanism of the ABC phenomenon and potentially provides direction for its solution.
Conflicts of interest
The authors report no declarations of interest. | 2019-04-10T13:11:54.299Z | 2018-09-04T00:00:00.000 | {
"year": 2018,
"sha1": "6f1d083dc96ad46e0ac8683ced2d8d18b944637f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ajps.2018.07.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "134e2cce72e62e1b84a8ab7ad01bb31212c9bc76",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
128304365 | pes2o/s2orc | v3-fos-license | The ATLAS Muon Trigger
Events containing muons in the final state are important for many physics analyses performed by the ATLAS experiment at the Large Hadron Collider. To collect such events, an efficient and well-understood muon trigger is required. The ATLAS muon trigger consists of a hardware-based and a software-based subsystem. In order to cope with the high luminosity and pileup conditions in Run 2, several improvements have been implemented to suppress the trigger rate while maintaining a high efficiency. Recent improvements include the addition of layers to the coincidence logic of the muon spectrometer and the optimisation of the muon trigger isolation requirement, among others. An overview of the algorithms deployed by the ATLAS muon trigger and its performance in 2018 data taking is presented.
Introduction
The ATLAS experiment [1] installed at the Large Hadron Collider (LHC) started Run 2 data taking in 2015 with a center-of-mass energy of √s = 13 TeV. The Run 2 data taking will continue until the end of 2018. The ATLAS trigger system is essential to efficiently select the events of high interest for physics analyses. Events containing muons in the final state are important for many analyses, such as searches for new particles and precision measurements of the Standard Model. An efficient muon trigger is vitally important to accumulate these events.
The ATLAS muon trigger in Run 2 is designed as a two-stage system that consists of a hardware-based trigger system (Level 1 muon trigger) and a software-based reconstruction system (High Level muon trigger). In order to cope with the high luminosity and pileup conditions in Run 2, several improvements have been implemented in the muon trigger to suppress the trigger rate while maintaining a high trigger efficiency.
The ATLAS muon trigger
The ATLAS muon trigger uses the information provided by the Muon Spectrometer (MS) and the Inner Detector (ID) of the ATLAS detector in order to select a high quality dataset of muons. The MS consists of four types of subdetectors with different purposes and three large air-core superconducting toroids as shown in Figure 1. Three layers of Resistive Plate Chambers (RPCs) in the central region (|η| < 1.05) and three layers of Thin Gap Chambers (TGCs) in the endcap regions (1.05 < |η| < 2.4) provide fast reconstruction of muon candidates for the Level 1 muon trigger. Three or two layers of Monitored Drift Tube chambers (MDTs) covering the central, the endcap and a part of the forward regions (|η| < 2.7) and one layer of Cathode Strip Chambers (CSCs) covering a part of the forward regions (2.0 < |η| < 2.7) provide precise track information for the High Level muon trigger and offline muon reconstruction.
Figure 1. Schematic drawing of one quarter cross-section of the muon system of the ATLAS detector [2].
The ATLAS muon trigger selects the events including muon candidates with transverse momentum p T greater than a predefined threshold. In Run 2, one of the primary high p T triggers consists of a Level 1 muon trigger with a 20 GeV threshold and a High Level muon trigger with a 26 GeV threshold.
Improvements of the Level 1 muon trigger for Run 2
The Level 1 muon trigger requires spatial and temporal coincidence on the hits in the RPCs and TGCs. The muon p T is estimated by using the degree of deviation from the hit pattern of an infinite momentum assumption [2]. In Run 1, the Level 1 muon trigger rates in the forward regions were polluted by low p T charged particles, which were considered to be mostly protons originating from beam background.
To suppress such fake events, an additional coincidence of the TGCs has been implemented in Run 2. It is based on the small-wheel TGCs, the forward inner (FI) and the endcap inner (EI) chambers placed in front of the endcap toroidal magnet in the part of the endcap regions (1.05 < |η| < 2.0) shown in Figure 1. Figure 2 shows the η distributions of the Level 1 muon trigger for a p T threshold of 20 GeV (L1_MU20) with and without the additional coincidence. The additional coincidence reduces the trigger rate by about 20% with efficiency losses below 1%.
Improvement of the High Level muon trigger for 2018
The High Level muon trigger in Run 2 is designed to have a two-step approach: a fast and a precise muon reconstruction. In the fast reconstruction, the muon p T is measured by using the hit information provided by the MDTs and CSCs with fast tracking provided by the ID. If the muons satisfy the requirements in the fast reconstruction, they proceed to the precise muon reconstruction. In the precise reconstruction, the muon p T is measured by using algorithms close to offline muon reconstruction [2].
An efficiency drop was observed in the isolated muon trigger with a p T threshold of 26 GeV (mu26_ivarmedium), which has been used as one of the primary triggers. The isolation requirement imposes an upper limit on the momentum fraction of additional tracks, in a cone of ∆R = √((∆ϕ)² + (∆η)²) < ∆R_cut around the muon, relative to the muon candidate's momentum [2]. In 2017, the trigger efficiency of mu26_ivarmedium dropped by about 4% for an average pileup of 60. This is due to a cone width requirement in the z direction, i.e. on ∆z, the difference between the longitudinal impact parameters z 0 (Figure 3) of the track and the muon. The previous ∆z requirement, ∆z < 6 mm, was too loose for the isolation computation: it picked up tracks from different vertices (i.e. pileup tracks), and thus discarded even isolated muons in high pileup conditions.
The ∆z requirement was optimised, and the trigger efficiency of mu26_ivarmedium was recovered. Table 1 shows the rates and efficiencies of mu26_ivarmedium for three ∆z requirements. As shown in Table 1, compared to the previous ∆z < 6 mm requirement, the trigger efficiency for ∆z < 2 mm recovers by 3%, with a small rate increase due to muons whose isolation momentum fraction is close to the threshold. Since the rate increase was acceptably low, the requirement ∆z < 2 mm has been used for the 2018 data taking.
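To illustrate the selection logic described above, the sketch below implements a generic track-based isolation: tracks within a cone ∆R < ∆R_cut of the muon are summed, tracks incompatible with the muon vertex are rejected by the |∆z| < 2 mm requirement, and the summed momentum fraction is compared with a threshold. Apart from the 2 mm ∆z cut quoted in the text, all names and cut values here are illustrative assumptions, not the actual ATLAS trigger implementation.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """dR = sqrt(dphi^2 + deta^2), with dphi wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(dphi, eta1 - eta2)

def is_isolated(muon, tracks, dr_cut=0.3, dz_cut_mm=2.0, rel_iso_cut=0.1):
    """Sum the pT of tracks near the muon that share its vertex (|dz| < dz_cut_mm)
    and require the sum to be a small fraction of the muon pT (illustrative cuts)."""
    iso_sum = 0.0
    for trk in tracks:
        if abs(trk["z0"] - muon["z0"]) >= dz_cut_mm:
            continue  # pileup-like track from a different vertex: ignore it
        if 0.0 < delta_r(muon["eta"], muon["phi"], trk["eta"], trk["phi"]) < dr_cut:
            iso_sum += trk["pt"]
    return iso_sum / muon["pt"] < rel_iso_cut

mu = {"pt": 30.0, "eta": 0.5, "phi": 1.0, "z0": 0.0}
trks = [{"pt": 2.0, "eta": 0.52, "phi": 1.05, "z0": 0.1},
        {"pt": 8.0, "eta": 0.48, "phi": 0.95, "z0": 5.0}]  # rejected by the dz cut
print(is_isolated(mu, trks))  # True: only 2 GeV of nearby pT survives the dz cut
```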
Efficiency measurement in the 2018 data
The efficiency of the muon trigger is evaluated with a tag-and-probe method using offline-selected Z → µµ events [5]. For the selection of the Z-boson sample, the invariant mass of a pair of oppositely charged muons is required to be consistent with the Z-boson mass within 10 GeV. If one of the two muons is reconstructed fulfilling the medium identification criteria [5], is matched to a trigger muon, and has p T > 25 GeV, it is a candidate for the tag muon. The other muon is a candidate for the corresponding probe muon. The muon trigger efficiency is defined as the fraction of probe muons matched to a trigger muon:

Efficiency = (Number of probe muons matched to a trigger muon) / (Number of probe muons). (1)
In Eq. (1), the effect of background contributions was found to be negligible. Figure 4 shows the absolute efficiency of L1_MU20, the absolute efficiency of the OR of mu26_ivarmedium and the High Level muon trigger with a p T threshold of 50 GeV (mu50), and the relative efficiency of the High Level muon trigger to the Level 1 muon trigger as a function of the offline muon p T . The trigger efficiencies are evaluated separately in the barrel and endcap regions, since the MS features different technologies and has different geometrical acceptance in each region. The muon trigger has high efficiencies for p T greater than the thresholds.
Figure 4. Absolute efficiency of L1_MU20 (black dots), absolute efficiency of the OR of mu26_ivarmedium and mu50 (red squares), and the efficiency of the OR of mu26_ivarmedium and mu50 relative to L1_MU20 (blue triangles) as a function of p T of offline muon candidates. The plots are shown for barrel (left) and endcap (right) [3].
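Since Eq. (1) is a simple counting ratio, it can be sketched in a few lines of Python; the probe sample below is a hypothetical example, not ATLAS data.

```python
def trigger_efficiency(probe_matched):
    """Eq. (1): fraction of probe muons matched to a trigger muon.
    `probe_matched` holds one boolean per probe muon."""
    return sum(probe_matched) / len(probe_matched) if probe_matched else float("nan")

# Hypothetical probe sample: 3 of 4 probe muons matched to a trigger muon.
print(trigger_efficiency([True, True, False, True]))  # 0.75
```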
Conclusion
The muon trigger system is essential for the physics program of the ATLAS experiment. For Run 2 data taking, several improvements were implemented in the muon trigger to cope with the high instantaneous luminosity and pileup conditions. At the Level 1 muon trigger, an additional coincidence of TGCs with the small-wheel TGCs, the FI and EI chambers, has been implemented. It reduces the trigger rate by about 20% in the region 1.05 < |η| < 2.0. At the High Level muon trigger, the efficiency of the isolated muon trigger was recovered by 3% by optimizing the ∆z requirement. In 2018 data taking, the high efficiency of the muon trigger has been validated by a tag-and-probe method using Z → µµ events. The ATLAS muon trigger has shown excellent performance and has operated smoothly in 2018.
"year": 2019,
"sha1": "bdbb8d3838320f2545e917c93d40046e2cc2dea0",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2019/19/epjconf_chep2018_01009.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "00cd7931291f56cadac65aeeb1319562fec7323b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
252155108 | pes2o/s2orc | v3-fos-license | Patient Experience Ratings: What Do Breast Surgery Patients Care About?
Introduction Patient experience is essential to overall care; physicians often receive patient reviews evaluating their consultation encounters. Patient experience surveys can be a helpful tool to identify areas to target for improvement. We sought to evaluate what factors influenced breast surgery patients' reviews of their clinic visits. Methods Prospective surveys from 2018-2020 were reviewed from a single institution. Surveys were sent to all patients within 48 hours after visiting one of our breast surgery clinics, and patients were asked their preferred mode of contact for the survey. Patients responded to surveys with scores of 0-10, with 0 as "not likely" and 10 as "extremely likely" to recommend the provider's office. Scores 0-6 were considered negative, 7-8 neutral, and 9-10 positive. Positive/negative comments from patients were reviewed and classified according to mention of surgeon, clinic staff/team, clinic processing, and facility amenities. Results 744 out of 2205 patients contacted responded to the survey, resulting in a 33.7% response rate. Of this cohort, 47.6% (354/744) were new patients, and 52.4% (390/744) were established patients. Interactive voice response (IVR) and email, the patients' preferred modes of survey communication, had the highest response rates. The average patient score was 9.5. Most ratings were positive (91.3%, 679/744), followed by neutral ratings (5.2%, 39/744); negative ratings made up 3.5% (26/744). Of those who responded, 47.7% (355/744) left a comment with their score. Surgeon-specific remarks were often noted in positive comments, followed by clinic staff/team comments. Negative comments most commonly referenced clinic processes. Conclusion Patient satisfaction surveys provide a window into creating the best patient experience. Further efforts to address the factors affecting patient experiences should be made to continue improving patient care.
Introduction
The patient experience describes an individual's experience of illness or injury and how healthcare treats them. Surveys and satisfaction scores to assess the patient experience have become increasingly important factors in evaluating performance in healthcare. Recently, patient experience grading has even been used as a reflection of the future viability of healthcare organizations [1]. Patient experience data can detect essential areas to target for improvement in hospital systems [2]. Several studies indicate that patients with higher satisfaction levels may show increased adherence to recommended medical therapies, better clinical outcomes, fewer patient safety issues within hospitals, and lower use of healthcare resources [3]. A study on inpatient mortality in acute myocardial infarction found that higher patient satisfaction was associated with improved guideline adherence and lower inpatient mortality rates [4]. Therefore, patient experience measures, when used correctly, can be appropriate quality measures that complement clinical performance evaluations [3].
Furthermore, patient experience reviews increasingly influence physician performance evaluation [1]. Though it was initially uncertain whether this was an appropriate method to assess the quality of physician performance, several studies have shown that improved physician-patient communication and perception are influential in achieving better clinical results [5][6][7].
The comprehensive management of breast surgery patients has become increasingly complex. This multileveled care process includes tailored discussions of complicated subject matters with multiple providers, some of which may be outside the breast surgeon's influence.
Database
A review of prospective breast surgery patient experience surveys collected from January 1, 2018 - December 31, 2020, was conducted from a single healthcare network encompassing three hospital sites and six breast surgeons. This included benign and breast cancer patients. Breast surgeon genders were identified as five females and one male, and breast surgeon race was reported as four Caucasian and two African American. Breast surgeon years in practice ranged from 0-5 years to >20 years. The study was deemed IRB (institutional review board)-exempt by the Indiana University School of Medicine (Protocol #: 11142).
Survey collection
Institutional surveys were sent to all new or established patients within 48 hours following their visit to one of our outpatient breast surgery clinics. Patients could only receive a survey once every seven days across different hospital survey locations and only once every 90 days from the same hospital location. Patients were asked at their appointment check-in if they preferred email, text (SMS), or an interactive voice response (IVR) telephone call for their survey. Surveys were sent via the patient's preferred mode; two attempts were made for email and SMS and three for IVR.
The list of survey questions is as follows: 1) On a scale of 0-10, where 0 is not likely and 10 is extremely likely, how likely is it that you would recommend this hospital (provider office) to a friend or family member? a) What is the primary reason for your score?
2) Did we spend enough time to discuss what matters most to you?
3) Do we make it easy for you?
Scoring
Scores were recorded from 0-10, with 0 as "not likely" and 10 as "extremely likely" to recommend the provider's office to a friend or family member. Scores 0-6 were considered negative, 7-8 neutral, and 9-10 positive, consistent with the Net Promoter Score (NPS) system [8]. Comments from patients for their given score were reviewed and classified into four categories: surgeon, clinic staff/team (includes nurses, midlevels, oncology clinic staff), clinic processing (i.e., wait times, check-in process, check-in/front desk personnel), and facility amenities (i.e., ease of parking, location) as indicated. If comments had more than one category mentioned, both were recorded and included as applicable. Comments that were vague or did not easily fall into one of the four categories were excluded from the analysis.
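As a small illustration of the score bucketing described above (0-6 negative, 7-8 neutral, 9-10 positive, following the NPS convention), a minimal Python sketch might look as follows; the function name is my own, not part of the study.

```python
def nps_category(score: int) -> str:
    """Map a 0-10 recommendation score to the NPS-style bucket used in this study."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score <= 6:
        return "negative"
    if score <= 8:
        return "neutral"
    return "positive"

print([nps_category(s) for s in (3, 7, 9)])  # ['negative', 'neutral', 'positive']
```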
Statistical analysis
Categorical variables were compared using chi-squared analysis. Descriptive data were presented as numbers and percentages. Statistics were done using SPSS Version 27. A P-value of ≤0.05 was considered statistically significant.
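The analysis was run in SPSS; for readers who prefer open tooling, an equivalent chi-squared test of comment category versus patient status can be run with SciPy. The 2x3 contingency counts below are placeholders for illustration only, not the study's data.

```python
from scipy.stats import chi2_contingency

# Rows: new vs established patients; columns: negative, neutral, positive comments.
# Counts are hypothetical placeholders, not the study's data.
observed = [[12, 18, 140],
            [14, 21, 150]]

chi2, p_value, dof, _expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
# A p-value <= 0.05 would be considered statistically significant in this study.
```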
FIGURE 5: Score Distribution
Of patients who responded to the survey, 47.7% (355/744) left a comment with their score. Examples of patient comments and how they were classified are shown in Table 1. Surgeon-specific remarks were most often noted among the positive comments, followed by mentions of the team or clinic staff. Negative comments were most commonly for clinic processes such as long wait times. Few comments were related to facility amenities such as ease of parking or clinic aesthetics (Figures 6-8).
TABLE 1: Examples of Comments and Categorization (specific surgeon names were replaced with "***" for anonymity)
Surgeon | Positive: "Dr. *** is a skilled surgeon and clearly communicates with her patients." / "Dr. *** was very professional, very friendly, and she answered every …"
Facility | Positive: "Everyone very nice, listened well, office easy to find." / "… everything was clean and the parking is free…" / "The office was a calm environment, very friendly and kind…" | Negative: "… Also was made to feel most uncomfortable because gown given to me was most likely an extra small …"
FIGURE 8: Distribution of Categories by Negative Comments
Chi-square analysis showed no difference in the likelihood of a negative, neutral, or positive comment from a new or established patient (p=0.291). No further statistical analysis could be performed stratifying positive and negative comments due to the small number of negative comments.
Discussion
Patient experience surveys of breast surgery patients in our system showed overall high levels of satisfaction with their experience. Surgeon-specific comments were the most common driver of positive experiences, contributing to 59% of the positive comments. This is a reassuring finding, as the physician-patient relationship continues to contribute to better clinical outcomes. A study by Chen et al. found that the more treatment outcomes discussed by physicians with patients, the higher the patient satisfaction ratings were at baseline and even in follow-up [9]. A similar finding was shown in the study by Ong et al., which confirmed that doctor-patient communication during oncology consultations was related to patients' quality of life and satisfaction [5]. Perhaps in breast surgery, an improved surgeon-patient relationship could theoretically translate into improved compliance and better outcomes. Kahn et al. showed that patient-centered care was a significant predictor of adherence to long-term tamoxifen use [7].
While physicians should continue to strive to serve patients in a supportive manner, caution is still warranted, as hyperawareness or overemphasis on patient satisfaction scores as the sole driver of physician performance could also have negative consequences. Overemphasis on patient experience as the main reflection of stellar professional performance could be inappropriate. As demonstrated by Li et al., an unintended consequence of patient satisfaction surveys was altering surgeon clinical practice beyond standard care to meet patient expectations. These actions could include unnecessary referrals, prescribing medications (such as opioids), and ordering additional imaging tests to avoid patient dissatisfaction [10]. Most surgeons reported that these changes did not ultimately result in any clinical changes in outcome or management [10]. Therefore, although patient satisfaction surveys may be a tool to aid in quality performance evaluations, they should still be used judiciously, as there may be unintended consequences if healthcare quality measurements are only reflected by patient perception. Additionally, the NPS scoring system may have limitations and not fully encompass a patient's experience, a limitation that applies to many patient experience surveys [8,11,12].
Clinic team interactions were frequently cited as cause for positive experiences, with 34% of positive comments noting their interactions with the clinic staff. This suggests that a breast surgeon's clinical team, such as their oncology nurses and mid-level providers, can influence patients' clinic visit experiences. Attention to selecting support staff who are empathetic and dedicated to the care of breast surgery patients seems to influence patient experiences for breast surgery patients positively.
Very few positive comments referred to the amenities of the hospital or clinic. This likely reflects that patients are focused primarily on the healthcare personnel treating their diseases rather than external factors.
Although occasional negative comments towards the surgeon or clinic staff were found in our review, most negative patient experience evaluations were due to clinic process issues (61%). Long wait times and poor check-in experiences were the most frequently cited comments for negative experiences in our study.
Unsurprisingly, patients prefer waiting times to be reasonably short [13]. However, it can often be difficult for breast surgeons to gauge the length of time needed for a new consultation because breast diseases and cancer range in complexity, and patient-driven discussions can be unpredictable. Studies regarding the psychology of patient wait-time experiences note some proactive techniques to mitigate the negative effect of wait times. These can include proactively informing patients of delays, apologizing for delays when they occur, and providing opportunities for diversion for the patient (i.e., magazines, pamphlets, and technology to allow patients to leave and come back when the doctor is ready) [14]. Understanding that wait times may be unavoidable in the care of breast surgery patients, implementing initiatives to ameliorate the negative effects on the patient may improve patient experiences.
Response rates were highest in patients aged 45-64, which may correlate with the median age of breast cancer in America. IVR, followed by email, yielded the highest percentage of patient responses. The findings of this study could guide future efforts to improve patient survey response rates.
Using our findings in this study could help institutions and physicians who treat breast cancer target high-yield areas to make the most positive impact on patient satisfaction. For instance, given that clinic processes often contribute to negative scores, efforts to minimize patient wait times or offer alternative options when physicians are late may be helpful interventions.
Limitations of this study include the single-institution database. Patients seen at our facility were mostly Midwestern, and generalizability to other geographic communities may be limited. Additionally, comments were subjective, and attempts at standardizing them into categories could be subject to author interpretation. As with all survey studies, the phrasing of the patient-directed questions may also inadvertently narrow response content.
Conclusions
Patient satisfaction surveys provide a window into creating the best experience for all patients. Based on our data for breast surgery patients, surgeons are the primary driver behind positive experiences, followed by team and staff interactions. Frustrations with clinic processing, such as long wait times, are the primary reason for dissatisfaction among breast surgery patients. Further efforts to address these factors affecting patient experiences should be made to continue improving patient care. As an emphasis on these scores can affect surgeons' quality metrics, a good understanding of these drivers is essential for healthcare systems.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Indiana University IRB issued approval 11142. NOTICE OF IRB REVIEW NOT REQUIRED Protocol #: 11142 Protocol Title: Patient Experience Ratings: What Do Breast Surgery Patients Care About? PI: Fan, Betty The above submission was reviewed and IU HRPP staff determined the project is not human subjects research and does not require further review. Please retain a copy of this email in your research records. You will not receive a separate approval letter. If you have any questions or require further information, please contact the HRPP via email at irb@iu.edu or via phone at (317) 274-8289. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-09-09T16:51:36.382Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "a8cea65449ef38836a29fb198c71cf2d557e1513",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/107475-patient-experience-ratings-what-do-breast-surgery-patients-care-about.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c53f63d472132a78d714cf0b67100be81cc41249",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
62703031 | pes2o/s2orc | v3-fos-license | Uncovering impacts: a case study in using altmetrics tools
Altmetrics were born from a desire to see and measure research impact differently. Complementing traditional citation analysis, altmetrics are intended to reflect more broad views of research impact by taking into account the use of digital scholarly communication tools. Aggregating online attention paid to individual scholarly articles and data sets is the approach taken by Altmetric LLP, an altmetrics tool provider. Potential uses for article-level metrics collected by Altmetric include: 1) the assessment of an article's impact within a particular community, 2) the assessment of the overall impact of a body of scholarly work, and 3) the characterization of entire author and reader communities that engage with particular articles online. Although attention metrics are still being refined, qualitative altmetrics data are beginning to illustrate the rich new world of scholarly communication, and are emerging as ways to highlight the immediate societal impacts of research.
Introduction
The future of scholarly communication is one in which a large part of scholarly communication is conducted online [3]. A key part of the scholarly communication lifecycle is trying to understand the impact of work. The process of understanding impact helps scientists, science administrators and others find, evaluate, and access scholarly products. Traditionally, this impact assessment has been done primarily through the tracking of formal citations. This is possible because citation counts, for all their occasional ambiguity [2], do reflect use of scholarly products. However, this reflection is of a restricted spectrum; scholarly products are often used by scholars, and others, in ways that do not perturb the citation record [5]. Furthermore, traditional citation does not reflect the rapid nature of communications afforded by the Web. Thus, we need new approaches for measuring impact in this changed world.
Indeed, because of Web-based scholarly communication, formerly "underground" uses like reading, bookmarking, sharing, discussing, and rating are beginning to leave online traces. These are becoming visible on Web pages [8,13], on blogs [6], in downloads [1,4], on social media like Twitter [9], and in social reference managers like CiteULike, Mendeley, and Zotero [7]. These alternatives to traditional citation analysis have been labeled altmetrics [11]. Altmetrics offer potential for gathering information on more diverse types of impact, from more diverse scholarly products, including blog posts, slides, datasets, or even tweets. They also have the important benefit of speed; altmetrics typically accumulate in days or weeks rather than the years citations require. This is particularly useful as the research process increases in pace and users of scientific content need to understand its impact rapidly. To begin to make practical use of altmetrics for measuring impact requires both a greater understanding of the properties and validity of these new metrics, and practical tools for obtaining them [10]. Others have begun the former [12]; here we will pursue the latter, presenting two new tools for gathering and presenting altmetrics.
Tools for Altmetrics: CitedIn and total-impact
CitedIn (http://citedin.org) and total-impact (http://total-impact.org) are open-source tools that receive as input a list of identifiers for scholarly products, and output a set of altmetrics for each product. CitedIn accepts only articles with PubMed IDs (PMIDs); total-impact accepts articles identified by PMID or DOI, but also datasets and slides using a variety of identifiers including URL, handle, and accession numbers. Both tools allow users to input identifiers manually; CitedIn also offers a REST API, and total-impact lets users automatically populate the products list using items stored in Mendeley or Slideshare libraries. Once users have uploaded products, CitedIn and total-impact both use calls to open Web APIs to gather data about them; CitedIn also caches available databases. As of September 25, 2011, the data sources used by each are listed in Table 1.
In addition to gathering altmetrics from these sources, both tools include some additional features. CitedIn lets users input and output data over a REST API, and also reports a "CI-number" that summarizes all altmetrics activity in a single value. Total-impact offers persistent URLs for impact report pages; the impact metrics can be refreshed over time. Both tools let users download results as structured text files for further analysis. Output pages for the tools are shown in Figures 1 and 2.
Case study: altmetrics for a national research center
We used a set of 214 articles from the National Evolutionary Synthesis Center (NESCent) as a realistic test for the two tools. NESCent was interested in tracking the impact of work they funded in a faster and more comprehensive way than citation analysis allowed - a typical use case for altmetrics. We entered the articles into CitedIn on August 14, 2011, and into total-impact on September 23, 2011, then collected and analyzed the results.
All 214 articles had DOIs, and so could be processed by total-impact. Only 174 articles had the PMIDs required by CitedIn, so the CitedIn sample is smaller. Both tools showed that altmetric activity as measured by the number of "altmetric events" (bookmarks, downloads, etc.) is relatively widespread across articles: CitedIn found at least one event on 95% of its articles, and total-impact on 85%. There were a mean of 28 and a median of 16 events per CitedIn article, with a maximum of 678. Total-impact had a per-article mean of 92 events and a median of 19; the higher mean is due to Dryad dataset downloads, which accumulate more easily than other metrics, reaching a maximum of 2769 on one article. We visualized the activity across articles using heatmaps, shown in Figures 3 and 4, to create a sort of "impact genome." Only altmetrics with nonzero counts are shown, and the counts of each altmetric are normalized by that metric's maximum. Articles are arranged so that those with higher mean event counts across all metrics are further left.
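The per-metric normalization behind the heatmaps (drop metrics with all-zero counts, divide each metric by its maximum, sort articles by mean activity so the most active are leftmost) can be sketched as follows; the array contents are illustrative, not the NESCent data.

```python
import numpy as np

# Rows = articles, columns = altmetric event types (hypothetical counts).
counts = np.array([[678, 3, 0, 12],
                   [ 16, 0, 0,  4],
                   [ 28, 1, 0,  9]], dtype=float)

active = counts[:, counts.max(axis=0) > 0]    # keep only metrics with nonzero counts
normalized = active / active.max(axis=0)      # scale each metric to [0, 1] by its max
order = np.argsort(-normalized.mean(axis=1))  # most active articles first (leftmost)
heatmap = normalized[order]
print(heatmap)
```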
Conclusion
Altmetrics have potential to improve the speed and breadth of scientific evaluation. CitedIn and total-impact are two tools in early development that aim to gather altmetrics. A test of these tools using a real-life dataset shows that they work, and that there is a meaningful amount of altmetrics data available for use. These tools continue to improve: check out the current versions for up-to-date capabilities.
The properties and validity of these data, however, are still unclear, and call for additional research. What is the scholarly value of, for instance, a Mendeley bookmark or a Wikipedia citation? Future work should also investigate how altmetrics for different sets of articles can be compared; this is a particularly tricky problem given the high dimensionality of altmetrics data, and may benefit from better visualization techniques, or statistical approaches like principal component analysis and factor analysis.
Fig. 4. Active total-impact event types and normalized event counts per article.
Table 1. Data sources for CitedIn and total-impact as of September 2011
"year": 2012,
"sha1": "2288edfdddbc8e1032886b66181deb8e3851cd08",
"oa_license": "CCBY",
"oa_url": "https://storage.googleapis.com/jnl-up-j-i-files/journals/1/articles/73/submission/proof/73-1-71-2-10-20141216.pdf",
"oa_status": "GOLD",
"pdf_src": "DBLP",
"pdf_hash": "2288edfdddbc8e1032886b66181deb8e3851cd08",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
195001981 | pes2o/s2orc | v3-fos-license | Design, Demos, Dialectics: Max Raphael’s Theory of Doric Architecture
The main focus of this paper is to examine the analysis offered of the Temple of Zeus at Olympia by Max Raphael in his study dedicated to the remains of the temple. The temple of Zeus at Olympia is often cited as the canonical example of Doric temple architecture and Raphael examines how a particular design can have such far ranging influence, to which end he elucidates the relationship of design to the activity of a participatory and democratic process specific to the Greek polis. By bringing to bear a highly dialectical analysis of the various forces at play in both construction and the elaboration of the temple, Raphael advances a brilliant interpretation which takes account of the social, spiritual and material dimensions at play and dissolves older academic understandings of the achievement of 'classical art'.
Design, demos & dialectics
This paper will look at a discussion on design, "demos" and dialectics in a remarkable series of studies conducted by the German theorist and philosopher Max Raphael, whose writing about the Doric temple will be its focus. More specifically it will examine the arguments on the Temple of Zeus in Olympia to which his study is largely dedicated. As this work is not available in English, nor his earlier published work on the Doric temple from 1930, I take the liberty to give extensive paraphrases of the German original in English. 1 I will also show that the analysis provided by Raphael allows one to understand what is meant by speaking of a dialectical method for the analysis of the design achievements of the Doric, and the role of the "demos" -the term in Greek refers to the people -in their collective and participatory democracy with regard to the religious, spiritual and social meaning of these temples. This paper also expands on my previous notices of Raphael's work in my Beauty and the Sublime (Healy 2006, 63-71) and an article for the inaugural number of Footprint, "Max Raphael, Dialectics and Greek Art" (Healy 2007, 57-77).
In the first part of this paper I will briefly indicate the reception of Raphael. In the Introduction, Herbert Read suggested that the little-known author had made "the most important contribution in our time to the philosophy of art" (Read 1968, xv). 3 In the following year, 1969, John Berger endorsed Read's judgement and bestowed high praise on Raphael's work. It was Berger's advocacy, in its evaluation, for example, of Frederick Antal and Max Raphael, which influenced the direct engagement with these authors-in the case of Antal, via Anthony Blunt at the Courtauld, and in the case of Raphael by the art theorist Jonathan Tagg. Tagg was in direct contact with the literary executor of Raphael, Claude Schaefer, in Paris. Tagg added considerably to the awareness of the range and extent of Raphael's work. 4 In the 1970s and 80s one can speak at the
Elizabeth Chaplin published Sociology and Visual Representation in 1994, and in the first part of the study (Chaplin 1994, 19-112) there is an extensive discussion of Raphael that is largely influenced by 1). For Raphael, the understanding of the Doric temple and the classical conception of the human situation was a matter of fascination to historians, not only for the impact such creations exerted on Rome and India, but for their influence on all subsequent revivals of antiquity. He hoped that the understanding of such achievements would help in efforts to transform the world. Understanding the making of this art would allow one to clarify a few facts that had been obscured by "the evolutionary prejudice prevalent in the historical sciences". 8 The task Raphael advances is to grasp the creative method and not simply describe the product of the imagination of classical man. In other words, the task is to understand the transforming actions of creation, which requires one not only to contemplate the "what", but also to reflect on and re-experience the "how". To that end, one must gain insight into the forces which, under the name of Greek art, or the classical, have so profoundly influenced history for reasons that, Raphael argues, remain largely unknown. He would also, inter alia, address the question of how the design of the Doric temple could be so paradigmatic over such a long period of time when the social and other conditions from which it emerged had changed. 9 Raphael opts to examine in detail a small number of works in order to clarify the method by which they were created and their historical background.
One dimension of the historical background suggests to him that the tradition, the ultimate Neolithic foundation, and its impact on Egypt was a hostile one, against which "nascent classical art had to assert itself." Raphael sets himself the task of solving the problem of the classical achievement, and thus provides a weapon against the irrationalism of the phenomenologists and existential philosophers, no less than against what he calls the pseudo-classical works from Raphael of Urbino to Ingres, and contemporary abstract artists, making the resounding claim that: "The heart of genuine classical art is dialectics, and it is one of the deepest ironies of history that the most dialectical of art should have come to be regarded as the most dogmatic, as the mother of the academic". 10 For Raphael, dialectical art cannot be imitated. It is the method by which it is created that deserves to be studied, not because it gives the direction to some new, third, or fourth, or fifth humanism, "but to a humanity that will for the first time in history be truly free."
There is another relation between the triangular pediment and the rectangular peristyle, which, if not directly perceivable, is rationally recognisable and felt in its effects. As mentioned, the two slanting lines of the pediment suggest two movements: one ascending, and one descending from corners to centre, from centre to corners. This is matched in the peristyle by the fact that the spacing between columns is greater at the centre than at the sides, and this leads to a structural paradox: the greatest height and, hence, heaviest part of the pediment is above the widest intercolumniation, where it receives its weakest support.
Raphael's contention is that the triangle that begins in the peristyle is completed in the pediment, and yet the pediment remains a part not only of the actual front, but also of the ideal triangle whose diagonals we obtain by extending the sides of the pedimental triangle. Thus, the actual triangle has become part of an encompassing ideal space that is not embodied in a material form, just as the space surrounding the structure below the pediment remains invisible. What can be derived from this is that the same basic attitude toward infinite space is expressed in the dimensions of both depth and height. The intention is to create a physical limitation, to express only a part of the whole, but also to express, at the same time, the whole in the part.
What is further argued is that, even in such a mental experiment, the upward movement of the column is counteracted by an ideal pressure originating outside the temple, at a level far above that of the entablature. Raphael, it is clear, uses this discussion to advance the strong thesis that one must reject the static conception of the Greek temple as a plastic, sculptural body without spatial dynamism, or the view of it merely as the solution to purely mechanical problems. In his rich array of arguments he wants to demonstrate how an artistic expression of broader, universal ideas takes place. So it is that the pediment as analysed must be looked upon as mediating between two forces; it must be looked on not merely as a static force, but as a field of opposing forces that has become form.
The central figure in the pediment continues the rising movement from below, but starts from a void. Therefore, it is not the continuation of the column. At the same time this figure, whose head is close to the apex of the pediment, is more exposed to the ideal pressure from above than to the force rising from below. For Raphael, the Greek temple embodies the dialectical interaction of antithetical forces of various kinds-spatial, physical, and intellectual-and in its architecture these forces are adequately embodied in a finite, enduring, and clearly articulated structural body, which is harmonious. When one understands such multiple forces, especially in respect to their role in shaping space, it is, as he argued in Der Dorische Tempel, possible to recognise the meaning of the whole. What Raphael discovers through his analysis are the fundamental principles which guide the design and making of the temple.
Staying with the pediment, however, the element to be most emphatically grasped is the element of time. The groups are arranged so that the action develops from centre to corners, which is the artistic action; whereas the real, referred-to action develops from the corners to the centre. Thus, artistic time abolishes real time, and yet the tension between the two is preserved. This shows how each column or figure that enters into relationship with other columns or figures is characterised: first, by its high degree of elaboration-a value of its own defined by the fact that the form of the column has significance that goes beyond its function or expression; secondly, by self-containment, independence, and self-assurance, suggesting nobility, self-reliance, and a free and self-confident individual who does not seek to dominate others and refuses to submit to others. And yet they change into their opposite and become part of a whole without resentment, without losing their individuality. This perfect balance between existence as an independent entity and existence as part of a community expresses both law and freedom.
The individual elements are linked together as much by these subtle similarities as by contrasts in the fullest sense of the word. If it be contrast between load and support, between solid and void, the concave and convex, we are in any case made to perceive both the actual polarity and the actual interlocking, as well as the imaginary principle which is the source of the oppositions. The Greeks did not know the direct transition from similar to similar that bridges the opposites and that which is embodied in the arch; they knew only the conflict of opposites that were originally united and strive to achieve definite unity.
Raphael goes on to assert that the relation obtaining between the whole and the parts is not one of direct dependence; the parts do not directly determine the whole, nor does the whole directly determine the parts. The absence of dependence and directness is made possible not by the presence of a hierarchy of mediations, but by the operation of a formal, mathematical principle which governs the geometric shape and the proportions both of the whole and of the parts, so that their harmony is achieved indirectly, and each preserves an appearance of freedom.
The principle here is not a transcendent power. Classical art is bound to marble to such an extent that one could almost say that without marble it would not exist. As an artistic medium it is halfway between poros and granite. In the purest variety of Parian marble, for example, the average size of the crystals is 1-1.5 mm (sometimes 2-3 mm). Because of its coarser and firmer crystalline structure, this marble is more transparent than many other varieties, and light penetrates it from and for a greater distance. In its natural state, light penetrates it and structures it.
The physical and spiritual worlds are not merely juxtaposed, but matter is spiritualised to the same extent as spirit is materialised. The interpenetration of form and light makes possible a synthesis between outer and inner worlds, between body and soul. Neither is reduced to sameness nor conceived as congruent, the two are embodied in the work; one as air and light-filled space, the other as intense human expression. In the unity of content and visual means of expression there is the completion of the constitution of the artistic unity.
Classical art ultimately works with bodies and forms. The classical artist shifts his system of coordinates in such a way that the deviation remains measurable. In sculpture, for example, the notion of the structural block is transformed into artistic space. The old square cross-section of the block has been replaced by a rectangular one, thus freeing the human figure from its subjection to the block.
Space is no longer seen as abstract opposition between full/empty, being/non-being, rather it is | 2019-06-19T13:23:46.840Z | 2018-04-01T00:00:00.000 | {
"year": 2018,
"sha1": "ab10fcaf63d8f9816d5d7ac570778394d962e63d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.31182/cubic.2018.1.006",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e5f5d530fe873d39388b1ca0ef987f4084b26856",
"s2fieldsofstudy": [
"History",
"Art"
],
"extfieldsofstudy": [
"Art"
]
} |
259239969 | pes2o/s2orc | v3-fos-license | Replantation using groin flap in thirty-four years old male with traumatic total degloving of little finger: A case report
Introduction and importance Degloving injuries of the hand or fingers have a devastating presentation which challenges the surgeon to conduct reconstruction in order to resurface the naked finger and recover its function. The gold standard treatment for degloving injuries is skin grafting and flaps. The groin flap is pedicled on the superficial circumflex iliac artery. It is one of the standard flaps used in the reconstruction of degloved fingers. In this study, we used a groin flap for the reconstruction of a traumatic total degloving of the little finger. Presentation of case This is a case of a 34-year-old man with total degloving of his left little finger after it was caught in a running cutting machine in a clothing factory. The patient was then brought to the Hasan Sadikin General Hospital. The patient underwent thorough debridement, preparation of the donor site, and groin flap coverage. After a week, the wound was in good condition with no signs of infection. Clinical discussion The groin skin flap is pedicled and vascularized by the superficial circumflex iliac artery. It can be considered an option for the treatment of single-finger degloving wounds because of its compliant nature and reliable vascularization. Despite this, it often results in a bulky appearance which needs to be reconstructed later. The conclusion Groin flaps are an appropriate method for managing degloved little fingers and are still cosmetically acceptable.
Introduction and importance
Degloving injuries are a type of avulsion injury frequently brought on by trauma. It takes highly trained orthopaedic surgeons to reconstruct the hand's surface adequately while maintaining the hand's movements and functions [1,2]. Degloving injuries are caused by rotational forces which stretch the skin and subcutaneous tissue, resulting in avulsion from the deeper, less mobile musculoskeletal structures. Degloving injuries can result from traumatic hand injuries such as traffic accidents (e.g., motor vehicle accidents), sports accidents, post-burn injuries or tumour excisions, and industrial accidents. Currently, the gold standard treatment for degloving injuries is skin grafting and flaps [1,3].
Degloving injuries in general are a challenge for even experienced reconstructive surgeons. A very strong shearing force to the hand is required to deglove the skin. Degloving mostly occurs at the weakest plane between the skin and deeper tissues. In the palm of the hand, this plane is usually on the surface of the palmar fascia. On the dorsal side, the skin is loosely attached and splitting occurs between the superficial fascia and the areolar tissue over the tendon. The dorsal veins are also usually included in the degloved skin [2,3].
The groin flap, the abdominal flap, the quadrant flap, the free vascularized flap, and the abdominal pocketing operation are common options for reconstructing a degloved finger [3,4]. During abdominal pocketing surgery, the wound is inserted into a subcutaneous abdominal pocket. In little finger degloving injuries, the abdominal flap has been employed for fixation [3,[5][6][7].
Making a space in the abdominal wall behind the subdermal plexus is one reconstructive strategy used in cases of finger degloving injuries. After the finger has been peeled, it is put into the pocket and straps are inserted under a surgical tube stopper (Insulok Ties, Tycon Co Ltd., Tokyo, Japan) [8][9][10]. A few days after the procedure, the thin flap starts to fuse with the wound. After 3 weeks, the finger is removed from the abdominal wall and the dorsum of the finger is covered with a thin flap [9].
The aim of this paper is to report the management of a traumatic total degloving of the little finger using a groin flap [6].
The presentation of case
We report a case of a 34-year-old man with total degloving of his left little finger. His finger was caught inside a running cutting machine in a clothing factory. The patient was then brought to Hasan Sadikin General Hospital in Bandung. Physical examination revealed a total degloving of the little finger of the left hand. Laboratory studies were within normal limits. A plain radiograph showed no bone discontinuity. The patient underwent thorough debridement, preparation of the donor site, and abdominal flap coverage. One week after surgery, the wound was in good condition, with no pus. Fig. 1 shows the pre-operative marking conducted in preparation of the flap. Fig. 2(a) and (b) shows the little finger of the left hand post-debridement, while Fig. 3(a) and (b) shows the condition of the dorsal and palmar little finger post-reconstruction with the abdominal flap. Fig. 4 (a), (b) and (c) shows the finger post-detachment at the second surgery.
Clinical discussion
The cause of degloving injuries is a strong twisting force that pulls the skin and underlying tissue apart. Although the avulsed skin may appear healthy, it does not truly reflect the extent of the injury. Because the dermis contains a rich vascular network, degloving injuries often lead to a loss of important blood vessels which compromises the blood supply to the injured tissue. This puts the viability of the tissue at risk and can result in venous congestion, increased vascular pressure, and necrosis. Attempts to reattach the skin often result in high rates of necrosis and can lead to severe wound infections [10,14].
Degloving injuries typically occur in industrial workers' hands and can be categorized as either complete or partial. While reconstruction with skin grafts or flaps is often used in non-amputated, single finger injuries, care must be taken to choose skin that is both functional and cosmetically acceptable. Abdominal flaps are a popular option due to their thin and pliable nature [11,13,14].
Treating degloving injuries is a challenging issue in hand surgery, and successful outcomes are typically achieved with regard to both function and aesthetics in finger injuries. Microsurgical replantation is effective in single and multiple-finger degloving injuries, although it is more difficult in cases where severe damage has occurred to skin, subcutaneous tissue, and vessels. For thumbs and ring avulsion injuries, repair may require the use of a free flap taken from the big toe or a distally based radial forearm flap or thinned groin flap if replantation is not feasible [6,14].
For some conditions, a random abdominal flap, free fascia lata myocutaneous flap, free omentum flap, or distally based radial forearm flap has been used as the primary treatment for multiple finger degloving injuries. The groin flap is based on the superficial circumflex iliac arteriovenous system. It is usually fashioned for soft tissue coverage of any part of the hand and the distal two-thirds of the forearm, in a bilobed Y pattern or other shapes to fit specific defects. Groin flaps are widely used as pedicled flaps in hand reconstruction because they can cover extensive defects without sacrificing a major artery or requiring end-to-end microvascular anastomosis. The indications for groin flaps include complex defects in children under the age of two, coverage of digital stump defects, electrical burns of the hand with vascular preservation, traumatic amputation, length preservation in multiple digit amputations in manual workers, and multiple deformities of the upper extremity. Contraindications include anatomical malformations, previous groin surgery, and cancer or radiotherapy in the groin area [9,11,16].
We used a groin flap in this case because it offers advantages such as vascular reliability and a commendable blood supply, and it is a simple and quick procedure, hence suitable for emergency cases [5,9,15]. Cosmetically, groin skin is of good quality and hairless, giving an appropriate appearance for the hand and fingers. In addition, the donor site scar can easily be hidden under clothing. The disadvantages of these flaps include a thick layer of subcutaneous fat, which results in a puffy and bulky shape; additional debulking operations are then required to correct the deformity, prolonging the hospital stay. Primary reconstruction of composite defects also cannot be done. These disadvantages can be greatly reduced by proper planning to avoid lengthy flaps and tubing [11,16].
Conclusion
Groin flaps are a cosmetically acceptable method for managing degloving injuries. They cause fewer intraoperative and postoperative problems as a result of their low donor site morbidity.
Consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
The work has been reported in line with SCARE 2020 criteria [12,17].
Ethical approval
Ethical approval was provided by the Ethical Committee of Hasan Sadikin General Hospital, Bandung, West Java, Indonesia, on January 5th, 2023 (ethical number 73/rsc/OT/I/23).
Funding
None.
Author contribution
Nucki
Adversarial Contrastive Predictive Coding for Unsupervised Learning of Disentangled Representations
In this work we tackle disentanglement of speaker- and content-related variations in speech signals. We propose a fully convolutional variational autoencoder employing two encoders: a content encoder and a speaker encoder. To foster disentanglement we propose adversarial contrastive predictive coding. This new disentanglement method needs neither parallel data nor any supervision, not even speaker labels. With successful disentanglement the model is able to perform voice conversion by recombining content and speaker attributes. Due to the speaker encoder, which learns to extract speaker traits from an audio signal, the proposed model not only provides meaningful speaker embeddings but is also able to perform zero-shot voice conversion, i.e. with previously unseen source and target speakers. Compared to state-of-the-art disentanglement approaches we show competitive disentanglement and voice conversion performance for speakers seen during training and superior performance for unseen speakers.
Introduction
Disentangling factors of variation in data recently attracted increased interest for many modalities. Learning disentangled representations with no or only little supervision is a promising approach to make use of the vast amounts of unannotated data available in the world. Disentangled representations are considered useful in two ways. First, they can improve performance for various downstream tasks, which are learned on a small amount of labeled data. In particular, it can yield improved robustness against train-test mismatches if the factors which are informative about the task can be successfully disentangled from the variations caused by a domain shift. Second, in a disentangled representation certain factors can be modified while keeping the rest fixed, e.g., changing the lighting in an image without changing the content. For the purpose of learning disentangled representations from unannotated data it is required that the disentangling approach requires no or only little supervision and scales to large databases.
In this paper we tackle disentanglement of speech signals such that we separate speaker attributes and content attributes into two disjoint representations. Successful disentanglement not only provides a speaker independent representation of the linguistic content of a sentence but also allows to perform voice conversion by exchanging the speaker attributes.
Here two encoders are employed to extract a speaker embedding and a sequence of content embeddings, respectively, which are jointly decoded to reconstruct the input signal. To encourage the content embeddings to be speaker invariant we propose an adversarial regularization based on contrastive predictive coding (CPC) [1], which is completely unsupervised. The basic idea is that speaker- and content-induced variations in the signal can be disentangled according to the mutual information between a current and a future observation, which is mainly the speaker information. Hence, our proposed model can be learned from raw non-parallel speech data requiring neither content labels nor speaker labels. We further suggest using vocal tract length perturbation (VTLP) [2] to support disentanglement and show its efficiency for the proposed adversarial training. Due to the speaker encoder the model learns to extract speaker representations from audio rather than relying on one-hot speaker representations as used in most other works. Therefore, our model is also able to perform zero-shot many-to-many voice conversion, i.e. for unknown source and target speakers.
Related Work
There are many works focusing on unsupervised disentanglement of all latent factors of the generative model [3,4,5]. Those works are mainly applied to toy-like image data sets, e.g., 2D shapes [6], where the generating factors are well defined. Other works tackle disentanglement of a single supervised factor using an adversarial classifier in the latent space [7,8].
While the above works targeted other modalities, there are several recent works tackling disentangled speech representation learning from non-parallel data. Many works, e.g. [9,10,11,12], focus on extracting a speaker independent content representation, while representing the speaker identity as a one-hot encoding. Others also use speaker specific decoders [13,14]. Therefore, these works can neither be used to extract speaker embeddings nor to perform voice conversion to an unknown target speaker. Also speaker supervision is required.
Unsupervised approaches to speaker-content disentanglement are proposed in [15,16,17]. None of these works use an explicit disentanglement objective as proposed in this paper. The authors of [16] propose to encourage disentanglement by using instance normalization in the content encoder, which normalizes, to some extent, static signal properties such as speaker attributes. The AutoVC model [17] relies on a carefully tuned bottleneck such that ideally all content information can be stored in the content embedding but none of the speaker-related information. In [18] the AutoVC model was extended to an unsupervised disentanglement of timbre, pitch, rhythm and content. The factorized hierarchical variational autoencoder (FH-VAE) proposed in [15] unsupervisedly disentangles "sequence-level" (>200 ms) and "segment-level" (<200 ms) attributes, by restricting sequence-level embeddings to be rather static within an utterance and using a rather small bottleneck as well as a Kullback-Leibler (KL) regularization on the segment-level embeddings.
Further, there are works on non-parallel voice-conversion based on CycleGANs [19,20] and StarGANs [21,22] that directly learn a mapping function from source to target speech without relying on disentanglement.
Factorized Variational Autoencoder
To learn disentangled representations of speaker and content we propose a fully convolutional variational autoencoder (VAE) which employs two encoders: a content encoder to encode content information from an input X1 into a sequence of content embeddings Z=[z1, . . . , zT], and a speaker encoder to extract speaker traits from an input X2 into a speaker embedding s. During training X2 is required to be from the same speaker as X1. Note that it could also be the same signal: X2=X1. Then Z and s are expected to jointly allow reconstruction of the input signal X1, which is trained by minimizing the mean squared error (MSE), LMSE = ||X1 − X̂1||² (1), where X̂1 denotes the reconstruction. If content and speaker can be successfully disentangled, voice conversion can be performed at test time by presenting a signal X2 from the target speaker. The proposed VAE structure is illustrated in Fig. 1.
As input signal representation X we extract F=80 log-mel-band energy features for each frame of a short-time Fourier transform (STFT) using an audio sample rate of 16 kHz, a frame length of 30 ms and a hop size of 10 ms. Each log-mel-band is normalized by subtracting the global mean and dividing by the global standard deviation, which are determined on the training set. Encoders and decoder are one-dimensional convolutional neural networks (CNNs) as shown in Fig. 2. The speaker encoder uses global average pooling over time at the CNN output to obtain a single speaker embedding s. The speaker embedding s is repeated along the time axis and concatenated with the content embeddings [z1, . . . , zT]; the concatenated sequence is then forwarded through the decoder network. Do note that the embedding rate does not necessarily have to match the frame rate. We set the kernel size and stride of the encoders' output layer to Ko=So=Sds, with downsampling being performed when Sds>1. The input layer of the decoder maps the embeddings back to frame rate (Ki=Si=Sds). If not stated otherwise, however, we do not perform downsampling (Sds=1). Naturally, the proposed model would tend to access all the required signal information through Z while ignoring s, because X1, the input to the content encoder, is the signal to be reconstructed. Even if X2=X1, it is still easier to encode all required information into Z, as there is usually much more capacity in a sequence of embeddings Z than in a single embedding s. Therefore, the challenge is to prevent the model from also encoding speaker properties of the signal into Z but make the model access them through s.
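The forward pass described above can be summarized in a short PyTorch-style sketch. This is a minimal illustration, not the authors' exact implementation: the single-layer stand-ins for the encoders and decoder, channel widths, and kernel sizes are assumptions; only the broadcast-and-concatenate decoding, the pooling, and the embedding sizes are taken from the text.

```python
# Minimal sketch of the two-encoder VAE forward pass (architecture details assumed).
import torch
import torch.nn as nn

class FactorizedVAE(nn.Module):
    def __init__(self, n_mels=80, d_z=32, d_s=128):
        super().__init__()
        # single conv layers stand in for the real multi-layer 1-D CNN stacks
        self.content_enc = nn.Conv1d(n_mels, 2 * d_z, kernel_size=5, padding=2)  # -> (mu, log_var)
        self.speaker_enc = nn.Conv1d(n_mels, d_s, kernel_size=5, padding=2)
        self.decoder = nn.Conv1d(d_z + d_s, n_mels, kernel_size=5, padding=2)

    def forward(self, x1, x2):
        mu, log_var = self.content_enc(x1).chunk(2, dim=1)     # content posterior params
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization trick
        s = self.speaker_enc(x2).mean(dim=-1, keepdim=True)    # global average pooling over time
        s = s.expand(-1, -1, z.size(-1))                       # broadcast s along time
        x_hat = self.decoder(torch.cat([s, z], dim=1))         # joint decoding
        return x_hat, mu, log_var

model = FactorizedVAE()
x1 = torch.randn(8, 80, 200)              # batch of log-mel segments (B, F, T)
x_hat, mu, log_var = model(x1, x1)        # here X2 = X1
rec = ((x1 - x_hat) ** 2).mean()          # MSE reconstruction loss, Eq. (1)
kl = -0.5 * (1 + log_var - mu ** 2 - log_var.exp()).mean()
loss = rec + 1e-3 * kl                    # beta = 10^-3 as in the text
```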
The usage of VAEs has been shown to improve disentanglement [9]. Here zt is interpreted as a stochastic variable with prior p(zt)=N(zt; 0, I) and an approximate posterior q(zt)=N(zt; µt, diag(σt²)), with the content encoder providing µt and log σt². The content embeddings that are forwarded into the decoder are sampled as zt∼q(zt) using the reparameterization trick [23] during training, while being set to zt=µt in test mode. The KL regularization that is added to the VAE objective prefers the posterior q(zt) to be uninformative, which helps encoding information into s rather than Z. However, it also harms reconstruction, which is why we only choose a small weight β=10⁻³ here. While in the subsequent sections adversarial regularizations are presented to enforce disentanglement, two simple measures to encourage the model to access speaker information via the speaker encoder are the following. First, during training we distort the speaker properties in the input of the content encoder using VTLP [2], yielding the distorted signal X′1. VTLP was originally proposed to increase speaker variability when training speech recognition systems. For this purpose, the center bins of the mel-filter-banks are randomly remapped using a piece-wise linear warping function (linear with slope α below a boundary frequency, and mapped linearly onto the remaining frequency range above it), with warping factor α∼LogUniform(0.8, 1.25) and boundary frequency fhi∼Uniform(0.6, 0.8).
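The warp itself can be sketched as follows. The boundary handling follows the common VTLP formulation from the speech-recognition literature; the exact function used by the authors is not given in the extracted text, so treat this as an assumed variant, with illustrative variable names.

```python
# Sketch of a piece-wise linear VTLP frequency warp (assumed standard formulation):
# frequencies below the boundary are scaled by alpha; frequencies above are mapped
# linearly so that the Nyquist frequency stays fixed.
import numpy as np

def vtlp_warp(f, alpha, f_hi_frac, f_nyq):
    f_hi = f_hi_frac * f_nyq * min(alpha, 1.0) / alpha
    scale = (f_nyq - alpha * f_hi) / (f_nyq - f_hi)
    return np.where(f <= f_hi, alpha * f, f_nyq - scale * (f_nyq - f))

rng = np.random.default_rng(0)
alpha = np.exp(rng.uniform(np.log(0.8), np.log(1.25)))  # alpha ~ LogUniform(0.8, 1.25)
f_hi_frac = rng.uniform(0.6, 0.8)                       # boundary frequency fraction
mel_centers = np.linspace(0.0, 8000.0, 80)              # center bins for 16 kHz audio
warped = vtlp_warp(mel_centers, alpha, f_hi_frac, f_nyq=8000.0)
```

The two branches meet at f_hi (both evaluate to alpha * f_hi), so the warp is continuous and leaves the Nyquist frequency unchanged.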
Second, we perform instance normalization [24] of the content encoder input, i.e. each log-mel-band is locally (for each input signal separately) normalized to zero mean and unit variance, yielding the content encoder input X″1. The signal that has to be reconstructed, however, is the undistorted and globally-only normalized signal X1. We perform instance normalization also in the hidden layers of the content encoder instead of the batch normalization used in the speaker encoder and the decoder. Instance normalization has been frequently used for speech recognition [25], suggesting that it retains the content information while normalizing static properties of the signal. It has also been found useful to encourage speaker-content disentanglement [16].
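As a small illustration, the two normalizations differ only in where the statistics come from; a numpy sketch with illustrative shapes and names:

```python
# Global normalization (training-set statistics) vs. instance normalization
# (per-utterance statistics) of log-mel features of shape (F, T).
import numpy as np

def global_norm(x, train_mean, train_std):
    # one mean/std per mel band, computed once over the whole training set
    return (x - train_mean[:, None]) / train_std[:, None]

def instance_norm(x, eps=1e-5):
    # mean/std per mel band, computed on this utterance alone
    return (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + eps)
```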
Adversarial Speaker Classifier
To enforce disentanglement the authors of [12] suggested employing a jointly trained adversarial speaker classifier on the content embeddings. The speaker classifier is trained to classify the speaker identity from a segment of content embedding means Mt=[µt−l, . . . , µt+r], where l and r denote the left and right context of the classifier. The training objective is the cross entropy loss LCE = −y · log(ŷt), with y denoting the one-hot encoded speaker identity and ŷt=fclf(Mt) the classifier's prediction. By adding the negative cross entropy to the VAE objective, L = LVAE − λ·LCE (2), the content encoder is trained to not allow such classification, which requires dropping information revealing the speaker identity. This ideally does not harm reconstruction, as speaker information can be encoded in the speaker embedding. The classifier has the same architecture as the encoder: fclf=Enc(Dy, 1, 1) with Dy=#speakers.
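A minimal sketch of this adversarial regularization follows; the classifier architecture, segment sampling, and shapes are simplifying assumptions, and only the sign-flipped cross-entropy term of Eq. (2) is taken from the text.

```python
# Sketch of the adversarial speaker-classifier regularization of Eq. (2).
import torch
import torch.nn as nn
import torch.nn.functional as F

n_speakers, d_z, l, r = 251, 32, 8, 8
clf = nn.Sequential(nn.Flatten(), nn.Linear(d_z * (l + r + 1), n_speakers))  # toy f_clf

def classifier_loss(mu, speaker_ids):
    t = torch.randint(l, mu.size(-1) - r, (1,)).item()  # random segment position
    segment = mu[:, :, t - l : t + r + 1]               # M_t = [mu_{t-l}, ..., mu_{t+r}]
    return F.cross_entropy(clf(segment), speaker_ids)

mu = torch.randn(8, d_z, 200, requires_grad=True)       # content embedding means
y = torch.randint(0, n_speakers, (8,))                  # speaker labels (supervision required)

loss_clf = classifier_loss(mu.detach(), y)              # classifier step: train clf only
loss_adv = -1.0 * classifier_loss(mu, y)                # joint step: -lambda * L_CE added to the VAE loss
```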
Adversarial Contrastive Predictive Coding
The adversarial speaker classifier has some severe disadvantages. First, although it does not require text annotations, it still requires speaker annotations. Second, it does not scale to large unbalanced databases with a huge number of speakers, as the classification task itself becomes very uncertain such that no useful adversarial gradients can be obtained.
Therefore, in this work we propose adversarial CPC as an alternative which is fully unsupervised and independent of the (unobserved) number of speakers. Hence, this approach has the potential to be scaled to large unlabeled databases.
CPC [1] aims at extracting the mutual information from segments Mt and Mt+n which have a certain temporal distance of n steps. For this purpose the segments are encoded into the embeddings ht=fcpc(Mt) and ht+n=fcpc(Mt+n) such that ht allows prediction of the future embedding ht+n: ĥt+n = gn(ht), with gn(·) denoting the projection head that predicts n steps ahead. The CPC model is trained using a contrastive loss [1], LCPC = −log [ exp(ĥt+n · ht+n) / Σ_{h∈Bt} exp(ĥt+n · h) ] (3), with Bt denoting the set of candidate embeddings, which contains the true future embedding ht+n as well as negative examples from other segments in the mini-batch. Note that Eq. (3) equals a cross entropy loss including a softmax, where the logits are given as the inner product of the predicted embedding ĥt+n and the candidate embeddings h ∈ Bt. Hence, for a given segment Mt the model is essentially trained to correctly classify the true future segment out of a couple of candidates. The number of steps n that the model predicts into the future controls the kind of mutual information that is encoded. If the segments are very close to each other, the model probably learns to recognize content attributes, e.g., whether the segments are parts of the same acoustic unit. If the segments are further apart, however, the mutual information the model has to recognize is primarily the static properties such as speaker attributes. For our purpose we therefore choose n=100, which corresponds to a segment distance of 1 s. To prevent the model from learning some kind of language model, the projection head gn(·) is chosen to be the identity: ĥt+n=ht. Hence, the CPC encoder fcpc(·) is trained to extract similar embeddings for segments from the same utterance and orthogonal embeddings for segments from different utterances.
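In code, Eq. (3) reduces to a softmax cross-entropy over inner products. A minimal sketch (batch construction and encoder internals are assumptions; with the identity projection head, the prediction is simply h_t):

```python
# Sketch of the contrastive (InfoNCE-style) CPC loss of Eq. (3).
import torch
import torch.nn.functional as F

def cpc_loss(h_pred, h_cand):
    # logits[i, j] = <h_pred_i, h_cand_j>; the true future of example i is h_cand[i]
    logits = h_pred @ h_cand.t()            # (B, B)
    targets = torch.arange(h_pred.size(0))  # positives on the diagonal
    return F.cross_entropy(logits, targets)

B, D = 48, 256
h_t = torch.randn(B, D)     # f_cpc(M_t); with g_n = identity this is also the prediction
h_tn = torch.randn(B, D)    # f_cpc(M_{t+n}), segments n = 100 frames (1 s) later
loss = cpc_loss(h_t, h_tn)  # the other rows of the batch act as negatives
```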
By adding the negative CPC loss to the VAE objective, L = LVAE − λ·LCPC (4), the content encoder is trained to remove mutual information between segments which are 1 s apart (or further), which prevents the content encoder from encoding speaker attributes and other static properties.
The CPC encoder has the same architecture as the VAE encoder: fcpc=Enc(Dh, 1, 1) with Dh=256.
Experiments
Experiments are performed on the Librispeech corpus [26]. Here, the train-clean-100 subset is considered for training the VAE models. This set contains ∼100 h of clean speech from 251 speakers. This subset is randomly split into 80 % for training, 10 % for validation and 10 % for testing with each set containing utterances from all 251 speakers. This subset is termed clean-seen-speakers in the following.
For evaluation purposes a second dataset is composed of speakers, which have not been seen by the VAE models during training. This subset is therefore called clean-unseen-speakers here and consists of the utterances from 251 randomly sampled speakers from the train-clean-360 subset. This subset is again split into 80 % and 10 % for training and validation (of classifiers used for evaluation) and 10 % for testing.
VAE models are trained on the training set for 10⁵ update steps using mini-batches of 48 segments with a segment length between 2 s and 3 s. When X2≠X1 during training, segments between 4 s and 6 s are split into two, which ensures having two segments from the same speaker without requiring supervision. Adam [27] is used for optimization with a learning rate of 5·10⁻⁴, and gradient clipping is applied using thresholds of 10, 20 and 2 for encoder, decoder and adversarial networks, respectively. For all models, content and speaker embedding sizes of Dz=32 and Ds=128 are used. After training, the checkpoint which achieves the lowest reconstruction error on the validation portion is used to report results on the test portion.
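The per-module clipping can be implemented by clipping each parameter group separately before the optimizer step; a sketch with the thresholds quoted above (the placeholder modules are illustrative stand-ins, not the real networks):

```python
# Sketch of the optimization setup: Adam with lr 5e-4 and separate
# gradient-clipping thresholds for encoder, decoder, and adversarial network.
import torch
import torch.nn as nn

encoder, decoder, adv = nn.Linear(8, 8), nn.Linear(8, 8), nn.Linear(8, 8)  # placeholders
opt = torch.optim.Adam(
    [*encoder.parameters(), *decoder.parameters(), *adv.parameters()], lr=5e-4
)

loss = (adv(decoder(encoder(torch.randn(4, 8)))) ** 2).mean()  # dummy loss
loss.backward()
# per-module clipping with the thresholds from the text (10 / 20 / 2)
torch.nn.utils.clip_grad_norm_(encoder.parameters(), max_norm=10.0)
torch.nn.utils.clip_grad_norm_(decoder.parameters(), max_norm=20.0)
torch.nn.utils.clip_grad_norm_(adv.parameters(), max_norm=2.0)
opt.step()
opt.zero_grad()
```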
Three different state-of-the-art disentanglement approaches are investigated and compared: 1) Information bottleneck [17]: By reducing the temporal resolution of the content embeddings, there is ideally just enough capacity to encode content information, while speaker information has to be encoded in the speaker embedding. Here a downsampling factor of Sds=32 is used. This is roughly the same bottleneck as suggested in [17]. We also tested wider and narrower bottlenecks by tuning Sds and Dz but found Sds=Dz=32 to have the best balance between disentanglement and reconstruction performance. The model is trained using the objective (1) without further regularizations.
2) Adversarial Speaker Classifier as described in Sec. 4. The model is trained using the training objective (2) with λ=1, which was found to give a good balance between disentanglement and reconstruction performance.
3) Our proposed adversarial CPC as described in Sec. 5. The model is trained using the training objective (4) with λ=2.
To obtain better adversarial gradients, the adversarial networks of the two latter approaches are updated three times exclusively before each joint update of all parameters.
Performance is measured in two ways. First, voice conversion performance is evaluated, which indirectly measures the achieved disentanglement. For that purpose a speaker classifier f(X)spk=Enc(251, 5, 1) and a phone classifier f(X)phn=Enc(40, 5, 1), which make predictions at frame rate, are trained on clean log-mel-spectrograms of the training set. We report the recognition accuracies of the classifiers on converted test-spectrograms and compare them to the accuracies on clean test-spectrograms. Similar evaluations have been made in [28,29]. The achieved source- (lower is better) and target-speaker (higher is better) accuracies measure the quality of the speaker exchange, while the source-phone accuracy (higher is better) measures the reconstruction of the source content. Converted test-spectrograms are generated from the list of clean test-spectrograms by combining it with a randomly shuffled version of itself to obtain tuples (X1, X2), which are then forwarded through the VAE. Readers are encouraged to listen to the prepared voice conversion examples at go.upb.de/acpcvc.
Second, post-hoc [12,16,17] speaker and phone classifiers f(Z)spk=Dec(251, Sds, Sds) and f(Z)phn=Dec(40, Sds, Sds) are trained on the clean-seen-speakers subset to classify speakers and phones from the content embeddings of a VAE model, which can be viewed as a more direct measure of disentanglement performance than the ones above. Here the classifiers have a similar architecture as the decoder to map embeddings to predictions at frame rate for a fair comparison. The phone accuracy that can be achieved on the test set (higher is better) indicates how much content information is encoded, while the speaker accuracy (lower is better) measures the amount of encoded speaker information. Two setups for content embedding extraction are considered here. In the first setup, referred to as one-pass, the content embeddings are directly extracted from the clean input features. In the second setup, referred to as two-pass, we first convert the signals to a common speaker before re-extracting the content embeddings from the converted signals. As common speaker we choose the speaker embedding from the validation utterances which is closest to the mean of the validation speaker embeddings.
All classifiers are trained on the training portion of a subset for 10⁵ update steps using mini-batches of 64 segments with lengths between 1 s and 3 s. Adam is used for optimization with a learning rate of 5·10⁻⁴ and gradient clipping at a threshold of 20. The checkpoint which achieves the highest accuracy on the validation portion is used to report results on the test portion.
For each of the investigated methods, experiments were made on whether to use X2=X1 or X2≠X1, which cannot be presented in detail due to space constraints. When using an adversarial classifier with X2=X1, it was found that the model started to shift content information to the speaker embedding, resulting in bad content reconstruction when performing voice conversion. When using an information bottleneck, it was found that X2=X1 clearly outperformed X2≠X1. Note that we also made experiments combining the information bottleneck with one-hot speaker representations as in [17], but found the suggested speaker encoder with X2=X1 to perform better.

Table 1 shows voice conversion performance for seen speakers as well as unseen speakers. "Clean" presents the accuracies achieved on the clean unconverted test-spectrograms. Note that all other models use instance normalization as explained in Sec. 3, and the column "Method" refers to an additional disentanglement approach. It can be seen that all methods are able to shift the speaker identity from the source speaker towards the target speaker while mostly preserving the content. When not applying VTLP on the content encoder input, the adversarial approaches only slightly outperform the information bottleneck on seen speakers. However, they benefit from VTLP a lot, while it does not bring any gain to the information bottleneck. Thus, with VTLP the adversarial approaches outperform the bottleneck approach by >2% in target-speaker accuracy and >4% in phone accuracy on seen speakers. Comparing the adversarial approaches to each other, it can be seen that adversarial CPC reconstructs the content slightly better, while the adversarial speaker classifier performs slightly better in exchanging the speaker traits. When considering unseen speakers, it can be seen that all models show a performance deterioration in target-speaker accuracy, while phone accuracies roughly stay the same. Especially models trained with X2≠X1 have a large performance drop. Here, adversarial CPC with X2=X1 significantly outperforms the other approaches.

Table 2 presents post-hoc classification performance on the content embeddings using the clean-seen-speakers subset. It can be seen that if only instance normalization is performed, only little speaker information is removed from the content embeddings for both one-pass and two-pass extraction. For the other methods, speaker information is removed drastically, especially with two-pass extraction. While the adversarial speaker classifier removes the most speaker information (which it was trained for), the adversarial CPC model retains the most content information with decently low speaker information.
Conclusions
The proposed adversarial CPC achieves disentanglement of speaker- and content-induced variations and allows zero-shot many-to-many voice conversion. Unlike an adversarial-classifier-based approach, its training is fully unsupervised and does not even require knowledge of speaker labels. Yet it achieves comparable, if not better, disentanglement and voice conversion performance.
Acknowledgements
This work has been supported by Deutsche Forschungsgemeinschaft under contract no. HA 3455/15-1 within the Research Unit FOR 2457 (Acoustic Sensor Networks).
Enhanced antibiotic activity of ampicillin conjugated to gold nanoparticles on PEGylated rosette nanotubes
Purpose: This work presents the preparation of a nanocomposite of ampicillin-conjugated gold nanoparticles (AuNPs) and self-assembled rosette nanotubes (RNTs), and evaluates its antibacterial properties against two strains of drug-resistant bacteria (Staphylococcus aureus [S. aureus] and methicillin-resistant S. aureus [MRSA]).
Materials and methods: Small, nearly monodisperse AuNPs (1.43±0.5 nm in diameter) nucleated on the surface of polyethylene glycol-functionalized RNTs in a one-pot reaction. Upon conjugation with ampicillin, their diameter increased to 1.86±0.32 nm. The antibacterial activity of the nanocomposite against S. aureus and MRSA was tested using different concentrations of ampicillin. The cytocompatibility of the nanocomposite was also tested against human dermal fibroblasts.
Results: Based on bacterial inhibition studies, the nanocomposite demonstrated enhanced antibiotic activity against both bacterial strains. The minimum inhibitory concentration (MIC) of the nanocomposite against S. aureus was found to be 0.58 μg/mL, which was 18% lower than that of ampicillin alone. The nanocomposite also exhibited a 20 hrs MIC of 4 μg/mL against MRSA, approximately 10–20 times lower than previously reported values for ampicillin alone. In addition, at concentrations of 4 μg/mL of ampicillin (70 μg/mL of AuNPs), the nanocomposite showed negligible cytotoxic effects.
Conclusion: Our findings offer a new approach for the treatment of drug-resistant bacteria by potentiating the inhibitory effects of existing antibiotics and delivering them using a non-toxic formulation.
Introduction
Resistance to β-lactam antibiotics appeared alongside the first use of penicillin in the 1940s. Bacterial strains, such as Staphylococcus aureus (S. aureus), rapidly evolved to produce enzymes that deactivate the antibiotic. Shortly after, the penicillin derivative methicillin was introduced due to its proven resistance to bacterial enzymes. However, resistance to methicillin developed with the growth of a bacterial isolate of S. aureus called methicillin-resistant S. aureus (MRSA). 1,2 Regardless of the influence of risk factors and country variability, the increasing prevalence of S. aureus and MRSA infections necessitates new antibiotics and/or therapeutic formulations. For instance, the β-lactam antibiotic ampicillin, an extended-spectrum penicillin, was developed to extend the antibacterial activity of penicillins. 3 While ampicillin was once used to treat a number of bacterial infections with particularly high efficiency, its widespread use has also triggered the growth of drug-resistant bacterial isolates. 4,5 The recurring growth of bacterial isolates has consequently prompted research into new antibiotic treatments. With the emergence of nanoscience and nanotechnology in the field of drug delivery, several reports have found that the bactericidal efficacy of ampicillin can be improved when conjugated to gold nanoparticles (AuNPs). 5-7 AuNPs are an ideal drug delivery vehicle due to their low toxicity, biocompatibility, high chemical stability, and ease of synthesis. 8,9 Furthermore, antibiotics can be conjugated to AuNPs in one simple step. 10 Surface functionalization of AuNPs with antibiotics occurs because the drugs can coordinate to the surface of the AuNPs via their carboxylic acid, hydroxyl, thiol, or amine functional groups. 11 The resulting functionalization imparts antibacterial properties to the AuNPs that consequently result in bacterial degradation and death. 12 In an effort to improve the efficacy and biocompatibility of antibiotic-conjugated AuNPs, our group has been concerned with the use of ultra-small AuNPs (ie, high surface area) grown on the surface of a new class of self-assembled nanotubes, rosette nanotubes (RNTs). RNTs are obtained through the self-assembly of a synthetic DNA base analog, the G∧C motif. 13-16 Intermolecular hydrogen bonding allows six G∧C bases to form a supermacrocycle (rosette) maintained by 18 hydrogen bonds, which then self-organize to produce a tubular stack with tunable dimensions and surface chemistry (Figure 1). 17 We have previously shown the nucleation, growth, and morphogenesis of nearly monodisperse AuNPs (1.4±0.2 nm) 17 on self-assembled RNTs using a one-pot reaction process. We have also developed a new synthetic strategy to prepare PEGylated RNTs (Figure 1) using G∧C motifs covalently grafted with polyethylene glycol (PEG). PEGylated biomaterials have been shown in different studies to possess advantages for drug delivery, including reduced immunogenicity and improved pharmacokinetics, biodistribution, and biocompatibility. 18-20 In addition, the RNTs were previously shown to be non-toxic 21-23 and biocompatible with bone, 24-32 cartilage, 33,34 heart, 35 skin, 36 endothelial, 37 and nerve cell functions. 38 Based on these studies, here we present the preparation of a new nanocomposite material of ampicillin-conjugated AuNPs nucleated on PEGylated RNTs (Amp-AuNPs-PEG-RNT) and an evaluation of its antibacterial properties against S. aureus and MRSA.
Finally, we tested the cytocompatibility of this nanomaterial against human dermal fibroblasts (HDF) in vitro.
Synthesis of AuNPs on PEGylated RNTs
The formation of AuNPs on PEGylated RNTs was studied in a systematic fashion by altering the ratios of reactants used (Table 1). The PEGylated RNTs used in the synthesis method were prepared using a strategy previously developed by the authors. 13-17 A solution of PEGylated RNTs was treated with potassium tetrachloroaurate (KAuCl4, Sigma) solution. The resulting solution was aged in the dark at room temperature for 24 hrs, then treated with hydrazine hydrate (Acros Organics). The final solution was stored in the dark at room temperature for an additional 4 days.
Synthesis of ampicillin-conjugated AuNPs on PEGylated RNTs
Amp-AuNPs-PEG-RNT nanocomposite was prepared using a one-pot reaction. Ampicillin solution (Fisher, 4 μL, 1% w/w) was added to a solution of prepared AuNPs-PEG-RNTs (1 mL, Au-PEG-RNT-2, 2 days aging). The mixture was then aged in the dark at room temperature for 4 days. All the concentrations noted refer to the total ampicillin concentration in solution.
Electron microscopy
Transmission electron microscopy (TEM) and scanning electron microscopy (SEM) were used to image the formation of AuNPs on the RNTs. Samples were prepared according to our previously reported protocol. Briefly, carbon-coated 300-mesh copper grids (CF300-Cu from EM Sciences) were floated on droplets of AuNP solutions. After 1 min, the grids were blotted with filter paper and air-dried for 24 hrs. TEM images were recorded on JEOL JEM-1010 instrument at 80 kV and SEM images were recorded on a Hitachi S 4800 at 3 kV with ca. 3 mm working distance.
Bacterial assays
The bacterial growth curve is a standard method for quantifying the number of bacteria in a sample over time. As the optical density (OD) of a bacterial suspension increases with bacterial multiplication, a growth curve is obtained. In this study, OD values were continuously recorded at 600 nm on a SpectraMax Paradigm (Molecular Devices, Sunnyvale, CA) for up to 24 hrs at 37°C.
Bacterial colonies of S. aureus (Staphylococcus aureus subsp. aureus, ATCC® 12600™) and MRSA (Staphylococcus aureus subsp. aureus, ATCC® 43300™) were suspended in tryptic soy broth (TSB, 30 g/L, 5 mL) and the solutions were propagated for 16 hrs in a shaking incubator at 37°C. The solutions were diluted with TSB to a concentration of 10⁹ bacteria/mL, where the bacterial density was determined by measuring OD values at 600 nm. Further dilution with TSB was made until a final concentration of 10⁶ bacteria/mL was achieved. The resulting bacterial solution was then transferred into a 96-well plate at a volume of 100 μL per well. The bacteria were then treated with Amp-AuNPs-PEG-RNT (10 μL, various concentrations). The final concentrations of ampicillin were 1.2, 0.86, 0.58, 0.29, 0.23, and 0.12 μg/mL. Negative control wells received 10 μL of TSB only. For the treatment of MRSA, the final concentrations of ampicillin were 4, 3.2, 2, 1, and 0.8 μg/mL.
MTS assay
Antibacterial drugs and 20,000 cells/cm² were seeded in 96-well plates with a final volume of 100 μL/well. An additional set of wells (medium only) was added for background subtraction. The mixtures were incubated for 24 hrs, after which the solutions were gently removed from the wells. To each well, 20 μL of MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium; Promega) solution and 100 μL of DMEM were added, and the resulting solutions were incubated for 3 hrs under standard cell culture conditions (37°C, humidified, 5% CO₂/95% air). The OD values at 490 nm against background were then recorded. Minimum inhibitory concentration (MIC) values were determined using Prism 7 (GraphPad Software) to fit concentration data based on a modified Gompertz function. 39 All cell and bacterial experiments were run in triplicate and repeated at least three times. One-tailed Student's t-tests were used to estimate significant differences, with a p-value ≤0.05.
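The study used Prism's built-in fit; for illustration, the same idea can be sketched in a few lines of Python, assuming the modified Gompertz parameterization commonly used for MIC estimation (Lambert and Pearson). The example data, initial guesses, and the MIC read-off convention (lower-asymptote intersection at M + 1/B) are assumptions, not values from the study.

```python
# Minimal sketch (not the authors' Prism workflow): fit growth-vs-concentration
# data with a modified Gompertz curve and read off the MIC.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(log_c, A, C, B, M):
    # A: lower asymptote; A + C: upper asymptote;
    # B: slope parameter; M: log10 concentration at the inflection point
    return A + C * np.exp(-np.exp(B * (log_c - M)))

# hypothetical example data: ampicillin concentration (ug/mL) vs. fractional growth
conc = np.array([0.12, 0.23, 0.29, 0.58, 0.86, 1.2])
growth = np.array([0.95, 0.90, 0.70, 0.10, 0.02, 0.01])

popt, _ = curve_fit(gompertz, np.log10(conc), growth, p0=[0.0, 1.0, 5.0, -0.4])
A, C, B, M = popt
mic = 10 ** (M + 1.0 / B)  # concentration where the curve meets the lower asymptote
print(f"estimated MIC ~ {mic:.2f} ug/mL")
```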
Amp-AuNPs-PEG-RNT nanocomposites
TEM imaging was used to assess the nucleation and size of AuNPs on PEGylated RNTs (Figure 2). The presence of the RNTs is an indication that they did not disassemble under the conditions tested. It is apparent from the images that at high KAuCl4/PEG-RNT ratios, small and nearly monodisperse AuNPs were obtained. Statistical analysis of the particle diameter revealed an average diameter of 1.43±0.5 nm. However, at low concentrations of KAuCl4 relative to PEG-RNTs (eg, 1:1 and 2:1), larger polydisperse AuNPs were observed (Figure S1).
Typically, conjugation of antibiotics with AuNPs requires chemical functionalization of the AuNPs. 40 However, these subsequent steps could affect the efficacy of the antibiotics. 41,42 In this study, we used a one-pot synthesis method to conjugate ampicillin to AuNPs on PEG-RNTs without further modification of the surface of the AuNPs or the antibiotic.
An earlier report by Brown et al showed that ampicillin can be conjugated to AuNPs via the thioether moiety on ampicillin. 5 We anticipate a similar type of coordination in this study as depicted in Figure 3. As a result, the TEM images of Amp-AuNPs-PEG-RNTs show that the average diameter of the AuNPs increased to 1.86±0.32 nm suggesting that ampicillin is effectively coordinated to the surface of the AuNPs.
Bacterial inhibition
The antibacterial activity of ampicillin conjugated to AuNPs on PEG-RNTs was compared to that of ampicillin alone against S. aureus (Figures S2 and S3). The results showed that the growth of bacteria treated with Amp-AuNPs-PEG-RNT at concentrations higher than 0.58 μg/mL was inhibited for 24 hrs (MIC=0.58 μg/mL, Figure 4). The free ampicillin solutions were less effective at inhibiting bacterial growth than Amp-AuNPs-PEG-RNT (MIC=0.71 μg/mL) (Figure S2). The antibacterial activity of Amp-AuNPs-PEG-RNT was also evaluated against MRSA. Unlike S. aureus, MRSA required a higher concentration of Amp-AuNPs-PEG-RNT (2–4 μg/mL) to exert an antibacterial effect (Figure 5). This higher concentration is likely due to the acquired gene in MRSA (mecA) that encodes the penicillin-binding protein PBP2a. 43,44 In contrast, the MRSA grew rapidly when treated with ampicillin alone, suggesting that Amp-AuNPs-PEG-RNT is effective at delivering ampicillin.
Based on the results from Figure 5, the MIC value of Amp-AuNPs-PEG-RNT against MRSA was estimated to be 4 μg/mL. Earlier reports have shown that ampicillin alone has MIC values ranging from 32 to 50 µg/mL against MRSA. 45-47 Therefore, the MIC of the nanocomposite against MRSA in this study is at least ten times lower than that of ampicillin alone.
To test the cytocompatibility of Amp-AuNPs-PEG-RNT, an MTS assay was carried out using HDF cells. Cells treated with 0–4 μg/mL of ampicillin (70 μg/mL of AuNPs) exhibited viability similar to control groups after 24 hrs of exposure (Figure 6).
Discussion
We have shown that the nanocomposite, Amp-AuNPs-PEG-RNT, has greater antibacterial activity (18% lower MIC) than the antibiotic ampicillin alone against S. aureus. While there are many MRSA strains, Staphylococcus aureus subsp. aureus, ATCC® 43300™ exhibits methicillin resistance and expresses mecA to produce PBP2a. 48 For this MRSA strain, the MIC of the ampicillin-loaded nanocomposite was 10–20 times lower than that of ampicillin alone, suggesting significantly enhanced antibacterial properties. This could result from three possible mechanisms operating at the same time, with possible synergistic effects among the three: 1) electrostatic interaction between the cationic nanocomposite and the negatively charged bacterial surface, 2) disruption of the bacterial membrane and metabolism by the AuNPs, and 3) the intrinsic mode of action of ampicillin.

The cationic nature of the nanocomposite 13-17 suggests a possible mechanism that could help deliver the antibiotic to the target bacteria. Previous authors have reported influences of cell-surface charge on the action of different cationic antimicrobial agents. 45,49,50 These studies have detected electrostatic attractions between the antimicrobial agents and negatively charged bacterial surfaces, which consequently induce a binding effect. Therefore, the electrostatic attraction seems to play an important role in binding the nanocomposite to the bacterial surface, which thereby increases the likelihood of delivery of the ampicillin.
In addition to the overall surface charge of the nanocomposite, the AuNPs of the nanocomposite could disrupt the bacterial membrane, thereby contributing to bacterial death. For example, Cui et al have shown that AuNPs exert their therapeutic effects by interfering with bacterial metabolism, notably by collapsing the membrane potential, limiting ATP synthase activity and inhibiting tRNA binding to the ribosomes. 51 These effects induced by the AuNPs offer a possible secondary mechanism for the enhanced antibiotic activity of the nanocomposite against the two strains of bacteria. Finally, a third possible mechanism for the observed enhanced antibacterial activity is the mode of action of ampicillin. It is well understood that β-lactam antibiotics, such as ampicillin, target penicillin-binding proteins required for the synthesis of bacterial peptidoglycan (the sugar and amino acid polymer that makes up the bacterial cell wall), thereby resulting in bacterial lysis. 52 Therefore, this mechanism may have been retained in the inhibition of the bacteria by the nanocomposite, alongside the two mechanisms discussed above.
In another experiment, we tested the in vitro cytocompatibility of the nanocomposite against HDF cells using different concentrations of ampicillin. Under 4 μg/mL of ampicillin (equivalent to 70 μg/mL of AuNPs) and 24 hrs of exposure, HDF cells exhibited viability similar to control groups. Earlier reports suggest that charges on the surfaces of AuNPs are the primary factor contributing to their cytotoxicity at concentrations as low as 10 μg/mL. Charged AuNPs decrease the membrane potential and intracellular Ca²⁺ levels, resulting in up- or downregulated gene expression, 53 whereas neutral AuNPs do not. 54,55 Our results show that the cytotoxicity of our nanocomposite formulation containing 70 μg/mL of AuNPs was negligible, confirming that the AuNPs are bound to the RNTs and may be released in a controlled manner.
Conclusion
We have presented the formation of a new nanocomposite composed of ampicillin-conjugated AuNPs on the surface of PEGylated RNTs to inhibit the bacterial growth of S. aureus and MRSA. The nanocomposite displayed significantly higher antibacterial activity against S. aureus than ampicillin alone. At a higher concentration of AuNPs, we have also shown that the nanocomposite inhibits MRSA growth. Taken together, these results establish the enhanced potential of antibiotics such as ampicillin when combined with AuNPs-PEG-RNTs and lay a foundation for future research on this biomaterial against antibiotic-resistant bacteria.
Excellence in medical research – can we make it in India?
Introduction
The health-care system across the world has witnessed a phenomenal improvement, so that life expectancy in almost every country has increased significantly. Besides improvements in public hygiene, the newer noninvasive methods of diagnosis, newer drugs and unprecedented technological advances in treatment and patient care have all contributed to the longer life span. This puts further demands on applied research for developing new drugs, tests, imaging techniques, surgical modalities, etc., especially because the increasing population burden and longer lifespan have generated novel health issues that were not so critical even a few decades ago.
The recent unprecedented progress in our understanding of Nature and biological systems, and the amazing technologies now available to the common man, give an impression that we have solved most mysteries of Nature's laws and principles that govern us. Armed with this belief, most of the economically advanced countries have placed priorities on "applied research", especially in the biomedical field, in order to ameliorate the increasing load of old-age and lifestyle diseases. With detectable improvements in the overall performance of scientific research in India, it is often asked whether India should not also place greater priority on applied research in medical institutions.
Basic research as tool for transforming medical practice
Basic research in the biomedical field is usually understood as a tool to help unravel disease mechanisms and identify drug targets through genetic and/or biochemical analyses. Such studies are generally carried out by MSc-PhDs. The ability to read the human genome fuelled ideas that we understand most human disorders and, therefore, can develop and apply personalized medicine. However, deeper probing compels us to ask whether we have really learnt enough about Nature's laws and life processes. Serious reflective thinking makes us realize that a very long path still lies ahead before we reach even near that goal. Consequently, concerns are already being expressed in the US and other developed nations about the wisdom of relegating basic research to a non-essential, and therefore avoidable, entity. India has so far followed a balanced view and not succumbed to the oft-repeated question as to why we should spend limited resources on basic research. While technological advances appear stupendous and attractive, one must not forget that their roots are deeply embedded in knowledge gained through basic research carried out by passionate people whose only objective was to unravel mysteries of nature. Only when the "mystery" becomes "knowledge" can we apply and exploit it. Mysteries of Nature continue to exist and baffle us and, therefore, stimulate basic research. Newer basic findings, in conjunction with appropriately developed technology, lead to affordable and integrative healthcare.
Where are the roadblocks?
While basic research efforts have generally been supported in India, we have not had many breakthroughs, either in the biological or in the physical sciences. Unfortunately, as a nation we also do not have many technological advances to our credit. Obviously, there is something wrong in the system, notwithstanding the large human and other resources being used in the process. Paradoxically, Indian scientists outside India have been doing very well and make us proud, but when it comes to 'Make in India', we are not able to feel the same sense of pride, as most of the drugs, diagnostic kits or equipment used in healthcare are made outside India, including in China. Obviously, besides the limited resources, we have more serious systemic issues that underlie the country's generally poor performance.
Overburdened with patient load or human resources or both?
Our medical institutions, medical colleges as well as the mandated research institutions, are expected to be actively involved in research, since all MD, MS, MCh, and DM aspirants are required to carry out some "original" research and submit a thesis to earn the degree. In addition, the various regulations for appointments and
promotions require research publications as essential components. Several institutions have also introduced MD-PhD dual degree programmes besides the PhD programmes. Thus, there is, in principle, a sizeable workforce in place for carrying out research in the medical colleges and institutions. Unfortunately, only a small proportion of this large workforce has the opportunity to work at places with fairly well-equipped infrastructure. Most others work under rather difficult conditions, including very long continuous "duty" hours. They are also constrained by an inflexible time limit for completing the "research" component of the degree. Continuity of research is also not maintained, so that each new student works on a different topic rather than extending the theme where the previous one left off. As a result, the research output remains rather disappointing, the enormous advantages offered by the human resources on one hand and the diversity of the Indian population on the other are almost completely lost, and we continue to rely, for diagnosis as well as prognosis, on data generated in other countries with very different genetic and physiological backgrounds.
The formal teaching load of a typical medical college faculty member is usually not as high as that of those teaching in basic science departments in a university or college, although in most of the clinical disciplines teaching continues in OPDs, wards and on the operation table as well, somewhat parallel to the "teaching" that goes on in basic research labs. A common explanation for the rather limited novel research output from medical institutions is that medical college faculty members carry a patient load amidst meager infrastructure, which leaves them with little time and energy to think about any serious research. This may possibly be true to some extent for faculty in clinical disciplines at a medical college attached to a big hospital. However, the medical faculty in better-endowed medical institutions may not be engaged with OPDs, surgeries or wards on every working day and, therefore, the average weekly workload may not be exceptionally or unduly high. This may be due to the large number of physicians in such institutes. Compared to the many private/corporate hospitals, faculty positions at publicly funded medical colleges generally fare poorly in terms of service conditions, salary/promotions and facilities. The existence of significant disparity amongst different state and central institutions, poor infrastructure for research in medical colleges, and the inevitable bureaucracy associated with administrative issues of running hospitals all add to the medical teaching institutions becoming less favoured places of work. This adversely affects the academic output of the institution.
Medical colleges generally seem to have a strong hierarchical and authoritative setup. This thwarts the enthusiasm of young and capable faculty who wish to go beyond the routine health-care. A healthy academic and productive environment demands equal participation, incentives and opportunities for research.
Collaborative involvement of basic scientists in research, administration and policies relating to medical research
Medical institutions also have "non-clinical" or "para-clinical" departments/units whose faculty are not directly involved in clinical practice or patient care. Unfortunately, even their research output is generally not impressive. At the same time, the administrative dichotomy created by the differential privileges and responsibilities of the "clinical" and "non-clinical" faculty members remains a major, often unnecessary and avoidable, cause of heartburn and conflict that affects basic as well as applied biomedical research in medical institutions.

Notwithstanding our ad libitum appreciation of practices followed in western countries, we have kept medical education and research separate from the basic sciences as well as technology. On the other hand, almost all the leading biology departments in US universities are part of medical schools. Although models for integrative learning and teaching have been frequently discussed in the country and many detailed reports prepared, the fact is that we continue to ensure compartmentalization and fragmentation that percolate down to the smallest unit possible. The absence of integrative research with collaborative basic science leadership remains a major impediment to 'Make in India'-based innovation in medical institutions.
In the context of "conflicts" between "clinical" and "non-clinical" or "basic" scientists in our medical institutions, an idea has sometimes been mooted that the country should have "Basic Science Council" along the lines of the existing "Medical Council", "Dental Council", "Pharmacology Council" etc. However, whether establishment of such councils and formulation of rules will solve the conflict or promote any better research environment remains to be seen. An example of well-meaning but poorly formulated and implemented rules that result in more serious ill-effects is the introduction of the "Academic Performance Index" by the University Grants Commission to ostensibly promote academic activities. Paradoxically, these measures have generated more graft than promoting any better academic environment or performance. Thus even well-intentioned rules can become counter-productive when driven in the wrong direction.
It is indeed a sad commentary on the state of affairs that, while we have not been able to make significant inroads in modern medicine, we have also failed to capitalize on our age-old health-care system of Ayurveda, in spite of our sense of pride in the great wisdom of our far-removed ancestors. As discussed elsewhere, including in these pages (Lakhotia, 2013, Ann Neuro), Ayurveda continues to suffer for want of serious, unbiased, inter-disciplinary research, which alone will help us understand its principles and resolve between myths and reality. It is notable that the Chinese have smartly integrated Chinese medicine as part of the formal medical curriculum. Such integration in the Indian context can be promoted by the inclusion of multi-disciplinary basic science experts, together with practicing clinicians, in various committees, governing bodies and other advisory bodies of the Ministry of Health and Family Welfare.
Basic scientists and clinicians as complementary stakeholders in medical education and research
How do we initiate and establish a more stable and interactive dialogue between clinical and basic scientists, and also involve technological experts in translating basic biomedical research into real applications? One of the steps initiated in recent times to bring in some integration is the introduction of M.D.-Ph.D. dual degree programmes. However, it is not clear how these would be qualitatively different from the regular MD or PhD dissertations, since such programmes do not ensure interactive participation of basic and medical scientists, especially when PhD-MD candidates are rather rare (Anand and Rao, Ann Neuro 2014). In any case, what we need are long-term research collaborations on specific themes which, on one hand, generate new basic knowledge/databases and, on the other, promote better health-care or usable indigenous technology.
Creating positions for basic scientists within medical colleges/institutions, who lead well-furnished and independent laboratories, can provide opportunities for MD/MS/DM/MCh as well as PhD students to work under the joint supervision of scientists and medical faculty. Physical placement of such labs within the medical college/hospital is expected to facilitate better interaction, since the clinician can walk in any time for interaction with scientists, who can similarly walk to OPDs or surgery tables. Such basic research scientists can guide and monitor "directed basic research" in identified core areas that impinge on basic health-care in the country. A model of "directed basic research" was initiated some years ago, with success, to revive understanding of the basic science underlying Ayurveda.
Recent years have witnessed an increasing number of better-equipped corporate health-care systems with lucrative pay packages. These are good destinations for the utilization of basic science research skills but have remained untapped. With the increasing involvement of the better-equipped corporate sector in health care, it would be prudent to engage them in a public-private partnership so that they function as technology incubators utilizing research outputs from both public and private medical institutions.
Initiating teaching programmes which involve the co-participation of basic scientists and clinicians is another avenue that fosters sustainable partnerships. An example is the discipline of Human Genetics. An increasing proportion of contemporary health issues centres around genetic factors. Unfortunately, the medical curriculum does not adequately prepare medical doctors to understand the complexities of genetic disorders, their diagnosis and possible treatment. Formal co-training of science students by basic scientists and medical professionals, through didactic lectures, would not only prepare appropriately skilled human resources, for which demand is continuously increasing worldwide, but would also foster a better dialogue between basic scientists and medicos. The Molecular and Human Genetics MSc programme started at the Banaras Hindu University about 15 years ago is an example of such a success story. The next step in this direction should be to prepare courses for genetic counselors. Equally rewarding would be the development of training and research programmes in metabolomics and microbiomes, which have also become hot areas in contemporary health-care.
Appropriate changes in the archaic rules that govern medical education and the profession, together with the active participation of all concerned, would add value and generate the much-needed manpower to collect and understand data on the genetic and physiological makeup of Indian populations. Such data are essential to provide "Make in India" health care in the country.
Conclusion
Medical research is not singularly poor in our country; we have a less than impressive performance in other spheres of research, innovation and technological development as well. The poor performance of medical research, however, has more serious repercussions, since it directly affects the health of the people and, therefore, of the nation. Obviously, we need to ensure quality medical research on a much larger scale. More than rules and regulations, what we really need to achieve these goals includes: i) commitment and passion, rather than compulsion, for research and innovation, combined with the necessary mentoring; ii) a bi-directional, interactive and integrative environment that promotes and sustains collaboration between clinical and basic scientists on the one hand and the technologists on the other, who can convert innovative findings into usable technology for affordable healthcare; iii) good training of medical students in clinical research, especially for those who are inquisitive and research-oriented; and iv) adequate independence in doing research, to take their discoveries to the masses.
There is an element of "conflict of interest" when the medical profession is viewed solely as a means to treat patients and earn a livelihood in return. It is argued that, to be able to get into active clinical practice, which usually implies obtaining a super-specialization degree, a young person has to spend many more years of life, often under rather unpleasant conditions, than is the case in other professional courses. Therefore, they feel entitled to greater monetary rewards than the NPA available in most academic institutions as compensation for forgoing private practice. Such disgruntled persons obviously cannot give their best, just like those basic scientists who seek the introduction of a non-consultancy allowance (NCA). New salary structures for medical faculty, normalized to per-hour risk-free engagement, are often argued to provide remuneration equivalent to private centres. A substantial increase in the NPA or the introduction of an NCA for basic scientists may, therefore, not be the best or a lasting solution. As long as we do not develop a system of identifying the right kind of human resource for a given job, such conflicts of interest and poor outputs will continue. Just as every MSc or PhD degree holder does not by default become a scientist, a basic medical or even a super-specialty degree does not make a medical researcher. While we need a large number of researchers in the bio-medical fields, we need equally large numbers or more of medicos to attend to basic health issues in rural and semi-urban areas. Therefore, what is required is to identify and guide young aspirants onto paths that better suit their temperament and capabilities than stereotypes do. There is no point in trying to fit square pegs into round holes or vice-versa. Facilitating suitable matches and optimally promoting their activities is essential for us to truly achieve excellence in India. | 2016-05-04T20:20:58.661Z | 2015-04-01T00:00:00.000 | {
"year": 2015,
"sha1": "cdd3c90b3c34e56b68f237dd892e969fc682b129",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc4480256?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "cdd3c90b3c34e56b68f237dd892e969fc682b129",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
33332465 | pes2o/s2orc | v3-fos-license | Association of Attention-deficit Hyperkinetic Disorder with Alcohol Use Disorders in Fishermen
ABSTRACT Introduction: Alcohol use is a widely prevalent problem and poses a hazard during work for certain groups such as fishermen. Disorders such as Attention-Deficit/Hyperkinetic Disorder (ADHD) correlate with earlier onset and greater severity of alcohol use disorders. Aims: We planned to study the frequency of ADHD among fishermen in a fishing hamlet of southern India using the adult ADHD self-reported scale (ASRS) and to correlate it with the severity of alcohol use disorder, as evidenced by age at initiation of alcohol use and the presence of harmful or dependent use as defined by the Alcohol Use Disorders Identification Test (AUDIT). Subjects and Methods: This was a community-based interview study using the AUDIT questionnaire for severity of alcohol use and the ASRS to detect ADHD. Results: The prevalence of adult ADHD among fishermen in this study was 25.7% using the critical items of the ASRS. ADHD was about twice as likely in participants with dependence as in those without dependence (odds ratio = 2.10). ADHD was also more likely in participants with onset of use before 30 years of age than in others (25.1% vs. 15.4%) (P = 0.27). Discussion: We found a high frequency of alcohol use among fishermen (79.3%). However, only 9.9% had alcohol dependence, which is still higher than in the general population (2.3%) of the region. Fishermen with alcohol dependence were twice as likely to have ADHD as those without alcohol dependence. Conclusion: In a community-based survey of fishermen, the prevalence of alcohol dependence was about 10%. The presence of alcohol dependence predicted a two times higher likelihood of ADHD among fishermen than in those without alcohol dependence.
Introduction
According to the World Health Organization's Global Information System on Alcohol and Health, [1] an estimated 40.7% of Indian males have consumed some form of alcohol in their lifetime. Alcohol consumption contributes to about one-half of liver cirrhosis and one-fourth of road traffic accidents suffered by Indian men. The prevalence of alcohol use disorders (including harmful use and dependence) among Indian men was about 4.5%. One of the biological risk factors leading to alcohol use is said to be childhood externalizing spectrum disorders such as attention-deficit hyperkinetic disorder (ADHD). The aims of this study were to estimate the prevalence of adult ADHD among fishermen and to study the association between the presence of ADHD and alcohol use disorder severity. The hypothesis was that the presence of ADHD, as detected by the adult ADHD self-reported scale (ASRS), would be associated with greater severity of alcohol use disorder among fishermen. This would be reflected as an earlier age at onset of drinking and a greater probability of harmful use of, or dependence on, alcohol among fishermen with ADHD compared to fishermen without ADHD.
Subjects and Methods
This community-based cross-sectional study was carried out among fishermen in a coastal village of southern India. The total population of the village is around 6000, and most of them were fishermen. Only a small number of villagers were employed in other occupations such as farming and daily labor work. All fishermen above 18 years of age, who were residents of the village and went out to sea to catch fish at least once in the last 3 months, were eligible to participate in the study. Fishermen who had been staying in the study setting for a minimum period of 6 months were considered residents.
The Alcohol Use Disorders Identification Test (AUDIT) questionnaire [5] was used to assess dependence on alcohol among the study participants. It was administered in Tamil by a health-care worker. AUDIT scores of 0-7 (Zone I) indicate no harmful pattern of use. Scores between 8 and 19 (Zones II and III) indicate harmful or hazardous drinking, and scores of 20 or more (Zone IV) indicate probable alcohol dependence in the individual and require specialist referral.
Attention-deficit hyperkinetic disorder (ADHD) among adults was assessed using the adult ASRS, an eighteen-item scale that elicits self-reported symptoms of ADHD in adults based on the Diagnostic and Statistical Manual-IV criteria for ADHD. [6] It takes about 7 minutes to administer. We used a Tamil version of the scale. The responses are Likert-type: "Never," "Rarely," "Sometimes," "Often," and "Very Often," coded from 0 to 4. Six critical questions in the scale capture symptoms most consistent with adult ADHD, while the remaining 12 questions serve as an additional aid in assessing ADHD. A cutoff score of 40 on the full version of the ASRS and of 14 on the six-item version was taken as suggestive of ADHD. The full version was also used in this study because the specific psychometric properties of the scales in the present population are unclear. For the analysis, we scored both the full version and the critical-items version, and used the latter in the further statistical analyses of association, as the full version appeared to overestimate ADHD in our study population.
Apart from the Institute Ethical Committee clearance, we also obtained verbal consent from the village leader before the onset of the study. All the fishermen who were registered with the fishermen guild of the locality were approached. After briefly explaining the purpose of the study and procedure, written informed consent was obtained from the individual participants before administration of the questionnaire. The respondents were neither intoxicated nor experiencing withdrawal symptoms at the time of interview to avoid bias in reporting. All the houses were visited up to 3 times in a period of 6 weeks to look for and interview eligible participants.
Statistical analysis
Data were entered in EpiData 3.1 (EpiData Association, Odense, Denmark), and analysis was done using EpiData Analysis software (Version 2.2.2.178). Continuous variables such as age were expressed as mean and standard deviation. Monthly income was expressed as median and interquartile range, as it followed a non-normal distribution. Alcohol use disorders were defined by AUDIT scores as follows: AUDIT 0-7 = no alcohol use disorder, AUDIT 8-19 = alcohol harmful use, and AUDIT ≥20 = alcohol dependence. The presence or absence of ADHD was coded as a binary variable (yes = 1, no = 0) using a cutoff score ≥40 on the full-scale ASRS and ≥14 on the critical items scale of the ASRS. The association of sociodemographic and occupational characteristics with the presence or absence of ADHD was assessed using the Chi-square test. Similarly, we used the Chi-square test to assess whether the presence of ADHD was associated with alcohol dependence syndrome. A P < 0.05 was considered statistically significant.
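As an illustrative sketch of the coding scheme and test described above (this is our reconstruction, not the authors' EpiData workflow; the cell counts below are back-calculated from the percentages reported in the Results):

```python
# Sketch only: AUDIT/ASRS coding and the chi-square association test.
# Counts are back-calculated from reported percentages, not taken from raw data.
from scipy.stats import chi2_contingency

def audit_zone(score: int) -> str:
    """AUDIT coding used in this study."""
    if score <= 7:
        return "no alcohol use disorder"  # Zone I (0-7)
    if score <= 19:
        return "harmful use"              # Zones II-III (8-19)
    return "dependence"                   # Zone IV (>=20)

def adhd_positive(asrs_score: int, critical_items: bool = True) -> int:
    """Binary ADHD coding: >=14 on the six critical items, >=40 on the full ASRS."""
    return int(asrs_score >= (14 if critical_items else 40))

# 3x2 table of alcohol-use category x ADHD (yes/no)
table = [[23, 66],   # no alcohol use disorder: 25.8% of 89 screened ADHD-positive
         [43, 142],  # harmful use: 23.2% of 185
         [12, 18]]   # dependence: 40% of 30
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")  # ~3.8, 2, 0.15 (cf. reported P = 0.14)
```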
Results
A total of 304 fishermen lived in the study area; 62.5% (190/304) were between 20 and 49 years of age. Most of them (290/304; 95.4%) were married. A majority of the fishermen (249/304; 81.9%) rented a boat, and the rest owned their boats. The detailed sociodemographic profile of the study population is depicted in Table 1.
As regards alcohol use, 241/304 (79.3%) of respondents had used alcohol at least once in the last year. A total of 89/304 (29.3%) had no alcohol use disorder; this includes nonusers of alcohol. 185/304 (60.9%) had harmful use of alcohol and 30/304 (9.9%) had dependent use of alcohol.
Using the full-scale ASRS scores, 150/304 (49.3%) had ADHD and 78/304 (25.7%) had ADHD when determined by the critical items of the scale alone.
AUDIT was applied to the 241 participants who had used alcohol. They were classified based on AUDIT scores as described in the Statistical analysis section. Only 30 (12.5%) of the 241 alcohol users were dependent on alcohol; however, a majority of them, 185 out of 241 (76.7%), had harmful use of alcohol. The remaining 26 out of 241 (10.8%) did not have alcohol use disorders. The 63 participants who had not used alcohol in the last year were also classified as having no alcohol use disorder.
A total of 150 (49.3%) participants screened positive for ADHD using the full scale (≥40), but only 78 (25.7%) scored positive for ADHD using the critical items scale (≥14) of the ASRS. Table 2 depicts the sociodemographic profile of fishermen in relation to the presence or absence of ADHD. None of the parameters were significantly associated with ADHD.
The association of alcohol use parameters with ADHD (as defined by ASRS critical items scale) is depicted in Table 3. Participants with alcohol dependence were more likely to have ADHD (40%) than participants with harmful use (23.2%) (odds ratio [OR] = 2.20; 95% confidence interval [CI] = 0.98-4.93) or no alcohol use disorder (25.8%) (OR = 1.91; 95% CI = 0.80-4.57). When compared to those without alcohol dependence (no alcohol use disorder and alcohol harmful use), we found that those with alcohol dependence had a greater chance of having ADHD (OR = 2.10; 95% CI = 0.96-4.58). However, these differences were not statistically significant (Chi-square test, P = 0.14).
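The headline odds ratio and its Wald 95% confidence interval can be reproduced from the 2×2 table implied by the reported percentages (a back-of-the-envelope check of ours, not part of the original analysis):

```python
# Dependence vs. no dependence, ADHD yes/no; counts back-calculated from
# 12/30 = 40% and 66/274 = 24.1% (78 ADHD-positive overall minus 12).
import math

a, b = 12, 18    # alcohol dependence: ADHD yes / no
c, d = 66, 208   # no dependence:      ADHD yes / no

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)            # Wald standard error
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")  # OR = 2.10, CI ~ 0.96-4.59
```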
The frequency of ADHD was 25.1% among those with alcohol initiation before the age of 30 years. On the other hand, only 15.4% of those initiating alcohol after the age of 30 years had ADHD. Again, this was not statistically significant (Chi-square test, P = 0.27).
Discussion
A study among fishermen in Scotland found similarly high rates of alcohol use (80.6%), compared to 79.3% in our sample. This is very high compared to the national average of 24.8% of Indian men as per the GISAH India report. [1] However, 9.9% of the fishermen in our study had alcohol dependence. This indicates a greater prevalence of alcohol use (79.3% vs. 39.8%), of alcohol use disorder defined as AUDIT ≥8 (60.9% vs. 10.9%), and a higher proportion of probable dependent users defined as AUDIT ≥20 (9.9% vs. 2.3%) among fishermen compared to the general population of Puducherry. [7] Using the critical items of the ASRS (ASRS ≥ 14), we found that 25.7% of fishermen screened positive for ADHD. The rate of 25.7% on the critical items scale compares well with the 23.6% prevalence among college students of Chandigarh reported by Jhambh et al. [8] and the 21.8% among Kenyan university students reported by Atwoli et al. [9] However, the studies done in Chandigarh and Kenya used the full ASRS scale (49.3% prevalence in our study). The study among college students of Chandigarh showed that only 5.48% could be truly diagnosed with ADHD based on symptom presence from childhood. However, we have not confirmed the diagnosis in our study using either a DSM-IV-based structured interview or enquiry about childhood onset of symptoms. The use of the full-scale ASRS may have overestimated the ADHD prevalence in our study, but the critical items scale of the ASRS has given estimates similar to those found in other adult populations of students. One reason for such an overestimation may be the broad or nonspecific nature of symptoms in the full scale as compared to the more focused or "critical" items in the short form of the scale. The critical items of the scale may be more specific to ADHD.
ADHD has been shown to be a risk factor for the development of alcohol use disorders among adolescents. In our study, we found that alcohol-dependent participants were about twice as likely to have ADHD as compared to patients without alcohol use disorders or those with harmful use. The results were not statistically significant and the 95% CI of the OR included unity.
To summarize the key findings, the prevalence of alcohol dependence and harmful use of alcohol appears to be higher among fishermen than in the general population. Although our study did not find a statistically significant association between ADHD and the presence and degree of alcohol use disorders, we found that participants with alcohol dependence were twice as likely to have ADHD as those with harmful use or without alcohol use disorders. The results approached significance.
The study findings are important and indicate the need for improving the availability of treatment of alcohol use disorders at the primary care level, especially among occupational groups who face high levels of stress and are likely to have higher morbidity.
However, future studies assessing risk factors for alcohol use disorders in larger samples should continue to examine the contribution of ADHD to addiction.
A strength of this study is that, being community-based, it is free from the self-selection and hospital-selection biases that occur in the recruitment of patients in hospital-based studies.
The study has the following limitations: (1) ADHD was assessed cross-sectionally in an adult population using a single measure and false positivity is a concern, especially in view of lack of a confirmatory test. (2) Self-reported questionnaires were used and a reporting bias cannot be ruled out. (3) We do not have information on other risk factors such as family history of alcohol use disorders or ADHD. We have not assessed comorbid substance use, medical diseases, or psychiatric illnesses in this study.
Conclusion
In an occupational group of fishermen, the prevalence of alcohol dependence is higher than that in the general population. Fishermen with alcohol dependence are twice as likely to have ADHD as those without alcohol dependence.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2018-04-01T05:41:13.634Z | 2017-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "64c8aab7fb07a6fdb1c57421cb6d7d87e78ebe69",
"oa_license": "CCBYNCSA",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.4103/jnrp.jnrp_48_17.pdf",
"oa_status": "GOLD",
"pdf_src": "Thieme",
"pdf_hash": "4652161499f88f6f5517d38eacc15f91825ed22c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265085039 | pes2o/s2orc | v3-fos-license | Species Composition of Rodents at Padang Chong Forest Reserve, Perak
There are at least four families of rodents identified in Peninsular Malaysia, namely Muridae, Sciuridae, Hystricidae, and Rhizomyidae. Although rodents are widespread throughout Peninsular Malaysia, information on rodents at Padang Chong Forest Reserve (PCFR) is scarce. Therefore, the main objective of this study is to identify and document the species composition of rodents at PCFR. Samplings were carried out at two plots along a gradient from the border of PCFR, namely Plot 1 (500 m) and Plot 2 (1 km). Each plot measures 1 ha and contains 10 transect lines (A-J), each 100 m in length. These plots were sampled five times from June to November 2022. Based on this study, a total of 65 individuals from nine species of rodents were documented. These nine species belong to two families, namely Muridae (5 spp.) and Sciuridae (4 spp.). Of these, Leopoldamys sabanus was the most frequently captured species (25 individuals), followed by Maxomys whiteheadi (19 individuals) and Callosciurus notatus (7 individuals). There is no significant difference in species composition between the plots, Plot 1 (n=33) and Plot 2 (n=32), which is further supported by the t-test result (t = 0.928; p > 0.05). However, a single representative of Sundamys muelleri, a species that prefers riverine areas, was captured in Plot 1 (around 10 m from a stream). Callosciurus notatus was captured mostly in Plot 2 (6 individuals) compared to Plot 1 (1 individual); certain areas near Plot 2 were opened for agricultural purposes, which explains the presence of this species at Plot 2. From this information, it is hoped that further actions can be taken to conserve the area and ensure that the small mammal communities are preserved.
Introduction
Padang Chong Forest Reserve (PCFR) is part of the Central Forest Spine Ecological Linkage located within the Bintang Hijau Forest Complex. This forest complex encompasses diverse habitats for various species of mammals, both large and small [1]. Malaysia has recorded a total of 307 mammal species, with more than 30 endemic species [2]. In Perak state alone, previous studies were conducted to understand the diversity, composition, behaviour and bait preference of terrestrial small mammals, comprising volant small mammals (Families: Rhinolophidae, Hipposideridae, Vespertilionidae and Pteropodidae) and non-volant small mammals (Families: Muridae and Sciuridae) [3,4,5,6,7,8]. Among small mammal groups, rodents are the most geographically widespread and possess a high level of adaptation, enabling them to survive in various environments, whether natural or modified [9]. Despite the wide distribution of rodents throughout Peninsular Malaysia, there is no information on rodents at PCFR. Therefore, this study aims to generate and record preliminary information on the rodent species that reside within PCFR using live trapping. Since this is a pioneer small mammal study in PCFR, it is anticipated that the findings will help develop future conservation plans and effective management strategies for the area.
Study area
Padang Chong Forest Reserve (PCFR) is a tropical lowland forest located approximately 20 km from the nearest town in Pengkalan Hulu, Perak (Figure 1). A pristine river approximately 5-8 m wide runs through the forest reserve compartments. It is lined with big rocks and boulders alongside fallen trees of various sizes. In total, two 1-ha plots were established, identified as Plot 1 (P1) and Plot 2 (P2). These plots were set at 500 m (P1: N 05°41'03.4", E 101°01'11.0") and 1000 m (P2: N 05°41'19.1", E 101°00'58.0") from the edge of the forest reserve. Both P1 and P2 have dense shrub vegetation and closed canopy cover with small trees and patches of bamboo. There is a small stream near P1 and an old logging road near P2.
Trapping design
Rodents at PCFR were surveyed from June to November 2022, with a total of five sampling sessions per plot. Within each plot, 10 transect lines, labelled alphabetically from Line A to Line J, were established, with each line measuring 100 m. A total of 100 collapsible cage traps were deployed, 10 m apart. The traps were baited with oil palm seeds and re-baited when necessary. The traps remained open for five consecutive nights in each sampling session and were checked twice daily, at 0800 and 1600 hours. Each trapped rodent was carefully removed from the collapsible cage trap and temporarily placed inside a cloth bag. The captured rodents were measured and identified based on key features, referring to the Field Guide to the Mammals of South-east Asia [10]. The trapping effort was calculated by multiplying the number of sampling days by the total number of traps deployed across the five sampling sessions; therefore, the total trapping effort was the same for both plots (3000 trap-nights per plot) [11].
Data Analysis
The Paleontological Statistics (PAST) software was used to generate values for the Shannon-Wiener index (H'), evenness (SI) and dominance (D) to assess the diversity, composition and abundance of the species within PCFR [12]. To compare the diversity between the research plots, a non-parametric t-test was also computed using the PAST software. A rarefaction curve was generated using the 'iNEXT' package to evaluate the effort-based species richness of both plots, which was then compared with the value of the Chao-1 estimator [13,14].
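A minimal sketch of these indices (our illustration, not the PAST workflow; the abundance vector is hypothetical, chosen to be consistent with the Plot 1 totals reported below, S = 8 and n = 33):

```python
import numpy as np

def diversity_indices(counts):
    counts = np.asarray([c for c in counts if c > 0], dtype=float)
    p = counts / counts.sum()
    H = -np.sum(p * np.log(p))     # Shannon-Wiener index H'
    D = np.sum(p ** 2)             # Simpson dominance D
    S = len(counts)                # observed species richness
    evenness = np.exp(H) / S       # evenness as reported by PAST (e^H / S)
    F1 = np.sum(counts == 1)       # singletons
    F2 = np.sum(counts == 2)       # doubletons
    chao1 = S + F1 * (F1 - 1) / (2 * (F2 + 1))  # bias-corrected Chao-1 lower bound
    return H, evenness, D, S, chao1

# Hypothetical per-species captures for one plot (sums to 33, 8 species)
print(diversity_indices([14, 9, 3, 2, 2, 1, 1, 1]))
```

With this vector the sketch returns H' ≈ 1.594, evenness ≈ 0.615 and Chao-1 = 9, in line with the Plot 1 values quoted in the Results.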
Results and Discussion
A total of 65 individuals of rodents, representing nine species from two families, were identified in this study (Table 1). Muridae was relatively more abundant at both sampling sites. Within Muridae, Leopoldamys sabanus was the most abundant species (38.5%, n=25), followed by Maxomys whiteheadi (29.2%, n=19) and M. rajah (7.7%, n=5). L. sabanus was recorded at both sampling sites. This generalist species is nocturnal by nature and has the ability to climb up to 3 m above ground, which makes it known as scansorial [10,15]. It has also been recorded as abundant in earlier surveys at UGFR and UJFR [8,16]. Both UGFR and UJFR are secondary forests that have been logged over and comprise dipterocarp forest [16,17]. Considering that PCFR itself is a secondary forest, the availability and abundance of L. sabanus in both plots is unsurprising. Their presence suggests that the habitat structure of PCFR provides vegetation suitable for foraging and locomotion. Nonetheless, it is a conventional understanding that this species is widely distributed and well adapted to diverse forest structures, whether logged or unlogged [18,19]. The total number of individuals per species captured varied between the plots. P1 recorded a higher number of species (S = 8) compared to P2 (S = 5). Although the H' value indicated that P1 (H' = 1.594, n = 33) is richer than P2 (H' = 1.413, n = 32), the species distribution of P2 is more even (SI = 0.8214) compared to P1 (SI = 0.6153). This deviation reflects the number of individuals that each species contributes per plot: while P1 recorded a higher species number, there is a huge gap between the most captured species (L. sabanus, n = 14) and the least captured species (Maxomys rajah, Sundamys muelleri and Callosciurus notatus, n = 1 each). Nevertheless, the diversity t-test indicates no significant difference in species diversity composition between these plots (t = 0.928; p > 0.05).
Most of the species were recorded in P1, except for one species, namely Sundasciurus lowii. Based on our observations, there is no distinct variation in vegetation type between the two plots; however, P2 was seen to have a taller canopy level compared to P1. Besides, from the 3rd sampling session onwards, an area close to P2 was opened up for other land-use activities. This might have influenced the occurrence of another species, Callosciurus notatus. Although this species was recorded at both sampling sites, it was captured more often in P2 (n = 6) than in P1 (n = 1). C. notatus and S. lowii are widely distributed in forests, influenced by the availability of food resources [20]. A study showed that both C. notatus and S. lowii primarily feed on fruits and bark [21]. The clearing of nearby sites, which provides bark from fallen trees, suggests that the abundance of these two species at P2 might be prompted by an easily accessible food resource.
Alternatively, a single representative of Sundamys muelleri from the family Muridae was recorded in P1. This species prefers riverine areas, where it can forage on plant and animal matter [10]. P1 is located near a small stream that narrows down towards the research plot. In a distribution comparison study of rodents by Paramasvaran et al. (2013) conducted across four different habitats, S. muelleri was most predominant in forested areas compared to rice field, coastal or urban habitats [22]. The placement of the traps also plays a major role in capturing this species: Paramasvaran et al. (2013) mentioned that their traps were deployed along the river of UGFR, which is a foraging ground for S. muelleri in search of snails and land crabs [22,23]. As P1 is closer to the river area than P2, this explains the single representative of S. muelleri at P1. Through the Chao-1 species richness estimation (Table 2), the species collected in P2 are equivalent to the total species estimated for it (Chao-1 = 5, S = 5), whereas P1 recorded almost 89.0% of its estimated species diversity (Chao-1 = 9, S = 8). The species rarefaction curve (Figure 2) provides additional support for this finding: the curves show that P2 has reached a plateau and P1 is also on the edge of reaching the asymptote. Therefore, the diversity estimates for the plots are approximately accurate. Since the Chao-1 estimator is among the least biased non-parametric approaches for predicting the lower bound of species richness, it is almost certain that the trapping effort at each research site was adequate [24,25].
Although the statistical results indicate a sufficient sampling effort, the species diversity reported in this study should not be considered absolute. The research outcomes can be further enhanced by expanding the study area to cover all habitat types of PCFR, so as to represent the species diversity as a whole. In addition, this study managed to record two species listed as Vulnerable (VU), namely Maxomys rajah and M. whiteheadi (Table 1). The major threat affecting these species' populations is habitat loss caused by anthropogenic activities, including land conversion for agricultural and industrial development. Thus, this preliminary research on rodents at PCFR will serve as a guideline for further studies and future mitigation measures to maintain the existing ecosystem.
Conclusion
This study highlights that the composition and distribution of rodent species in PCFR are affected by various factors, including the distance of the study sites from the forest edge, canopy level, vegetation type and food resources. Long-term monitoring and extensive research on the ecological aspects of the rodent species will further clarify the factors influencing their distribution and habitat preference. The presence of the vulnerable species Maxomys rajah and M. whiteheadi indicates that PCFR serves as an important habitat for diverse and threatened species. Therefore, the conservation and management of this site are crucial to preserve these under-explored and sensitive sites.
Fig. 2. The species richness rarefaction curve based on the estimated number of individuals caught for rodents in Padang Chong Forest Reserve, Perak.
Table 2. Species abundance, richness and diversity values estimated for rodents in Padang Chong Forest Reserve, Perak. | 2023-11-10T16:03:59.870Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "2706751d5f123828e5d3761324958478b32a66ec",
"oa_license": "CCBY",
"oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2023/18/bioconf_ctress2023_01005.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "45a4f704cbf85ac5d3aa18699165c44b1204e572",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
220496147 | pes2o/s2orc | v3-fos-license | Accelerated FBP for computed tomography image reconstruction
Filtered back projection (FBP) is a commonly used technique in tomographic image reconstruction demonstrating acceptable quality. The classical direct implementations of this algorithm require the execution of $\Theta(N^3)$ operations, where $N$ is the linear size of the 2D slice. Recent approaches, including reconstruction via the Fourier slice theorem, require $\Theta(N^2\log N)$ multiplication operations. In this paper, we propose a novel approach that reduces the computational complexity of the algorithm to $\Theta(N^2\log N)$ addition operations, avoiding Fourier space. To speed up the convolution, the ramp filter is approximated by a pair of causal and anticausal recursive filters, also known as Infinite Impulse Response filters. The back projection is performed with the fast discrete Hough transform. Experimental results on simulated data demonstrate the efficiency of the proposed approach.
INTRODUCTION
X-ray computed tomography (CT) is a highly-regarded technique for medical diagnostics [1,2], industrial quality control [3,4], material science research [5,6], etc. The rapid increase of CT scanner resolution necessitates processing a huge amount of data, so the performance of the classical reconstruction method does not meet current industry demands [7]. Filtered Back Projection (FBP) is a commonly utilized analytic image reconstruction algorithm with computational complexity $\Theta(N^3)$, where $N$ is the linear size of the 2D slice. The most popular approach to making the FBP algorithm faster is based on the projection-slice theorem (or central slice theorem) [8]. Several algorithms use the fast Fourier transform on an inhomogeneous grid, followed by the transition from polar coordinates to Cartesian coordinates in Fourier space using interpolation. The quality of reconstruction in this case strongly depends on the choice of interpolation method. A detailed analysis of the arising artifacts that distort the output image is given in [9]. In [10], the invariance properties of the Radon transform and its dual were used to construct a method of inversion based on log-polar representations.
A completely different approach was reported in [11,12]. The main idea is to reconstruct not the whole image at once, but to use the properties of the Radon transform to calculate sinograms corresponding to the four quadrants of the image and reconstruct them individually. In this case, according to the Nyquist-Shannon theorem, only half of the projections are required to reconstruct a quadrant of the image without losing quality. The splitting of the image into quadrants can be continued recursively until the size of the independently reconstructed sections reaches 1 pixel. In this case, the algorithm requires $\Theta(N^2 \log N)$ multiplications. The shortcoming of this method is a large number of intermediate interpolations, which can lead to the accumulation of error [13].
All described methods require $\Theta(N^2 \log N)$ multiplication operations. In this paper, we propose a novel approach, which allows reconstructing the image in $\Theta(N^2 \log N)$ addition operations and $\Theta(N^2)$ multiplication operations.
FILTERED BACK PROJECTION
Let us briefly recall the basis of the FBP method. The Radon transform, defined on the space of straight lines $L$, is the integral transform
$$R[f](r, \theta) = \iint f(x, y)\, \delta(x\cos\theta + y\sin\theta - r)\, dx\, dy, \quad (1)$$
where $f(x, y)$ is some finite continuous function defined on the plane. In computed tomography, a straight line is usually given by the slope of the normal $\theta$ and the distance from the origin $r$ (see Figure 1a). The projection of the function $f(x, y)$ is the set of all points in its Radon transform corresponding to a certain angle $\theta$:
$$p_\theta(r) = R[f](r, \theta). \quad (2)$$
The essence of the FBP method is the sequential application of two operations. At the first step, the convolution of the projections with the ram-lak (or ramp) [14] filtering function is performed:
$$\tilde{p}_\theta(r) = \int p_\theta(r')\, h(r - r')\, dr', \quad (3)$$
where
$$h(r) = \int_{-\infty}^{+\infty} |\omega|\, e^{2\pi i \omega r}\, d\omega. \quad (4)$$
Then, conventional back projection is applied:
$$\tilde{f}(x, y) = \int_0^\pi \tilde{p}_\theta(r)\, d\theta, \quad (5)$$
where $r = x\cos\theta + y\sin\theta$.
In real measuring systems, it is not possible to obtain a continuous set of projections; therefore, all integral operators are replaced by appropriate summation, and the continuous convolution is converted into one-dimensional linear filtering. In computed tomography, the number of projection angles $P$ is chosen to be of the same magnitude as the image size $N$. In this case, the convolution can be performed with $\Theta(N^3)$ operations in the image space or in $\Theta(N^2 \log N)$ operations using the fast Fourier transform (FFT). The back projection requires $\Theta(N^3)$ operations.
FAST RECURSIVE FILTERING
Consider an ideal ramp filter with the impulse response given by expression (4). Generally, it has a singularity at the point $r = 0$ [15]. However, in real cases, the spectrum of measured projections is band-limited with bandwidth $2W$ ($|\omega| < W$). For a discrete signal, $W = 0.5/\Delta$, where $\Delta$ is the sampling rate. Without loss of generality, we can set $\Delta = 1$ by choosing an appropriate coordinate system. In this case, the impulse response of the discrete filter (4) takes the form
$$h(n) = \frac{1}{2}\,\mathrm{sinc}(\pi n) - \frac{1}{4}\,\mathrm{sinc}^2\!\left(\frac{\pi n}{2}\right), \quad (6)$$
where $\mathrm{sinc}(r) = \sin(r)/r$. Simplifying the latter expression yields
$$h(n) = \begin{cases} 1/4, & n = 0,\\ 0, & n \neq 0 \text{ even},\\ -1/(\pi^2 n^2), & n \text{ odd}. \end{cases} \quad (7)$$
The discrete convolution of the projection $p_\theta$ with kernel (7) can be recast as a finite impulse response filter (or FIR filter):
$$\tilde{p}_\theta(n) = \sum_{m=-L_0}^{L_0} h(m)\, p_\theta(n - m), \quad (8)$$
where $L = 2L_0 + 1$ is the length of the filter kernel. Even though the function $h(n)$ decays quadratically with $n$, a decrease in the length of the filter kernel leads to a significant distortion of the reconstructed image. To achieve the minimum error, the kernel length should be of the same order of magnitude as $N$. Thus, although formally calculating the convolution (8) requires $\Theta(N^2)$ operations, in real cases the factor in front of $N^2$ is very large. The issue of reducing the computational complexity of the convolution is essential for many signal processing problems.
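A short sketch of the kernel (7) and the direct FIR convolution (8) (our illustration, assuming $\Delta = 1$ and $W = 1/2$ as above; the names are not from the paper):

```python
import numpy as np

def ramlak_kernel(L0: int) -> np.ndarray:
    """h(n) for n = -L0..L0: 1/4 at n = 0, 0 for even n, -1/(pi^2 n^2) for odd n."""
    n = np.arange(-L0, L0 + 1)
    h = np.zeros(n.shape, dtype=float)
    h[n == 0] = 0.25
    odd = (n % 2) != 0
    h[odd] = -1.0 / (np.pi ** 2 * n[odd].astype(float) ** 2)
    return h

def fir_filter_projection(p_theta: np.ndarray, L0: int) -> np.ndarray:
    """Direct FIR convolution (8); cost grows with the kernel length L = 2*L0 + 1."""
    return np.convolve(p_theta, ramlak_kernel(L0), mode="same")
```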
In paper [16], the authors presented a group of computationally efficient methods for approximating a Gaussian filter and its first and second derivatives using filters with infinite impulse response (IIR filters). The difference equation describing a discrete IIR filter has the form
$$y(n) = \sum_{k=0}^{M} a_k\, x(n-k) - \sum_{k=1}^{Q} b_k\, y(n-k), \quad (9)$$
where $M$ and $Q$ are the feedforward and feedback filter orders, and $a_k$ and $b_k$ are the coefficients characterizing the filter. The advantage of the IIR filter is that for $M \ll N$ and $Q \ll N$, only $\Theta(N^2)$ operations are required to process an image of size $N \times N$, which is significantly less than for an FIR filter. One can note that the impulse response of the filter (9) is unidirectional, while the impulse response of the ramp filter is symmetric ($h(n) = h(-n)$). A symmetric recursive filter can be represented as the sum of a causal and an anticausal component [17].
The IIR filter is constructed so that its impulse response approximates the impulse response of the FIR filter $h(n)$. The impulse response (7) can be rewritten as the sum of causal and anticausal components:
$$h(n) = h^+(n) + h^-(n), \qquad h^+(n) = \begin{cases} h(n), & n > 0,\\ h(0)/2, & n = 0,\\ 0, & n < 0, \end{cases} \qquad h^-(n) = h^+(-n).$$
Thus, the resulting function $\tilde{p}_\theta$ can be presented as the sum of the outputs of two recursive filters,
$$\tilde{p}_\theta(n) = \tilde{p}^+_\theta(n) + \tilde{p}^-_\theta(n),$$
where $\tilde{p}^+_\theta$ and $\tilde{p}^-_\theta$ are produced by the causal and anticausal filters with coefficients $(a^+_k, b^+_k)$ and $(a^-_k, b^-_k)$, respectively. Due to the symmetry, the coefficients satisfy $a^+_k = a^-_k$ and $b^+_k = b^-_k$, so it is enough to determine the coefficients only for the causal filter. The coefficients can be found by minimizing the mean-square error between the impulse response of the FIR filter (8) and that of the IIR filter (9). One can use any optimization algorithm, for instance, the Powell conjugate gradient method [18] or the simplex method [19]. It is worth noting that the proposed scheme can be applied not only to the ramp filter, but to any other filter used in the FBP approach (e.g. [20]).
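A minimal sketch of this scheme (ours, not the authors' implementation): a causal IIR filter of the form (9) is fitted to the causal half $h^+(n)$ of kernel (7) by minimizing the mean-square error between impulse responses with the Nelder-Mead simplex method, and the symmetric filter is then applied as a forward pass plus a backward pass. Stability of the fitted filter is not enforced in this toy version.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import lfilter

def iir_impulse_response(num, den, length):
    """Impulse response of y(n) = sum_k a_k x(n-k) - sum_k b_k y(n-k);
    in scipy terms the denominator vector is [1, b_1, ..., b_Q]."""
    x = np.zeros(length)
    x[0] = 1.0
    return lfilter(num, np.concatenate(([1.0], den)), x)

def fit_causal_iir(h_plus, order=3):
    """h_plus: causal target with h_plus[0] = h(0)/2 and h_plus[n] = h(n), n > 0."""
    def mse(params):
        num, den = params[:order + 1], params[order + 1:]
        return np.mean((iir_impulse_response(num, den, len(h_plus)) - h_plus) ** 2)
    x0 = np.zeros(2 * order + 1)
    x0[0] = h_plus[0]
    res = minimize(mse, x0, method="Nelder-Mead", options={"maxiter": 20000})
    return res.x[:order + 1], res.x[order + 1:]

def symmetric_iir_filter(p_theta, num, den):
    """Causal pass plus anticausal pass; since each half carries h(0)/2,
    their sum reproduces the symmetric kernel with no double counting."""
    a = np.concatenate(([1.0], den))
    forward = lfilter(num, a, p_theta)
    backward = lfilter(num, a, p_theta[::-1])[::-1]
    return forward + backward
```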
(s, t)-parametrization
Introduce the $(s, t)$-parameterization of the line so that a point $(x_0, y_0)$ on the original image plane $(x, y)$ defines a line on the parameter plane $(s, t)$. The set of projections in $(s, t)$-space is sometimes called the linogram [21].
Parameters $s$ and $t$ specify the coordinates of two points of the line lying on the vertical (for $L^\pm_h$) or horizontal (for $L^\pm_v$) boundaries (see Figure 1b). Parameter $t$ takes values from $-N$ to $0$ for $L^-_h$ and $L^-_v$, and from $0$ to $N$ for $L^+_h$ and $L^+_v$. Thus, the final linogram contains four $N \times N$ images for all types of lines.
However, a careless transition from $(r, \theta)$ to $(s, t)$ variables can lead to the appearance of an error related to the violation of the rotational invariance of the Radon transform in $(s, t)$-space, obtained for the squared image domain. Let us consider the projection in the linogram for $L^+_h$ with shift $t$. In the sinogram, this projection corresponds to the projection with the angle of inclination $\varphi_t = \theta_t - \pi/2 = \arctan(t/N)$ (see Eq. (14)). The length of the corresponding line is $N/\cos\varphi_t$. One can note that this length is not constant and depends on the angle $\varphi_t$. Since the Radon transform in the squared domain should preserve the Radon invariant (the sum of the values in any row is equal to the total sum in the image), the projection amplitude is underestimated relative to the conventional Radon transform (which is equal to the sinogram obtained by a CT scanner) by a factor of $k_t = 1/\cos\varphi_t$. Thus, each linogram projection is "stretched" relative to the corresponding sinogram projection by the same factor $k_t$. Expressing the scaling coefficient explicitly yields
$$k_t = \frac{1}{\cos\varphi_t} = \sqrt{1 + \frac{t^2}{N^2}}.$$
It is important to keep this parameter in mind when converting a linogram to a sinogram.
Back projection in (s, t)-coordinates
An important feature of the $(s, t)$-parameterization is morphological symmetry: each point in the original image corresponds to a straight line in the linogram, and each point in the linogram corresponds to a straight line in the image. Such symmetry allows us to establish a connection between the forward projection operator (Radon transform) $R[f](s, t)$ and the corresponding back projection operator $B[p](x, y)$. Denote the parts of the linogram corresponding to the introduced classes of lines as $P^+_h(s, t)$, $P^-_h(s, t)$, $P^+_v(s, t)$ and $P^-_v(s, t)$. One can note that the forward projection operators for mostly horizontal lines $R^\pm_h[f]$ can be obtained from the corresponding operators for mostly vertical lines $R^\pm_v[f]$ by preliminarily transposing the image (16). Rewriting the expressions for the forward (1) and back (5) projection operators in $(s, t)$-coordinates gives Eqs. (17) and (18). Comparing the expressions (17) and (18), one can note that the operator $B^\pm_v$, defining a mapping from $(s, t)$-space to $(x, y)$-space, can be expressed via the forward projection operator with the change of sign of one of its parameters (Eq. (19)). Thus, the back projection operation in $(s, t)$-space is equivalent to a forward projection with the sign of one parameter reversed. Similarly, the operator $B^\pm_h$ can be expressed through the corresponding forward operator (Eq. (20)).
Discrete space. Fast Hough transform
In a discrete space, the Radon transform of a function along a given line can be approximated by the sum of the function values at points belonging to a discrete approximation of this line. With an appropriate choice of approximation, the time required for the calculation of the discrete projection operator can be significantly reduced. In [22], M. Brady noted that discrete representations of two lines with close slopes have a significant number of common points. In this case, there is no need to calculate the repeating section twice to find the sum along each of these lines. Brady proposed sequentially calculating partial sums for segments of length $2^i$, $i = 1 \ldots \log_2(N+1)$. Paper [23] presents a recursive implementation of the described algorithm for lines approximated by so-called dyadic patterns. This algorithm is also known as the Fast Hough Transform (FHT). In this implementation, the results are obtained separately for each type of lines ($L^\pm_h$, $L^\pm_v$). The computational complexity of the algorithm is $\Theta(N^2 \log N)$ operations. Moreover, all these operations are summations, not multiplications.
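A compact recursive sketch of this dyadic summation (our illustration, not the code of [23]), for one of the four line families; the other families are obtained by flips and transposes. Cyclic shifts are used for brevity, so in practice the image should be zero-padded to avoid wraparound:

```python
import numpy as np

def fht(img: np.ndarray) -> np.ndarray:
    """img: (h, w) with h a power of two. Returns H with H[t, s] equal to the sum
    along the dyadic pattern of total horizontal shift t starting at column s.
    Each of the log2(h) recursion levels costs h*w additions."""
    h, w = img.shape
    if h == 1:
        return img.copy()
    top = fht(img[: h // 2])
    bot = fht(img[h // 2:])
    out = np.empty((h, w), dtype=img.dtype)
    for t in range(h):
        t_half, odd = divmod(t, 2)
        # a shift-t pattern = two shift-(t//2) patterns, one in each half,
        # with the bottom half displaced by t//2 + (t odd)
        out[t] = top[t_half] + np.roll(bot[t_half], -(t_half + odd))
    return out
```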
An asymptotically fast SART algorithm based on an implementation of the fast Hough transform was proposed in [24].
Brady approach for back projection (Inverse Fast Hough Transform)
According to the expressions (19) and (20), the back projection operator can be presented as a forward projection operator with the change of sign of one of the parameters. Consequently, one can apply the Brady approach to compute each of $B^\pm_h$ and $B^\pm_v$ with the FHT (21). The final reconstruction is the sum of four images:
$$\tilde{f}(x, y) = \tilde{f}^+_h(x, y) + \tilde{f}^-_h(x, y) + \tilde{f}^+_v(x, y) + \tilde{f}^-_v(x, y). \quad (22)$$
RESULTS
Experiments were conducted on the Shepp-Logan phantom. We reconstructed the images with FIR and IIR filters ($Q = M$, $N = 512$, $P = 512$). The IIR filter coefficients were found with the simplex method. We investigated the two parts of the proposed algorithm one by one. The Root-Mean-Square Error (RMSE) dependence on the IIR filter order is presented in Fig. 2a; here the Radon transform is used for back projection. The dependence of RMSE on the image size for Radon and Fast Hough back projection is presented in Fig. 2b; the FIR filter is used in both (Radon and Hough) cases. The image reconstructed using the proposed algorithm ($N = 1024$, $P = 1024$, $Q = M = 3$) is presented in Figure 3. The algorithm requires $\Theta(N^2)$ operations for interpolation, filtering and formation of the output images, and $\Theta(N^2 \log N)$ summations for back projection.
CONCLUSION
In this paper we propose a novel fast algorithm to reconstruct an image from its tomographic projections. Following the FBP strategy, we apply an IIR filter with precalculated coefficients to the input sinogram to speed up the filtering step, and use the Fast Hough Transform to speed up the back projection step. Experimental results on a phantom demonstrate a computational cost gain with acceptable quality. | 2020-07-14T01:01:00.834Z | 2020-07-13T00:00:00.000 | {
"year": 2020,
"sha1": "69cc9395439a1bdd4db7a62e904e9946f6f1d0c4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2007.06289",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "69cc9395439a1bdd4db7a62e904e9946f6f1d0c4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics",
"Engineering"
]
} |
7795524 | pes2o/s2orc | v3-fos-license | Enhanced Gauge Symmetry in M(atrix) Theory
We discuss the origin of enhanced gauge symmetry in ALE (and K3) compactification of M theory, either defined as the strong coupling limit of the type IIa superstring, or as defined by Banks et al. In the D-brane formalism, wrapped membranes are D0-branes with twisted string boundary conditions, and appear on the same footing as the Kaluza-Klein excitations of the gauge bosons. In M(atrix) theory, the construction appears to work for an arbitrary ALE metric.
Introduction
Recently Banks et al. have proposed a definition of eleven-dimensional M theory in the infinite momentum frame (IMF), as a large N limit of a supersymmetric matrix quantum mechanics [1]. To support this, they start with the fact that this quantum mechanics describes the D0-branes which dominate the strong coupling limit of IIa string theory, argue that anti-D0-branes decouple in the IMF, and then adapt results of [2] showing that the theory can reproduce supergravity interactions without need of the original closed strings. From this point of view, modifications to the background can be made by adapting the corresponding modifications to the IIa string, providing definitions of the five-brane [3] and toroidal compactifications [1,4,5]. Not all physics follows from IIa arguments, however, and a quite non-trivial non-IIa result is the appearance of the supermembrane with the correct physics [6,7,1,8].
In this note we study another example of IIa-derived M-theory physics: the enhanced gauge symmetry of compactifications on K3 × R^7 in the orbifold limit [9], which follows from the proposal by Hull and Townsend of strong-weak coupling duality between IIa on K3 and the heterotic string on T^4 [10]. Its M theory origin is clear: membranes wrapped on small supersymmetric two-cycles become particles with conventional (vector) gauge charge in the dimensionally reduced theory, and when such two-cycles degenerate to zero volume, these particles include massless gauge bosons.
Since this phenomenon is local, we can see it by formulating the theory in the neighborhood of the degenerating two-cycles, in other words on M_ζ × R^7, where M_ζ is an ALE space asymptotic to C^2/Γ. Here Γ is a finite subgroup of SU(2), and it has an associated simply laced extended Dynkin diagram G, an affine Lie algebra Ĝ and a finite Lie algebra G [11]. The enhanced gauge symmetry obtained by maximal degeneration to the singularity C^2/Γ is simply G.
An explicit hyperkähler quotient construction of M_ζ with its metric was made by Kronheimer [12], and this construction appears in D-brane physics: the natural construction of D-branes embedded at a point in an orbifold produces a gauge theory whose moduli space is M_ζ [13]. We will use this construction for D0-branes in IIa string theory.
A wrapped D2-brane also has a known D-brane realization in this construction [14,15]: it is a D0-brane with twisted boundary conditions for the open strings, which project out the moduli moving it from the fixed point. An easy computation shows that it is charged under twist-sector RR fields, and since the wrapped membrane is the only charged BPS state in the large volume limit, the two objects must be continuously connected. Its nonzero mass is interpreted as a consequence of an implicit B ≠ 0 in the orbifold construction [16]; we will be able to determine B for any Γ.
This B corresponds to a Wilson line in the additional dimension of M theory, and thus the Kaluza-Klein states of massless gauge bosons and the massive gauge bosons of spontaneously broken gauge symmetry appear on the same footing in this construction.
Taking this construction for D0-branes and adapting it according to the rules of [1] provides a construction of M theory on M_ζ × R^7. There are several differences from the string theory discussion. First, the moduli of the ALE are controlled by Fayet-Iliopoulos terms in the gauge theory. While these were derived in [13] as couplings to closed string twist fields, here they are postulated. This is appropriate, as we are discussing different backgrounds in the infinite momentum frame, which should be realized by changing parameters in the Lagrangian. Second, the expectation value of B disappears in the limit, and the full enhanced gauge symmetry appears. Finally, there appears to be no analog of the upper bound on the blow-up parameter ζ at the string scale which follows from the general results of [2].
Another test can be made by introducing a five-brane wrapped on K3 or M_ζ. This produces the heterotic string which dominates the strong coupling limit, and we must see a level 1 action of Ĝ on the spectrum of this string.
As in [3], we define the five-brane by introducing a vector hypermultiplet into the D0-brane quantum mechanics. This theory is the dimensional reduction of the general theory of [13], corresponding to the hyperkähler quotient construction of instanton moduli space of Kronheimer and Nakajima [17]. Following Harvey and Moore [18], if we assume that the space of bound states is the sheaf cohomology of this moduli space, then results of Nakajima [19] imply the existence of these bound states as well as the Ĝ action.
D0-branes on orbifolds
D-branes on an orbifold are defined as in [13]: we take the U(N) gauge theory of N D-branes at the fixed point in C^2 and quotient by a combined action of Γ on space-time and on the Chan-Paton factors, keeping only the invariant fields. The derivation can be made for 5-branes, and the result is an N = 1, d = 6 gauge theory whose Lagrangian (at leading order in α′, which is the only part used in [1]) is determined by the choice of gauge group and matter representation; this is given in [13] for the A series, and in [20] for the D and E series. The quantum mechanics of D0-branes is its dimensional reduction.
The field content is determined by a choice of Γ representation R. Let the irreducible representations of Γ be R_i with 0 ≤ i ≤ rank G, and let their dimensions be n_i. R_0 is the trivial representation, R_1 the fundamental (the same as the action on C^2), and the R_i are associated with the extended Cartan matrix Ĉ by the McKay correspondence,
$$R_1 \otimes R_i = \bigoplus_j a_{ij} R_j, \qquad \hat{C}_{ij} = 2\delta_{ij} - a_{ij}. \quad (2.1)$$
In terms of the extended Dynkin diagram G, each node is an irrep R_i, and the non-zero off-diagonal terms in (2.1) are links. If R decomposes as
$$R = \bigoplus_i v_i R_i, \quad (2.2)$$
the resulting gauge symmetry is ∏_i U(v_i), and each link of G comes with a hypermultiplet in the (v_i, v̄_j). Besides the overall coupling, the parameters of the Lagrangian are an SU(2)_R triplet of Fayet-Iliopoulos terms for each of the U(1) factors, ζ^n_i with 1 ≤ n ≤ 3, 0 ≤ i ≤ r and Σ_i ζ^n_i = 0. These determine the periods of the metric on the hyperkähler quotient, and in string theory they are determined by expectations of twist fields [13].
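As a concrete illustration of (2.1) and (2.2) (our worked example, written with the two-dimensional defining representation Q of Γ appearing in the McKay relation): for Γ = Z_2 there are two one-dimensional irreps R_0, R_1, and Q decomposes as R_1 ⊕ R_1, so
$$Q \otimes R_0 = 2R_1, \qquad Q \otimes R_1 = 2R_0 \;\Rightarrow\; a = \begin{pmatrix} 0 & 2 \\ 2 & 0 \end{pmatrix}, \qquad \hat{C} = 2\,\mathbb{1} - a = \begin{pmatrix} 2 & -2 \\ -2 & 2 \end{pmatrix},$$
the affine A_1 Cartan matrix. A choice R = v_0 R_0 ⊕ v_1 R_1 then gives gauge group U(v_0) × U(v_1) with two hypermultiplets in the (v_0, v̄_1); for v_0 = v_1 = 1 this reduces to the doubly charged U(1) theory discussed below.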
The simplest case to interpret is N copies of the regular representation, v_i = N n_i. The Higgs branch of the moduli space for generic ζ is the symmetric product (M_ζ × R^5)^N / S_N, the positions of N independent D0-branes in space-time. Following [9], these are interpreted as Kaluza-Klein states of M theory on M_ζ × R^7 with momentum p_11 = 1/R_11. Consistency of this interpretation predicts bound states with p_11 = N/R_11 corresponding to all partitions of this momentum among up to N particles.
Along with the bulk supergravity fields, we expect additional bound states localized near the fixed point; in particular the BPS states which come from the decomposition
$$C = \sum_i A^{(i)} \wedge \omega^{(i)}, \quad (2.3)$$
where the ω^(i) are the normalizable harmonic forms on the ALE, together with their supersymmetry partners forming a full gauge multiplet. The simplest case in which to check this is Γ = Z_2, v_1 = v_2 = 1. This is U(1)^2 gauge theory, but the diagonal U(1) decouples, and the non-trivial dynamics is that of U(1) gauge theory coupled to two hypermultiplets of charge 2. This is a system for which the arguments of [21,22,2] establish the existence of bound states; it is identical (up to the unit of charge) to the system of a D0-brane in the presence of two D4-branes and, using string duality, has the same bound states as two D0-branes and a single D4-brane. Thus one predicts a "new" bound state and an additional set of BPS states in the symmetric product of two single 0-4 bound states. The new bound state will be interpreted as the p_11 = 1/R Kaluza-Klein mode of the U(1) gauge multiplet, while the others could be interpreted as the product of a bound state with (v_1, v_2) = (1, 0) and one with (v_1, v_2) = (0, 1). Now there was no consistency condition in string theory requiring all v_i equal, and the interpretation of more general states is briefly described in [14] (it was known to the authors of [13] but not mentioned there): taking a single v_i = 1 and the rest zero produces a D2-brane wrapped around a non-trivial two-cycle of the ALE. In string theory, the test of this is to check that it is a source of the twist-sector RR field, the orbifold realization of the fields (2.3). This is shown in the appendix to [13], for Γ = Z_n.
The representations R_i are associated with homology two-cycles σ_i in M_ζ, and R_0 is associated with σ_0 = −Σ_{i≥1} n_i σ_i. The intersection form σ_i ∪ σ_j is the extended Cartan matrix Ĉ_ij [12]. This leads to a further association of R_i and σ_i with the simple roots α_i and the lowest root α_0 of G; these translate directly into the charges Q_i = α_i of the r + 1 elementary wrapped two-branes.
The coupling to the untwisted sector is universal, so all of these D0-branes have mass 1/(c_2 g_s), where c_2 = Σ_i n_i is the Coxeter number of Γ. This mass is also determined by the central charge formula [10] to be
$$m_\sigma = \frac{1}{g_s}\left|\int_\sigma (B + iJ)\right|, \quad (2.4)$$
where J is the Kähler form with respect to the complex structure for which σ is a holomorphic curve (i.e. with ∫_σ Ω = 0). Matching this for ζ = 0 determines the background B for the orbifold.* For Γ = Z_2, it is B = 1/2 as found in [16]. For Z_n, the cycles σ_i, i > 0, associated with the simple roots have B = 1/n. This corresponds to a non-zero C_{11μν} in eleven dimensions and thus to an SU(n) Wilson line (2.5), with σ = (n − 1)/2. There is a strong analogy with the Wilson line breaking E_8 to SO(16) in the relation of type I' string theory to M theory [23]. Perhaps there is a general rule determining such symmetry breakings.
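As a quick consistency check of (2.4) at the orbifold point (our arithmetic, not spelled out in the original): setting ζ = 0, so that the period of J over each σ_i vanishes, gives
$$m_{\sigma_i} = \frac{1}{g_s}\left|\int_{\sigma_i} B\right| = \frac{1}{n\, g_s} \qquad (\Gamma = \mathbb{Z}_n,\; B_i = 1/n),$$
which reproduces the universal mass 1/(c_2 g_s), since c_2 = Σ_i n_i = n for Z_n; the Z_2 case recovers the B = 1/2 value of [16].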
Thus the v_i = 1 states provide the gauge bosons corresponding to the simple roots of G. They fall into 8 + 8 component supermultiplets; more explicitly, the orbifold projection retains half of the 16 components of the gaugino, transforming in the doublet of the SU(2) ⊂ SO(4) which is singlet under the orbifold projection, and in the 4 of SO(5). These act on a multiplet whose bosons are a vector of SO(5) and three scalars, the physical states of a d = 7 gauge boson and the metric fluctuations.

* The dependence of the mass of a finite wrapped D2-brane on B mod 1 is realized by the additional dependence on a world-volume gauge field F and the possibility of F ≠ 0. In the present definition, it appears to be reflected in subtleties in the D0-brane coupling to the twist-sector B similar to those found in the appendix of [13].
Their bound states must provide all of the gauge bosons, and thus we predict that a new supermultiplet of bound states exists for each root α and integer N, with p_11 = (N + ½ B·α)/R_11; in other words, for each sector with (Σ_i v_i α_i)^2 = 2. This must be true in the full IIa theory for consistency of M theory; the explicit bound state we described lends support to the conjecture that these are bound states in the pure D0 quantum mechanics.
Although enhanced gauge symmetry is spontaneously broken, it is fairly manifest. In the D-brane realization, supergravity Kaluza-Klein states and wrapped membrane states appear on an equal footing.
Turning on the moduli ζ^n_i modifies the gauge symmetry breaking (they correspond to Wilson lines A^n in the T^3 of the dual heterotic string) and the effective Hamiltonian. For |ζ| ≪ B, (2.4) has the expansion
$$m \simeq \frac{1}{g_s}\left(B + O(\zeta^2/B)\right). \quad (2.6)$$
The O(ζ^2) dependence can be seen explicitly for v_i = 1 in the D-term potential, which degenerates to V ∝ Σ_n (ζ^n)^2. For ζ ∼ 1, stringy corrections are known to be important [13].
From M theory to M(atrix) theory
Following [1], we now regard this system as the definition of M theory on M_ζ in a sector with longitudinal light-cone momentum P^− = N/R, and take the R_11 → ∞ limit. The Wilson line (2.5) disappears, and the massless charged gauge bosons at ζ = 0 are manifest. Now it is this observation which confirms the identification of the v_i = 1 states as wrapped membranes.
It is important to check the basic tenets of [1] in this context, for example that supergravity interactions between these particles are correctly reproduced by quantum open string effects. This issue will be discussed in [24].
For any |ζ| ≫ l²_{p,11} (the eleven-dimensional Planck length squared), the classical analysis of the resulting Higgs branch appears to be valid, meaning there would be no restriction on the blow-up parameter in this construction. Consistent with this, the D-term potential exactly reproduces the term m²/p_11 = ζ²/p_11 in the IMF 0-brane energy.
The construction must work for all states, not just BPS states. There are clear predictions for the states on the blowup with $\zeta \gg l_p^2$, where the conventional supergravity analysis is valid: we diagonalize the basic supergravity and membrane Hamiltonians to get higher modes in the KK expansion (2.3), and local excitations of the wrapped membranes.
However, estimating their couplings to the bulk states using the known membrane coupling $h_{\mu\nu} \partial X^\mu \partial X^\nu$ leads to the conclusion that they are unstable; thus only the full dynamics can be sensibly compared, a very interesting open problem.
Five-branes
We add a five-brane as in [3], by adding vector degrees of freedom. Different orientations will have different physics. We can put it at a point in the ALE (by starting off with images), and get a theory which should contain "tensionless strings" in the limit ζ = X = 0. Seeing these should be quite interesting but requires knowing how to construct membranes ending on the five-branes.
If we instead embed the longitudinal dimensions in an ALE, we get a piece of the five-brane wrapped around K3. This is the heterotic soliton which dominates the small K3 limit. We are treating the large K3 limit, but we must see BPS states of this soliton in any case. These are excitations of the bosonic left movers admitting unbroken $(0, 4)$ supersymmetry and the action of world-sheet current algebra, affine $\hat{G}$ at level 1. This symmetry is broken both by $\zeta \neq 0$ and, at finite $R_{11}$, by the Wilson line (2.5), but this will be realized by explicit terms in the Hamiltonian.
Physical five-brane degrees of freedom are new bound states of zero-branes. In the IMF, we identify the left and right world-sheet stress tensors accordingly. Note that we are not restricted to $L_0 = \bar{L}_0$, because we are considering a finite piece of an infinite string.
The choice of which chirality has world-sheet supersymmetry is determined by the chirality of the additional vector degrees of freedom. If we make this compatible with the unbroken supersymmetry on the orbifold, we get non-trivial supersymmetries commuting to produce $H$, so $L_0$ is the supersymmetric side (say right movers) and BPS excitations can have non-zero $\bar{N}_0$. If we make the other choice, the supersymmetry becomes trivial and $\bar{L}_0$ is the supersymmetric side. In this case, BPS states will not be realized as bound states of D0-branes.
The full gauge theory is now a D0-D4 brane system, also derived in [13] (section 5). These theories are parameterized by a set of non-negative integers $w_i$, where $\sum_i w_i$ is the total number of D4-branes. They are obtained from the pure D0-brane theories by adding $w_i$ hypermultiplets in the fundamental representation of $U(v_i)$ for each $i$. As shown in [17] (and reviewed in section 9 of [13]), the Higgs branch of moduli space is generally equivalent to a moduli space of instantons in the D4-brane gauge theory. The choice of $w_i$ translates into a choice of first Chern class in this language; a single heterotic string would have a single $w_k = 1$, with $k$ denoting a choice of sector in the world-sheet theory.
It is natural to look for D0-D4 bound states in the supersymmetric quantum mechanics on this moduli space, and thus to identify them with elements of the moduli space cohomology. Of course the moduli space approximation is not exact, and furthermore these spaces are typically singular. Harvey and Moore [18] discuss some of the issues here, and propose that the general identification will be between the Hilbert space of bound states and the complex cohomology of the moduli space of coherent simple sheaves. This generalization is particularly significant in the present case of a single D4-brane, as "$U(1)$ self-dual instantons" are at best rather singular objects.
Existing results on bound states in quantum mechanics, along with the string duality arguments of [18], all support this identification, and we will assume it here. This allows us to make use of the results of Nakajima [19] on the cohomology, and especially the celebrated Kac-Moody algebra which acts on the cohomology. For our present case of $\sum_i w_i = 1$, this will be a $\hat{G}$ action at level 1. The generators of this algebra $E_i$, $F_i$ and $H_i$ are as follows: the Cartan subalgebra $H_i$ acting on a cohomology class of the sector of moduli space characterized by integers $v_i$ has eigenvalue $v_i$; the operators $E_i$ add a single twisted D0-brane (and thus increase $v_i$ by one); the operators $F_i$ are their conjugates. These are effectively 'second quantized' operators, and their natural physical interpretation is in terms of Harvey and Moore's "correspondence conjecture" [18], defining their action on the BPS states.
We claim that this is the standard world-sheet current algebra which acts on the spectrum of a single heterotic string. One test of this is that the left-moving Virasoro generators $L_n$ must contain the Sugawara stress-tensor as one component. This requires that $P_{11}$ as defined in M theory, i.e. $(\sum_i v_i + c_k)/c_2 R_{11}$ where $c_k$ is a constant possibly depending on the sector $k$, be equal to the Sugawara $L_0$. This follows from Nakajima's results, which make $L_0$ the second Chern class of the sheaf. The constant $c_k$ is a contribution from the non-zero first Chern class present for $w_i > 0$, $i \neq 0$.
Nakajima's results also support this identification of the spectrum; in particular, it is shown that the cohomology contains all highest weight representations. However, we have not verified that the full cohomology is isomorphic to the spectrum of BPS states. Following Harvey and Moore, this must follow from IIa-heterotic string duality, because the D0 and D4 branes of the construction are sensible objects in the IIa string. Indeed, the present discussion differs from theirs mainly in that we are considering a state containing an infinitely long heterotic string rather than perturbative heterotic string states.
Conclusions
In this note we showed that Dirichlet branes on orbifolds provide a simple and explicit way to see the enhanced gauge symmetry of the IIa string and M theory on K3. The construction produces an explicit realization of world-sheet current algebra for the wrapped five-brane which becomes the dual heterotic string. Although these are not first quantized operators (they change the D0-brane number), it should be possible to describe their action fairly explicitly.
In principle, the same operators adding and removing zero-branes act on the space of pure 0-brane bound states. They will realize the subgroup of global gauge transformations, and it might be (extrapolating beyond Nakajima's results) that this is contained in a Kac-Moody algebra at level zero. The natural interpretation of such an algebra in M theory (with $X^{11}$ space-like, so before going to the IMF) would be the subgroup of gauge transformations with $X^{11}$ dependence. Such an interpretation would imply that the Kac-Moody action can be extended to all states, not just BPS states. | 2014-10-01T00:00:00.000Z | 1996-12-11T00:00:00.000 | {
"year": 1996,
"sha1": "4acaa54cead294f4f55268f1cd46cfeb79f01604",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9612126",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c04366e3ae3a927b913326c60ed48f15533861da",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
237385044 | pes2o/s2orc | v3-fos-license | Haematological abnormalities and pharmacotherapy in severe acute respiratory syndrome corona virus 2
The first case of SARS-CoV-2 (severe acute respiratory syndrome corona virus 2) was reported in Wuhan, China at the end of the year 2019. The disease presents with flu-like symptoms, but anosmia, fatigue, persistent cough and loss of appetite may collectively identify individuals with COVID-19. The aim of writing this review was to gather information about blood abnormalities and pharmacotherapy for COVID-19 as a resource for healthcare professionals. A blood workup, as well as continuous tracking of hematological changes, could reveal the risk of disease progression. Indirect indicators such as C-reactive protein (CRP), D-dimer, albumin, ferritin and LDH levels are used as markers to estimate the severity and prognosis of COVID-19 infection. The most common hematological findings include lymphocytopenia, neutrophilia, eosinopenia, mild thrombocytopenia and, less frequently, thrombocytosis. Clinical management includes prophylactic and therapeutic measures, along with supportive care such as supplemental oxygen and mechanical ventilatory support as and when indicated. Several classes of drugs, such as anti-malarial, anti-viral and anti-inflammatory drugs, are being used for the treatment and prevention of COVID-19. The target for development of most COVID-19 vaccines is the S protein of the corona virus. Various vaccines available for use across the globe are COVAX, Covishield, Moderna, Johnson and Johnson, Sputnik V, Novavax, Sinopharm, SinoVac. Serial monitoring of hematological manifestations is recommended, and the treating doctor should stay vigilant and consider proper screening. The therapeutic intention is to decrease viral load and to provide pharmacological thromboprophylaxis in high-risk patients.
…epidemiological purpose. Indirect indicators of COVID-19 such as CRP, D-dimer, albumin, ferritin and LDH levels are used to estimate the severity of infection. 4 Hematological changes are useful for checking the status of SARS-CoV-2 infection, since the hematopoietic system and hemostasis suffer significant impacts during the evolution of COVID-19. 5 The most common hematological findings include lymphocytopenia, neutrophilia, eosinopenia, mild thrombocytopenia and, less frequently, thrombocytosis. 6 Presently, the United States Food and Drug Administration (USFDA) has approved very few drugs or other therapeutics to prevent or treat COVID-19. Clinical management incorporates infection prevention, control measures and supportive care, including supplemental oxygen and mechanical ventilatory support when indicated. Several classes of drugs, such as anti-malarial, anti-viral and anti-inflammatory drugs, are being used for the treatment and prevention of COVID-19.
The aim of writing this review was to gather information about blood abnormalities and pharmacotherapy for COVID-19 as a resource for healthcare professionals.
Blood abnormalities in SARS-CoV-2
The severity of the disease and clinical outcome are indicated by the most apparent abnormalities, which are the decrease in lymphocytes and the increase in the neutrophil-lymphocyte ratio (NLR). A lower eosinophil count and its delayed rise can also be signs of a poor outcome of COVID-19. Thus, routine dynamic monitoring of peripheral blood parameters has important reference value for judging the progression and prognosis of COVID-19. 6 The disease progression of COVID-19 is demonstrated by abnormal coagulation function. One of the most consistent abnormal haemostatic laboratory markers in COVID-19 is raised D-dimers. It is an early and helpful marker to improve management of COVID-19 patients. This occurs due to activation of broncho-alveolar hemostasis in response to SARS-CoV-2. In healthy individuals, the coagulation-fibrinolysis balance of broncho-alveolar haemostasis is shifted towards fibrinolysis. This high fibrinolytic activity clears fibrin deposited in the alveolar compartment and allows uninterrupted gas exchange. 7 However, in patients who develop acute lung injury secondary to COVID-19, this balance shifts towards the procoagulant side, creating pulmonary thrombi possibly to limit viral invasion; the breakdown of these thrombi then causes an increase in D-dimers. 8 Patients with severe COVID-19 have a higher level of D-dimer than those with non-severe disease, and a D-dimer greater than 0.5 μg/ml is associated with severe infection in patients with COVID-19.
The other laboratory abnormalities noted were hypoalbuminemia, lymphopenia, neutrophilia, elevated CRP and LDH, and a decreased CD8 count. Severe disease can develop in COVID-19 patients with raised alanine aminotransferase (ALT) and aspartate aminotransferase (AST). 9 In a study conducted by Burugu et al, it was observed that serum ferritin was elevated among the COVID-19 patients who did not survive, as compared to the recovered patients. Serum ferritin concentrations could thus be used as a prognostic marker in the management of COVID-19 patients. 10 It has been reported in the literature that hypoalbuminemia is a potent, dose-dependent predictor of poor outcome. 11 Therefore, albumin therapy might be a potential remedy for NCP (novel coronavirus pneumonia).
A genomic sequence study reveals that the new coronavirus shares the ACE2 receptor of SARS-CoV, which is a critical enzyme in the renin-angiotensin system (RAS). 12 RAS plays important roles in maintaining blood pressure homeostasis and salt and fluid balance. ACE and ACE2 play different roles in RAS; ACE generates angiotensin II, whereas ACE2 is a negative regulator of the system, decreasing angiotensin II. An abnormal increase of angiotensin II has been reported mostly in association with hypertension and heart failure, and sometimes also with lung and renal dysfunction. 13 The plasma levels of angiotensin II in COVID-19 patients were considerably higher than those of healthy individuals and were strongly associated with viral load and lung injury, suggesting that the imbalanced RAS in patients was caused by SARS-CoV-2 and that angiotensin converting enzyme inhibitors (ACEI) and angiotensin receptor blockers (ARB), which rebalance the RAS, may be repurposed for SARS-CoV-2 infected patients. 14 Other common findings in COVID-19 patients are elevated CRP and a decreased lymphocyte count, as well as increased LDH. A study conducted by Deng et al showed that CRP, ALT, AST and creatinine levels were higher in the death group compared to the recovered group at the time of admission, and that CRP levels remained high during the progression of the disease. 15 Raised CRP might be an early marker to anticipate the risk of severe COVID-19. A meta-analysis of four published studies showed that increased procalcitonin values were associated with a nearly 5-fold higher risk of severe infection. 16 Higher serum ferritin and increased IL-6 levels have been associated with an increased risk of death in COVID-19 patients. 17 The main pathophysiology of SARS-CoV-2 infection in severe cases could be hypercytokinemia. A hyperinflammatory syndrome called secondary hemophagocytic lymphohistiocytosis (sHLH) is usually activated by viral infections. 18 A cytokine profile similar to sHLH is seen in patients with severe COVID-19, as demonstrated by enhanced TNF-α, IL-7, IL-2, granulocyte-colony stimulating factor (GCSF), monocyte chemoattractant protein 1 and macrophage inflammatory protein 1-α. 19
Lymphopenia
Several studies confirmed that raised pro-inflammatory cytokines play a vital role in the induction of lymphopenia. Accordingly, hypercytokinemia contributes to lymphopenia, leaving the host unable to guard against SARS-CoV-2 infection. 18 Various mechanisms might work together to cause lymphopenia. SARS-CoV-2 might directly attack the lymphocytes or destroy lymphoid organs. Lymphopenia could also be due to elevated blood lactic acid levels in patients with a severe phenotype of COVID-19. 20 In a study conducted by Bhandari et al, 52.38% of patients presented with lymphopenia, and it was more common in male patients than in female patients. 21 Lymphopenia was observed in 80% of symptomatic patients compared to 11.5% of asymptomatic patients in the study conducted by Sharma et al. 22 The prevalence of lymphopenia was 83.20% in a study conducted by Guan et al. 23

Thrombocytopenia

Thrombocytopenia is also one of the most important hematological findings in COVID-19 patients. 24 The proposed mechanisms for thrombocytopenia are: inhibition of platelet synthesis by direct infection of bone marrow cells by the virus; destruction of bone marrow progenitor cells by the cytokine storm, leading to decreased platelet production following virus infection; lung injury indirectly resulting in reduced platelet synthesis; destruction of platelets by the immune system; and aggregation of platelets in the lungs forming microthrombi, with platelet consumption. 25 Thrombocytopenia was seen in 23.81% of patients in a study conducted by Bhandari et al, and in 40% of symptomatic and 6% of asymptomatic patients in a study conducted by Sharma et al. 21,22 In the study of Guan et al, 36.20% of patients showed thrombocytopenia. 23

Procalcitonin (PCT) has arisen as a promising prognostic biomarker in COVID-19. Various studies have upheld the view that PCT levels are below the optimal cut-off in COVID-19 and that any considerable increase from baseline reflects the development of a critical state. 26 Increased PCT and hypersensitivity C-reactive protein (hs-CRP) are strongly suggestive of secondary bacterial infection, as bacteria attack the already fragile immune system. In multivariable regression analysis, it has been shown that patients had a poor prognosis when hs-CRP was greater than 86.7 mg/l. 27
Pharmacotherapy in SARS-CoV-2
No definitive therapy exists for COVID-19. The aim of therapeutic intervention is to reverse hypoxaemia and provide adequate organ support, and also to reduce viral load and thus halt disease progression.
Anti-malarials
In-vitro studies have revealed that chloroquine and hydroxychloroquine cause alkalinisation of the intracellular phagolysosome, which prevents virion fusion, uncoating and viral spread, thus inhibiting SARS-CoV-2 transmission. 28 Chloroquine has been found to have immunomodulatory effects through the suppression of TNF-α and IL-6 release, which can prevent the cytokine storm that leads to rapid deterioration of patients with COVID-19. 29 In a study conducted by Gao et al, chloroquine and hydroxychloroquine were found to be superior to the control treatment in inhibiting the exacerbation of pneumonia, improving lung imaging findings, promoting virus-negative conversion and shortening the disease course. 30
Anti-virals
Remdesivir is a nucleotide analogue which is incorporated into the viral RNA chain, resulting in premature chain termination. 31 Remdesivir may be considered in patients with severe disease and respiratory failure. It cannot be used in conjunction with hydroxychloroquine due to an increased risk of QT prolongation and fatal dysrhythmias. 32 A study showed greater clinical improvement in hospitalized patients with severe COVID-19 who were treated with remdesivir. 33 It is not currently FDA-approved to treat or prevent any diseases, including COVID-19. Under the revised emergency use authorization (EUA), remdesivir is authorized for emergency use by healthcare providers for the treatment of suspected or laboratory-confirmed COVID-19 in all hospitalized adult and pediatric patients, irrespective of their severity of disease. 34 The nucleoside analog favipiravir acts by inhibiting viral RNA polymerase and was initially used for the treatment of RNA viruses such as ebola and influenza. 35 The most common adverse effects are hyperuricemia, abnormal transaminases, psychiatric symptoms and gastrointestinal discomfort such as diarrhea, nausea and vomiting. 36 A public notice dated 21 June 2020 issued by the CDSCO states that, "considering the emergency and unmet medical need for COVID-19 disease", the CDSCO has approved restricted emergency use of remdesivir injectable formulations for treatment of patients with severe COVID-19 infection and of favipiravir tablets for mild to moderate COVID-19 infection, subject to various conditions and restrictions. 37 An open-label, non-randomized trial of 80 patients with COVID-19 in China identified a significant reduction in the time to SARS-CoV-2 viral clearance in patients treated with favipiravir compared with historical controls treated with lopinavir-ritonavir. 38
Anti-parasitics
Ivermectin is an FDA-approved broad-spectrum antiparasitic agent that has been demonstrated to have anti-viral activity against a broad range of viruses in vitro. 39 Ivermectin acts by inhibiting the host importin alpha/beta-1 nuclear transport proteins, which are part of a key intracellular transport process that viruses hijack to suppress the host's antiviral response; it also interferes with the attachment of the SARS-CoV-2 spike protein to the human cell membrane. 40 Ivermectin in the dose of 12 mg BD, alone or in combination with other therapy, for 5 to 7 days may be considered a safe therapeutic option for mild, moderate or severe cases of COVID-19 infection. It is cost effective, especially when the other drugs are very costly and not easily available. 41 An observational propensity-matched case-controlled study conducted by Patel et al showed an association of ivermectin use with lower in-hospital mortality. 42
Corticosteroids
Methylprednisolone and dexamethasone have potent anti-inflammatory activity. They bind to cytoplasmic receptors to change the transcription of mRNA and reduce the production of inflammatory mediators. Dexamethasone reduces mortality by one-third in mechanically ventilated patients hospitalized with severe COVID-19 and by one-fifth in patients requiring oxygen without mechanical ventilation. The drug did not improve survival in patients not requiring respiratory support. 43
Anti-coagulants
Severe COVID-19 is commonly complicated by coagulopathy, and disseminated intravascular coagulation (DIC) may exist in the majority of deaths. Anticoagulants may not benefit unselected patients; instead, only patients meeting sepsis-induced coagulopathy criteria or with markedly elevated D-dimer may benefit from anticoagulant therapy, mainly with low molecular weight heparin. 44
Fibrinolytics
COVID-19 has caused thrombotic coagulopathy and respiratory failure in extraordinary numbers, and pulmonary microvascular thrombosis is particularly prominent in COVID-19 respiratory failure. t-PA fibrinolytic therapy has been shown to be effective in decompensating patients, and such an approach could be rapidly widened globally due to t-PA's availability at most medical centers. 4
Statin therapy
Guideline-directed continuation of statin therapy should be recommended among COVID-19 patients with a history of atherosclerotic cardiovascular disease or diabetes. However, de novo initiation of statin therapy for the management of a COVID-19 episode should be done only as part of a clinical trial, not routinely. 46
Non steroidal anti-inflammatory drugs (NSAIDs)
NSAIDs act by inhibiting cyclooxygenase 1 and 2, thus blocking the production of prostaglandins, which are important mediators of fever and inflammation. The WHO declared that there is no evidence of severe adverse events, acute health care utilization, or effects on long-term survival or quality of life in patients with COVID-19 with the use of NSAIDs. 47
Pegylated interferon alfa-2b
Recently this drug has received restricted emergency use approval from the Drug Controller General of India (DCGI) for the treatment of moderate COVID-19 infection in adults. It has direct inhibitory effects on viral replication and supports an immune response to clear the viral infection. In a multi-center, randomized, open-label clinical trial, it showed a reduced need for supplemental oxygen, indicating that it was able to control respiratory distress and failure, which has been one of the major challenges in treating COVID-19. 48
2-deoxy-D-glucose (2-DG)
This new anti-COVID oral drug has been developed by the Defence Research and Development Organisation's leading laboratory, the Institute of Nuclear Medicine and Allied Sciences, in alliance with Dr Reddy's Laboratories. The 2-DG drug was recently granted emergency use approval by the DCGI as an adjunct therapy in moderate cases of COVID-19. Like glucose, this drug spreads through the body, reaches the virus-infected cells and prevents virus growth by halting viral synthesis and disrupting the cells' energy production. The drug also acts on virus spread into the lungs, which helps to reduce patients' dependence on oxygen. 49
Monoclonal antibody
Tocilizumab is a recombinant monoclonal antibody against IL-6 receptors; IL-6 is implicated in the immunologic response in patients with cytokine-release syndrome (CRS). Increased levels of IL-6 have been associated with hyperinflammatory states and CRS in severe COVID-19 cases and can potentially lead to increased rates of mortality. 50 Patients who develop evidence of COVID-19 associated CRS may be treated using this agent. In a recent study it was found that acute phase reactant levels decreased and patients reached a stable condition, reflected by a later gradual decrease of IL-6 after tocilizumab administration. 51
Passive immunity
Convalescent plasma, donated by persons who have recovered from COVID-19, is the acellular component of blood that contains antibodies, including those that specifically recognize SARS-CoV-2. These antibodies, when transfused into COVID-19 patients, exert an antiviral effect by suppressing virus replication before patients have mounted their own immune responses.
COVID-19 vaccines
Several approaches to COVID-19 vaccines are currently being evaluated. Vaccine efficacy is mainly determined by the vaccine target and platform. Among all platform technologies, whole-virus vaccines such as live-attenuated and killed whole-virus vaccines, subunit vaccines, plasmid-based DNA vaccines, RNA replicons and virus-like particles have been developed to induce protective responses to viral infections. 53 The target for development of most COVID-19 vaccines is the S protein of the corona virus. Various vaccines available for use across the globe are COVAX, Covishield, Moderna, Johnson and Johnson, Sputnik V, Novavax, Sinopharm, SinoVac (Table 1). 54
Supplements and immune boosters
Supplements like ascorbic acid, zinc and vitamin D can be used in the treatment of COVID-19 patients. The powerful antioxidant ascorbic acid helps to protect against damage induced by oxidative stress. Zinc is a vital component of white blood corpuscles, which combat infections. Respiratory infections can be prevented and pulmonary function can be improved when vitamin D-deficient patients are supplemented with vitamin D. 55 | 2021-09-01T15:12:15.007Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "bbf0a3a72b8497b86444de5428a8d0e06e5cf02a",
"oa_license": null,
"oa_url": "https://www.ijbcp.com/index.php/ijbcp/article/download/4694/3241",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "298d62a3df89bb6934e68bfeabe2fdac2f41abd6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
1177149 | pes2o/s2orc | v3-fos-license | Unbounded Software Model Checking with Incremental SAT-Solving
This paper describes a novel unbounded software model checking approach for finding errors in programs written in the C language, based on incremental SAT-solving. Instead of using the traditional assumption-based API of incremental SAT solvers, we use the DimSpec format that is used in SAT-based automated planning. A DimSpec formula consists of four CNF formulas representing the initial, goal and intermediate states and the relations between each pair of neighboring states of a transition system. We present a new tool called LLUMC which encodes the presence of certain errors in a C program into a DimSpec formula, which can be solved by either an incremental SAT-based DimSpec solver or the IC3 algorithm for invariant checking. We evaluate the approach in the context of SAT-based model checking for both incremental SAT-solving and the IC3 algorithm. We show that our encoding expands the functionality of bounded model checkers by also covering large and infinite loops, while still maintaining feasible time performance. Furthermore, we demonstrate that our approach offers the opportunity for runtime optimizations by utilizing parallel SAT-solving.
Introduction
Software has become an important part of almost all modern technical devices, such as cars, airplanes, household appliances, therapy machines, and many more. The cars of tomorrow will drive on their own but will be controlled by software. As shown by serious accidents like the rocket crash of Ariane flight 501 [25], the massive overdoses of radiation generated by the therapy machine Therac-25 [24] or the car crash of the Toyota Camry in 2005 [23], software is never perfect; it almost always contains errors and bugs. While testing of software can only cover a limited number of program executions, software verification can guarantee a much higher coverage while producing proofs for the existence or absence of errors. There exist several different software verification approaches, such as symbolic execution [21] and bounded model checking [13]. Bounded model checking inlines function calls and unrolls loops a finite number of times, say k times, where k is called the bound of the program. This unrolling reduces the complexity of the problem to a feasible level, though it limits the coverage and precision of these approaches. (This work was supported by the Baden-Württemberg Stiftung project HIVES.)
To extend the functionality of bounded model checkers, we developed a novel unbounded model checking approach. To this end, we removed the bound that limits all bounded model checkers and created a transition system that is traversed by an incremental SAT-solver or an invariant checking algorithm. We focus on sequential programs written in C and use the low-level code representation of the compiler framework LLVM as an intermediate language.
Based on this representation we derived an encoding of the program verification task into a DimSpec formula. A DimSpec formula uses four CNF formulas to specify a transition system and is often used in SAT-based automated planning. We first encode the program into an SMT formula and, subsequently, we generate the SAT-problem in DimSpec format. The resulting DimSpec formula is then solved either by an incremental SAT-solver that unrolls the transition system to find a transition path to the error state or by an invariant checking algorithm that refines over-approximations of the states reachable on the way to the error state.
Our verification system uses Clang and LLVM version 3.7.1 to compile C-code into the LLVM intermediate language. Then our new tool LLUMC (Low-Level-Unbounded-Model-Checker) generates DimSpec formulas representing the presence of certain errors in the program. To solve the generated formulas [18] we use either the incremental SAT-solver IncPlan [7] or the invariant checking algorithm implemented in the solver MinireachIC3 [8]. LLUMC was inspired by the bounded model checker LLBMC [27] but runs independently. Our evaluation is based on the Software Verification Competition (SV-Comp) and shows the correctness and feasibility of our approach. LLUMC is available online at [3].
Preliminaries
We assume the reader is familiar with propositional logic, first-order-logic and SAT and use definitions and notations standard in SAT. This section will introduce incremental SAT-solving and describe the theory of bit-vectors in the context of SMT-solving. Furthermore, the software bounded model checking approach is briefly described.
Incremental SAT-Solving In the assumption-based interface [16], two methods are used: add(C) and solve(A), where C is a clause and A a set of literals called assumptions. All clauses can be added with the add method, and their conjunction can then be solved under the condition that all literals in A are true by solve(A). To add a removable clause C, we add (C ∨ a), where a is an unused variable. The clause is only relevant if we add the literal ¬a (called the activation literal) to the assumptions A. If the activation literal is not added to the assumptions, C is essentially removed from the set of clauses. DimSpec Formulas A DimSpec formula represents a transition system with states $t_0, t_1, \dots, t_k$, where each state is a full truth assignment on n Boolean variables $x_1, \dots, x_n$. It consists of four CNF formulas: I, U, G and T, where I are the initial clauses, i.e., clauses satisfied by $t_0$, G are goal clauses satisfied by the final state $t_k$, the U clauses are satisfied by each individual state $t_i$, and finally the transitional clauses T are satisfied by each pair of consecutive states $t_i, t_{i+1}$. The clause sets I, U, G contain the variables $x_1, \dots, x_n$ and T contains $x_1, \dots, x_{2n}$. Testing whether the goal state is reachable from the initial state within k steps is equivalent to checking whether the following formula $F_k$ is satisfiable:
$$F_k = I(0) \wedge \bigwedge_{i=0}^{k-1} \big( U(i) \wedge T(i, i+1) \big) \wedge U(k) \wedge G(k),$$
where $I(i)$, $G(i)$, $U(i)$ and $T(i, i+1)$ denote the respective formulas where each variable $x_j$ is replaced by $x_{j + i \cdot n}$. One way to find the smallest number of steps to reach the goal state from the initial state is to solve $F_1, F_2, \dots$ until a satisfiable formula is reached. An efficient way to implement this is to use an incremental SAT solver with the assumption-based interface via the following steps: first add $I(0) \wedge U(0)$ together with the goal clauses $G(0)$ guarded by an activation literal, and solve under the assumption that activates $G(0)$; then, for each $k = 1, 2, \dots$, add $T(k-1, k) \wedge U(k)$ and the guarded goal clauses $G(k)$, permanently deactivate $G(k-1)$, and solve under the assumption activating $G(k)$.
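This loop might be realized along the following lines. This is only a minimal sketch: the Cnf and Clause types, the add_shifted helper and the act_base parameter (a variable range reserved for activation literals, assumed disjoint from all unrolled variables) are our own illustrative inventions, while the ipasir_* calls follow the IPASIR interface mentioned in Section 4.

```c
#include <stdlib.h>
#include "ipasir.h"

typedef struct { int *lits; int len; } Clause;
typedef struct { Clause *cls; int count; } Cnf;

/* Add all clauses of f with every variable x_j shifted to x_{j+step*n};
 * if act != 0, each clause C is added as (C v act), i.e., removable. */
static void add_shifted(void *s, const Cnf *f, int n, int step, int act) {
    for (int i = 0; i < f->count; i++) {
        for (int j = 0; j < f->cls[i].len; j++) {
            int lit = f->cls[i].lits[j];
            int v = abs(lit) + step * n;
            ipasir_add(s, lit > 0 ? v : -v);
        }
        if (act) ipasir_add(s, act);
        ipasir_add(s, 0);                          /* terminate the clause */
    }
}

/* Returns the smallest k such that F_k is satisfiable; does not
 * terminate if the goal state is unreachable. */
int solve_dimspec(const Cnf *I, const Cnf *U, const Cnf *T, const Cnf *G,
                  int n, int act_base) {
    void *s = ipasir_init();
    add_shifted(s, I, n, 0, 0);                    /* I(0) */
    for (int k = 0; ; k++) {
        add_shifted(s, U, n, k, 0);                /* U(k) */
        if (k > 0) add_shifted(s, T, n, k - 1, 0); /* T(k-1, k) */
        int act = act_base + k;
        add_shifted(s, G, n, k, act);              /* G(k), guarded by act */
        ipasir_assume(s, -act);                    /* activate G(k) */
        if (ipasir_solve(s) == 10) {               /* 10 = SAT, 20 = UNSAT */
            ipasir_release(s);
            return k;
        }
        ipasir_add(s, act); ipasir_add(s, 0);      /* permanently drop G(k) */
    }
}
```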
This algorithm works only if the goal state is reachable from the initial state; otherwise it does not terminate. A more sophisticated approach that can detect unreachability is described next. IC3 algorithm A different approach to solving the reachability problem of a transition system is described in [14] and implemented in the tool IC3 (Incremental Construction of Inductive Clauses for Indubitable Correctness). Given a transition system S and a safety property P, the algorithm can prove that P is S-invariant, meaning that with respect to S the property P is true in all reachable states, or it produces a counterexample. IC3 incrementally refines a sequence of formulas $F_0, F_1, \dots, F_k$ that are over-approximations of the set of states reachable in at most k steps. It can extend the formula sequence in major steps that increase k by one. In minor steps the algorithm refines the approximations $F_i$ with $0 \le i \le k$ by conjoining clauses to $F_0, \dots, F_j$ with $0 \le j \le k$. Given a finite transition system S and a safety property P, the IC3 algorithm terminates and returns true iff P is true in all reachable states of S [14]. The IC3 algorithm was implemented and adjusted to the DimSpec format in the tool MinireachIC3 by Suda [8].
Satisfiability Modulo Theories (SMT) Due to quantifiers, first-order logic is generally undecidable, but there are numerous decidable subsets. The problem of solving those subsets or theories is called satisfiability modulo theories or SMT. There is a lot of research on various theories; there are, for example, the theories of arrays, bit-vectors, floating points, heaps, linear arithmetic and many more. These theories can be seen as restrictions on possible models of first-order-logic formulas [26]. In this paper, we will restrict ourselves to the theory of bit-vectors. SMT was standardized by the SMT-LIB initiative [9]. We will use the same notations, especially when referring to SMT functions defined in the different theories. Such an SMT-LIB function could for example be $\mathit{bvadd}(b_1, b_2)$, describing the addition of two bit-vectors $b_1$ and $b_2$. A more complex function is called if-then-else (ite) and is defined by $\mathit{ite}(c, x, y) = x$ if the condition c is true and y otherwise. We refer to the theory of fixed-size bit-vectors defined by the SMT-LIB standard in [9]. The theory of bit-vectors models finite bit-vectors of length n and operations on these vectors in first-order logic. The set of function symbols contains standard operations on bit-vectors, as for example the addition, multiplication, unsigned division, bit-wise and, bit-wise or, bit-wise exclusive or, left shift, right shift, concatenation, and extraction of bit-vectors.
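As a small illustration of these fixed-size semantics (our own toy example, not taken from the SMT-LIB standard), 8-bit vectors can be simulated with unsigned machine integers; bvadd then wraps modulo $2^8$ and ite selects between its branches:

```c
#include <stdint.h>
#include <stdio.h>

/* bvadd on 8-bit vectors: addition modulo 2^8 */
static uint8_t bvadd8(uint8_t a, uint8_t b) { return (uint8_t)(a + b); }

/* ite(c, t, e): t if the condition holds, e otherwise */
static uint8_t ite8(int c, uint8_t t, uint8_t e) { return c ? t : e; }

int main(void) {
    printf("%u\n", bvadd8(200, 100));          /* prints 44 = 300 mod 256 */
    printf("%u\n", ite8(200 > 100, 1, 0));     /* prints 1 */
    return 0;
}
```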
Software Bounded Model Checking The general idea of bounded model checking (BMC) is to encode the states of a system and the transitions between them. Furthermore, all loops and function calls are unrolled k times. The number k is called the bound and is the reason for the decidability of bounded model checking, but also for its limitations. After the unrolling and encoding of the program, a formula that represents the negation of a desired property is added, and the formula is solved with an SMT or SAT-solver. If the solver finds a model for the formula, the approach has found an error and the model can be used as a counterexample. The loop bound can be increased step by step until a fixed bound k is reached. Thus, the counterexample is always minimal and easier to comprehend for a user. The question of up to which bound the loop should be unrolled is complex and is further discussed for example by Biere et al. [13].
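To make the unrolling concrete, consider the following toy example (our own illustration, not drawn from any particular tool): a loop unrolled with bound k = 2, where the final assertion plays the role of an unwinding assertion checking that the chosen bound was sufficient.

```c
#include <assert.h>

int original(int n) {
    int x = 0;
    while (x < n) x++;          /* the loop to be unrolled */
    return x;
}

int unrolled_k2(int n) {
    int x = 0;
    if (x < n) {                /* copy of iteration 1 */
        x++;
        if (x < n) {            /* copy of iteration 2 */
            x++;
            assert(!(x < n));   /* unwinding assertion: bound k=2 suffices */
        }
    }
    return x;
}
```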
Bounded model checking is implemented for example in the tool LLBMC (Low-Level-Bounded-Model-Checker). It was developed at the research group "Verification meets Algorithm Engineering" at the KIT with the aim of verifying safety-critical embedded systems [26]. To support large parts of the C and C++ languages it uses the compiler framework LLVM as its foundation. With its algorithm, LLBMC is able to produce very positive results and has earned a number of gold, silver and bronze medals in the Software Verification Competition (SV-Comp), which we will describe and refer to in our evaluation in Section 4. We will use LLBMC as a state-of-the-art reference to compare with our approach.
LLVM Representation LLVM is an open source compiler framework project that consists of a "collection of modular and reusable compiler and tool-chain technologies" [1]. It supports compilation for a wide range of languages and is known for its research friendliness and good documentation. Working directly on C-code is very complex, and it is nearly impossible to support all features and libraries. Thus, we use the intermediate language of LLVM, which describes the statements more directly and provides a number of optimizations and simplifications. We define an LLVM-module bottom up. The smallest executable unit is called an instruction. An instruction is an atomic unit of execution that performs a single operation. A basic block is a linear sequence of program instructions having one entry point and one exit point. It may have many predecessors and many successors and may be its own successor. The last instruction of every basic block is called its terminator. Every basic block is part of a function. A function $(n, B, e)$ is a tuple of a name n, a sequence of basic blocks $B = (b_0, b_1, \dots, b_m)$, and an entry block $e \in B$. Hereinafter, we will denote the main function of every program by $f_{main}$. A module $m = (F_m, G_m)$ is a pair of a set of function symbols $F_m$ and a set of global variable symbols $G_m$.
To optimize our encoding, we run some predefined optimization passes from LLVM and LLBMC on the generated LLVM-module. Among other things, these optimizations remove undefined behavior in C-code, promote memory references to register references and inline the program into one main function. These optimizations are described in more detail in [22]. The resulting LLVM-module is then used as input for our encoding.
LLUMC Encoding
A bug or error in a software program is a well-known notion, but there exists no universal definition. A general concept is that a program has an error if it does not act according to its specification. For our approach this definition is not specific enough. We will not cover all possible errors but concentrate on two main properties. One of them is the occurrence of an undefined overflow for the signed arithmetic operations addition, subtraction, multiplication and division. We define undefined overflows independently of the variable type and thus independently of the bit-vector length representing the variable. Let v be a variable in two's complement and let $\ell$ be the bit-length of v; then $max_v$ returns the maximal value for v, $2^{\ell-1} - 1$, and $min_v$ returns the minimal value $-2^{\ell-1}$. In the C language unsigned overflows are defined by a wrap-around. The addition of two unsigned integers $x_u$ and $y_u$ is, e.g., defined modulo max_int: $x_u + y_u = (x_u + y_u) \bmod (\text{max\_int} + 1)$.
Thus, we can consider undefined overflows solely on signed variables.
Definition 1 (Undefined Overflow). Let $x_s, y_s$ be signed variables of length $\ell$; then an undefined overflow occurs if
1. $x_s + y_s > max_\ell$,
2. $x_s - y_s < min_\ell$,
3. $x_s \cdot y_s > max_\ell$ or $x_s \cdot y_s < min_\ell$,
4. $x_s \div y_s$ with $x_s = min_\ell$ and $y_s = -1$.
A runnable sketch of these checks is given below.
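As a concrete illustration (our own sketch, not LLUMC code), the addition and division conditions of Definition 1 can be checked at runtime for 32-bit values. The unsigned detour keeps the check itself free of the undefined behavior it detects, and the sign-bit comparison mirrors the SMT overflow condition used later in the encoding.

```c
#include <stdbool.h>
#include <stdint.h>

/* Signed addition overflow (covers both x+y > max and x+y < min):
 * signs equal and result sign different => overflow. */
bool add_overflows(int32_t x, int32_t y) {
    uint32_t r = (uint32_t)x + (uint32_t)y;   /* defined wrap-around */
    bool xneg = x < 0, yneg = y < 0;
    bool rneg = (r >> 31) != 0;               /* sign bit of the result */
    return (xneg == yneg) && (xneg != rneg);
}

/* Case 4 of Definition 1: the only overflowing signed division. */
bool div_overflows(int32_t x, int32_t y) {
    return x == INT32_MIN && y == -1;
}
```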
The other property of our error definition regards calls to assume and assert. A program acts according to its specification if the assert statements are true under the condition that the assume conditions are met. If an assume condition is not met, the further run of the program is not specified and thus no errors can occur. With these two properties in mind, we can define the term error for our approach.
Definition 2 (Program Error in LLUMC). Let p be a program. Then there exists an error in p if all calls to assume that are prior to an assert statement or a possible overflow evaluate to true and one of the following holds.
1. An assertion is false: a call to assert with the parameter false. 2. The occurrence of an undefined overflow for an arithmetic operation.
Of course, there are other errors that can happen during a program execution like irregular bit-shifting, non-termination and many more. These errors can be regarded in future work and for the remainder of this paper the expression "error" is equated with the above definition.
To find these errors we regard an LLVM-module as stated in Section 2. After inlining all function calls, we can concentrate on just the main function. Every basic block together with its variable assignment can be seen as a state. We then add a special error state and try to find a path from the entry state, defined by the entry block of the main function, to the error state. Therefore, we first define the state space of our encoding.
State Space A transition from one state to the next will always represent the transition from one basic block to the next with respect to its current variable assignment. Often this kind of encoding is called small block encoding [11]. According to the theory of bit-vectors, we define every state variable as a bit-vector of length n. The number of bit-vectors in the state, including the bit-vectors representing the current and previous basic block, defines the number of SMT variables that are needed to encode the state, and the total number of bits represents the number of CNF variables needed.
The focus on the theory of bit-vectors allows us to ignore the state of the main memory and to concentrate on the LLVM-module itself. First of all, every state has to save the current basic block. Hereinafter |B| denotes the number of basic blocks of the main function. For our encoding we need two additional blocks. The ok block represents a safe state from which on no more errors can occur. This block is reached when the program terminates with the output 0 or when an assume condition is not met. The second block is called error and is our goal state, representing that an error occurred. With the function $enc(bb) : BasicBlock \to \mathbb{N}$ we uniquely map every basic block to a natural number. If there are |B| basic blocks in main, then the bit-vector needs to have the length $\lceil \log_2(|B|+2) \rceil$ to encode the current basic block. We call this variable $curr = curr_1, curr_2, \dots, curr_{\lceil \log_2(|B|+2) \rceil}$, for Boolean variables $curr_i$.
In LLVM the value of a register can depend on the previous basic block and must thus also be encoded: $pred = pred_1, pred_2, \dots, pred_{\lceil \log_2(|B|+2) \rceil}$. Furthermore, we need to save the current variable assignment. We do not need the assignment of all variables, but should concentrate on those that will be accessed later on and cannot be optimized away. Those variables can be classified by two properties, and we call the set of those variables V: 1. variables that are used in more than one basic block, and 2. variables that are read before being written in the same basic block, which is part of a loop.
It is enough to add only those variables to the state space, because all other variables are included during the encoding of the enclosing basic block and their value is not directly used for a transition step. The length of the variables depends on their type. The standard integer in C has a width of 32 bits, long has 64, and Boolean values have a width of 1. There are other types, but their length is always specified by LLVM and can thus be easily extracted.
Definition 3 (State).
The state space is the set of bit-vector variables $StateSpace = \{curr, pred\} \cup V$. Every variable of the state space has a fixed bit-length $\ell$ and can take on concrete bit-vectors of length $\ell$ as values. For a specific time point k the state $state(k)$ is the assignment of concrete bit-vectors to every variable.
Encoding to SMT Our aim is to develop an encoding for an LLVM-module as defined in Section 2 that fits the DimSpec format. Therefore, we must define the four CNF formulas $\{I, G, U, T\}$ in such a way that if there exists a transition from I to G defined by T and restricted by U, then there exists an error in the given program code. The initial formula I can be created by encoding the entry block of the LLVM-module. Due to the restriction to the theory of bit-vectors, global variables are not regarded, because they always involve a memory access. The encoding has to represent the state that we are currently at the first basic block and that there were no prior actions. We declare the entry block itself as the predecessor to exclude any prior actions. The initial formula is thereby time-independent, because the entry block is the same for every time step.
Definition 4 (Encoding of Initial Formula). Let entry be the name of the first block; then the initial formula $I(k)$ for the LLVM-module and for $k \in \mathbb{N}$ is defined as $I(k) = (curr = enc(entry)) \wedge (pred = enc(entry))$. The encoding of the goal formula G is also time-independent and can be defined accordingly.
Definition 5 (Encoding of Goal Formula). Let error be the name of the error block; then the goal formula $G(k)$ for the LLVM-module and for $k \in \mathbb{N}$ is defined as $G(k) = (curr = enc(error))$.
The universal formula consists of constraints that have to be true in all states. In our case, these are boundaries for the variables curr and pred. In the previous section, the number of bits needed to encode the current and previous basic block was derived from the $|B| + 2$ block labels. In most cases $|B| + 2$ is not a power of two, and thus bigger numbers can be represented. These numbers must be excluded at all times in the universal formula U.
Definition 6 (Encoding of Universal Formula). Let |B| be the number of basic blocks in the LLVM-module; then the universal formula $U(k)$ for $k \in \mathbb{N}$ is defined as $U(k) = (curr < |B| + 2) \wedge (pred < |B| + 2)$.

At last, we have to define the transition formula. It represents the transition between state k and state k + 1. It is important to notice that the transition formula has twice as many variables as the other formulas. To distinguish between the variables at time point k and at k + 1, every variable v of our state space is called v′ at time point k + 1. Otherwise, every transition formula would evaluate to false and thus no transition step could ever be taken. In general, the encoding of one transition has the form

$$state(k) \implies state(k+1). \quad (2)$$

We call $state(k)$ the antecedent and $state(k+1)$ the consequent. For each $state(k)$ that is reachable from our initial state, a transition must be defined. An undefined transition leads to an undefined $state(k+1)$ with arbitrary values. Thus, if there is a reachable, undefined transition, all goal states can be reached. For the same reason, we determine that for each $state(k)$ the transition must be explicit. Variables that are not important for the transition should not be declared in the antecedent but should be specified in the consequent to avoid undefined values. We will use the auxiliary function $same(bb) : BasicBlock \to$ SMT-formula to encode that variables which are not modified in a basic block maintain their current value. The function $same(bb)$ returns the conjunction of all $var = var'$ over all variables in our state space that have not been modified in the transition of our basic block bb.
To encode the transition between steps, we take a closer look at the current basic block, further denoted as bb, and customize Equation 2 for the different branching possibilities. We divide basic blocks into three groups and distinguish them by means of their terminator. Afterwards, we will take a special look at the function calls of assume, assert and exit. These function calls, together with the possibility of overflows, will extend the encoding. The three different types of terminator instructions are called unconditional branching, conditional branching and return.
Unconditional branching (br %bb2): Branches to the basic block with the label bb2 and creates a transition from the current basic block to bb2. If the current basic block has no other instructions, only the change of basic block and the saving of the predecessor have to be encoded. Furthermore, we have to state that no variables have changed during this transition:

$$curr = enc(bb) \implies curr' = enc(bb2) \wedge pred' = enc(bb) \wedge same(bb). \quad (3)$$

This encoding is rarely complete, because it does not regard all other instructions in the basic block bb. Let $rl_{bb}$ be the ordered list of instructions from bottom to top in bb. Then we iterate over $rl_{bb}$ and regard all instructions that (1) are part of our state, (2) have not been visited before and (3) are not the terminator instruction. Each instruction is encoded according to its type and its operands. When an instruction like %tmp3 = add i32 10, %tmp2 is encoded, the algorithm checks the operands first. When regarding the value %tmp2, the algorithm checks whether it is a variable that is part of our state or a value calculated by an instruction, which the algorithm has to encode recursively. The stop criterion is always the occurrence of a state variable, a constant like for example 10, or a call to assert, assume or error. For the add instruction the encoding would result in $tmp3 = \mathit{bvadd}(10, tmp2)$. This generated SMT formula is then conjoined with the consequent of Equation 3. For arithmetic operations an additional overflow check formula, which is described later on, is inserted. The algorithm continues by iterating further through the list $rl_{bb}$ until there are no instructions left; a sketch of this recursive operand encoding is given below. Conditional branching (br %cond, %bb1, %bb2): Creates a transition to bb1 with the condition cond = 1 and a transition to bb2 with the condition cond = 0. Every conditional branch has a branching condition represented as a variable (cond). We can extract that condition by visiting and encoding the variable representing the branching condition. In LLVM the branching condition is a Boolean value that is assigned by the so-called icmp instruction. This instruction returns a Boolean value based on the comparison of two values, and it supports equality as well as unsigned and signed comparison. The icmp instruction is then encoded recursively by visiting its two operands with the same approach as described for unconditional branching. The result could for example be the condition tmp2 > 10. Based on this condition, the algorithm creates two separate transitions.
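The recursive operand encoding just described might look as follows. This is only an illustrative sketch: the Inst and Expr types and the smt_* constructors are invented stand-ins for the LLVM and SMT APIs that LLUMC actually uses.

```c
#include <stddef.h>

typedef enum { CONST, STATE_VAR, ADD, ICMP_SGT } Kind;

typedef struct Inst {
    Kind kind;
    struct Inst *op0, *op1;   /* operands, NULL for leaves */
    long value;               /* constant value or state-variable id */
} Inst;

typedef struct Expr Expr;     /* opaque SMT term */
Expr *smt_const(long v);
Expr *smt_state_var(long id);
Expr *smt_bvadd(Expr *a, Expr *b);   /* overflow check added elsewhere */
Expr *smt_bvsgt(Expr *a, Expr *b);

/* Stop criteria are constants and state variables; every other
 * instruction is encoded by recursing into its operands. */
Expr *visit(const Inst *i) {
    switch (i->kind) {
    case CONST:     return smt_const(i->value);
    case STATE_VAR: return smt_state_var(i->value);
    case ADD:       return smt_bvadd(visit(i->op0), visit(i->op1));
    case ICMP_SGT:  return smt_bvsgt(visit(i->op0), visit(i->op1));
    }
    return NULL;  /* unreachable for the instruction kinds modeled here */
}
```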
Return value (ret val): The value val can be an arbitrary integer and represents the return value of the program as usual. This terminator creates a transition to ok. In an extended and already implemented version, another check is inserted verifying that the result value of a correct program is 0; if this does not hold, a transition to error is created. Now we have to look at the calls to assume, assert, error and the possibility of overflows. During the instruction iteration of a basic block, we regard these instructions differently because they lead to a split of our transitions. Method calls (error, assume, assert): If the error method, which is used to specify program errors in C-code, is called inside a basic block, we do not have to regard any other instructions and thus delete all other transitions from this basic block. We produce the single transition

$$curr = enc(bb) \implies curr' = enc(error) \wedge pred' = enc(bb) \wedge same(bb).$$

The other three possibilities lead to a split of our transitions, similar to the conditional branching. A call of assume(var) divides the set of current transitions for our basic block: the condition var = 0 leads to a transition to the ok state with $curr' = enc(ok)$. The call to assert(var) is handled similarly, only with a transition to $curr' = enc(error)$ if var = 0 holds true. In both cases, the encoding continues normally with the next instruction if the conditions are not met. Overflow Checks: While calls to error, assume and assert are explicit calls in the LLVM-module, we have to recognize possible overflows while still encoding the operations correctly. Therefore, an overflow check is always inserted when visitInst(I) is called on an arithmetic operation with the flag nsw. In this case, we know that there is a signed operation with no defined wrap-around. If the condition $cond_{ov}$ for an overflow is true, we transition to the error state. We will give the formula for the signed addition; the formulas for subtraction, multiplication and division are similar and comply with the undefined overflow of Definition 1.
Addition: The result of adding up two positive numbers must always be positive, and the addition of two negative values must always result in a negative value. Whether the result is positive or negative can be seen from the sign bit. Starting with 0, we will refer to a single bit at position i of a bit-vector b by b[i]. The position of the sign bit has the special index sb. Let res be the result of adding the two bit-vectors $a_1$ and $a_2$; then the condition $cond_{ov}$ for an undefined overflow is defined by

$$cond_{ov} = (a_1[sb] = a_2[sb]) \wedge (a_1[sb] \neq res[sb]).$$

All components of the transition formula have now been discussed. To obtain the complete transition formula, the algorithm has to iterate over all basic blocks of the main function. Depending on their terminator instruction, every basic block has to be encoded according to the definitions above. To predict which transition is taken in which step would be equal to solving the whole formula. Thus, the transition formula is time-independent and the transition possibilities for all time steps are part of the formula.
Definition 7 (Encoding of the Transition Formula). Let BB be the set of all basic blocks of $f_{main}$ and let $encode(b)$ with $b \in BB$ be the encoding as shown above; then the transition formula $T(k, k+1)$ for $k \in \mathbb{N}$ is defined by $T(k, k+1) = \bigwedge_{b \in BB} encode(b)$.

Claim. There exists an error as defined in Definition 2 for the program p iff 1. p is transformed into an LLVM-module as described in Section 2 and 2. there exists a transition path from the initial state to the goal state while the universal formula holds in all states.
Proof idea: We forgo a formal proof, because it would require a structural induction over huge sets of C-code and the LLVM language. Instead, we present short arguments and references for our claim.
(1): Using LLVM as a representation for C-code is widely accepted and used in research and industry. We assume that the transformation from C-code into an LLVM-module does not remove or add any errors, based on the large number of research papers [4,6,10] and tools like LLBMC [26] and SeaHorn [19].
(2): The error node has three types of incoming edges: from an assert statement, from an overflow check, and an edge from the error node itself. We disregard the edge that points to itself and are left with the two options that match the properties defined in Definition 2. If the encoding of the variables is, as we claim, correct, and our state space is closed under T and U, we can assume that a transition path from the initial state to the error state corresponds to an error in the LLVM-module.
From SMT to SAT formula The encoding of the LLVM-module gives us four SMT formulas. These formulas have to be translated into CNFs. The most widespread approach to transform SMT to CNF formulas is called bit-blasting.
We have taken one approach implemented in STP [17] and the ABC-library [20] and modified these algorithms to correspond to some technical requirements of the DimSpec format. Finally, a CNF in the DimSpec format is created that can be used as input for a number of SAT-solvers.
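For intuition, the core of bit-blasting a bvadd term is a ripple-carry adder whose full adders are translated into CNF clauses. The following is our own simplified illustration of this idea (not STP's actual code); clauses are printed in DIMACS style, and carry[0] is assumed to be forced to false by a unit clause elsewhere.

```c
#include <stdio.h>

/* Emit one DIMACS clause (0 terminates the clause). */
static void emit4(int a, int b, int c, int d) {
    printf("%d %d %d %d 0\n", a, b, c, d);
}
static void emit3(int a, int b, int c) {
    printf("%d %d %d 0\n", a, b, c);
}

/* CNF for one full adder: s <-> a xor b xor cin (8 clauses) and
 * cout <-> at-least-two-of(a, b, cin) (6 clauses). */
static void full_adder(int a, int b, int cin, int s, int cout) {
    emit4(-a, -b, -cin,  s); emit4(-a, -b,  cin, -s);
    emit4(-a,  b, -cin, -s); emit4( a, -b, -cin, -s);
    emit4(-a,  b,  cin,  s); emit4( a, -b,  cin,  s);
    emit4( a,  b, -cin,  s); emit4( a,  b,  cin, -s);
    emit3(-a, -b,   cout); emit3(-a, -cin, cout); emit3(-b, -cin, cout);
    emit3( a,  b,  -cout); emit3( a,  cin, -cout); emit3( b,  cin, -cout);
}

/* Blast res = bvadd(x, y) for bit-width n; x, y, res and carry hold the
 * CNF variable ids per bit, with carry having n+1 entries. */
static void blast_bvadd(const int *x, const int *y, const int *res,
                        const int *carry, int n) {
    for (int i = 0; i < n; i++)
        full_adder(x[i], y[i], carry[i], res[i], carry[i + 1]);
}
```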
Experimental Results
The LLUMC-approach is implemented as a tool-chain. First, the input file in C-code is compiled with Clang (version 3.7.1) and then optimized with LLVM and LLBMC passes. This optimized LLVM-module serves as input for the program LLUMC, which performs the encoding as described above. To transform the created SMT formulas into CNF formulas in DimSpec format, the tool STP was modified. The final renaming and aggregation is implemented directly in LLUMC. Thus, the tool produces a single CNF file in DimSpec format. We tested two different approaches to solve the generated DimSpec/CNF formulas. The tool IncPlan [18] was developed at KIT and implements the incremental SAT-solving described in Section 2. It can be used with every SAT-solver that accepts the Re-entrant Incremental Satisfiability Application Program Interface (IPASIR). We have tested IncPlan with a number of SAT-solvers including MiniSat [28], abcdSAT [15], Glucose [5] and PicoSAT [12]. While Glucose and MiniSat produced good results for some benchmarks, they crashed for a number of benchmarks, and thus we concentrated on the usage of abcdSAT and PicoSAT. The IC3 algorithm was implemented and adjusted to the DimSpec format in the tool MinireachIC3 by Balyo and Suda [8]. The safety property P expresses that the error state should not be reachable, and thus P is given by ¬G. Hence, we are able to prove not only the existence of errors but also their non-existence.
Benchmarks
We evaluated our approach using benchmarks from the Software Verification Competition (SV-COMP) [10]. The SV-Comp is an annual competition for academic software verification tools, with the aim of comparing software verifiers. The competition has been conducted every year since 2012. The verification tasks are divided into different topics and are contributed by a number of research and development groups. While we were not able to participate in the competition, the collected benchmarks serve as an excellent evaluation basis for every verifier. All benchmarks are available at [2]; we considered the sub-folder c, with programs written in the language C.
We screened these benchmarks for tasks that match our theory of bit-vectors. We excluded all benchmarks that do not match our theory and removed benchmarks that include memory accesses or floating point arithmetic. Furthermore, we checked that all instructions used in the examples were implemented in LLUMC. It is notable that nearly all instructions were implemented and only the truncate instruction, which cuts the length of values, restricts the usable benchmarks. The truncate instruction is not included in most theories of bit-vectors, e.g., in tools like LLBMC, because on a programming level there is not enough (signedness) information about the bit-vector to truncate it easily. Lastly, we excluded recursive and concurrent tasks due to the inlining in our approach.
We evaluated our approach on 14 incorrect and 10 correct programs. Our approach creates a CNF formula representing the problem of finding a transition path to the error state. Thus, the desired result of our approach should be sat in case there exists an error and unsat if there is none. While most benchmarks are small and have the purpose of demonstrating the correctness of our approach, we were also able to evaluate it on some larger problems. The benchmarks vary between 14 and 646 lines of code (LoC) and between 151 and 116,777 clauses. The evaluation was performed on a system with 64 CPUs at 2.4 GHz and 483 GB of memory, of which, for our sequential approach, only one CPU was used. Each benchmark had a time limit of 600 seconds and a memory limit of 8 GB. The time needed to generate the CNF formula and to read and write CNF formulas in and out of files is negligible for larger problems. Thus, we decided to measure only the CPU time needed to solve the generated CNF formulas. Table 3 displays the results of solving the generated DimSpec/CNF formulas both with the tool IncPlan and with MinireachIC3. The results of running IncPlan with the SAT-solver abcdSat were the most stable and are thus displayed. One can see that our approach generates correct encodings of the C-code and that IncPlan is able to find a satisfying model representing a transition path to the error state for erroneous programs. We also recognize that for small problems the time and memory needed are insignificant, and for larger problems they are still manageable. For programs without an error we are not able to prove anything, but the timeouts indicate the correctness of our encoding. The jain benchmarks show that the number of iterations the SAT-solvers are able to perform in the given time depends on the complexity of the individual basic block and varies for all benchmarks. MinireachIC3, in comparison, is not only able to prove the existence of errors but is also able to prove their non-existence. For erroneous programs the time difference between IncPlan and IC3 is negligible for smaller benchmarks. For some of the larger benchmarks the algorithm produces a timeout. In general, it is harder to prove the absence of errors than to prove their existence. To prove the existence of an error, the solver only needs to find a valid transition path to the error label, while it needs to exclude all possible transition paths to the error label to prove the absence of an error. This complexity is displayed in Table 3. The "jain false" and "jain true" benchmarks only differ in a slightly changed assert statement, but proving the absence of an error always takes more time than proving its existence; in the case of "jain 7", even 25 times longer.
After evaluating the feasibility of our approach, Figure 1 shows the comparison between the LLUMC-approach and the state-of-the-art bounded model checker LLBMC. When comparing an unbounded model checker like LLUMC with a bounded model checker, we have to determine a bound up to which the bounded model checker unrolls the program. If the bound is set too small, LLBMC runs very fast but has a high chance of producing incorrect results; if we set the bound too high, LLBMC needs a long time to encode and solve the formula. We tested LLBMC with the bounds of 10, 100 and 1000 and compared it with our results generated by IncPlan and MinireachIC3.
Looking at Figure 1, we can recognize the time difference depending on the defined bound. Setting the bound to 10 leads to a really fast solving process, but it can solve fewer problems compared to the bound of 100. Setting the bound to 1000 results in timeouts for more complex benchmarks and thus reduces the number of solved problems. After some overhead for smaller problems, solving the benchmarks with IncPlan and abcdSat leads to good results, but due to its restriction of only finding errors and not disproving them, it cannot solve as many benchmarks as MinireachIC3. The IC3 algorithm can solve 20 out of 24 benchmarks and has a performance advantage compared to all other approaches.
The experimental evaluation illustrates the correctness of our approach for a wide variety of problems. Furthermore, it indicates that the time needed for most problems is reasonable. For model checking in general, the scalability for large programs is always a challenge.
Conclusion and Future Work
We introduced a novel unbounded model checking approach to find errors in software or prove their non-existence by using the DimSpec format. We have developed a new encoding from C-code to a CNF formula in the DimSpec format. Using the intermediate language LLVM, we are able to transform the existence of an error in C-code into four SMT formulas representing the problem of finding a transition path from the initial state of the program to a defined error label. By means of an AIG-supported bit-blasting algorithm, the four SMT formulas are then transformed and combined into one CNF in DimSpec format. The encoding has been implemented in the tool LLUMC, and we have evaluated this encoding using both the incremental SAT-solving algorithm implemented in the tool IncPlan and the invariant checking algorithm implemented in MinireachIC3. Based on benchmarks from the SV-Comp, the evaluation shows that we extend the functionality of current solvers to programs with infinite loops while providing correct results, and that we are also comparable to the state-of-the-art solvers regarding solving time.
Transforming C-code and the existence of errors into CNF formulas in DimSpec format results in a wide range of possibilities to solve the given problem. While we tested incremental SAT-solving and the invariant checking algorithm of IC3, there is also the chance of utilizing advances in parallel SAT-solving for our approach. IncPlan can be run with parallel SAT-solvers as back-end tools, and IC3 was designed to fit both sequential and parallel SAT-solving.
In addition to parallel solving, the performance of the LLUMC approach can also be improved by enlarging the incremental steps of the solver. A first evaluation shows that merging basic blocks in LLVM leads to performance improvements, indicating that a large block encoding could be advantageous. Furthermore, the functionality of the approach can be extended. As a next step, an implementation of other theories like the theory of arrays would make LLUMC usable on a greater range of programs.
Running Example
To illustrate the transformation from C-code into an LLVM-module and later on into a CNF formula, we demonstrate the encoding on an example. The example was taken out of the benchmark verification tasks of the competition on software verification (SV-Comp). It can be found under the category bitvector-loops. Example 1 iterates through a while-loop until x is smaller than 10. In every iteration, the value 2 is added to the even number x. At first glance, the loop will never terminate, but after a high number of iterations an overflow occurs and the value x becomes smaller than 10, while still being an even number. The maximal value of an unsigned integer (max_uint) is the odd number 4294967295. After a high number of loop iterations the value x would be max_uint − 1. The addition of 2 modulo (max_uint + 1) would then result in x = ((max_uint − 1) + 2) mod (max_uint + 1) = 0, and thus the assert condition (x%2) will fail, because x is still an even number. This example shows the limitations of bounded model checkers, because they would only unroll the loop to a specific bound that is often not high enough to find errors like these.
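The loop just described can be simulated directly; here is a minimal Python sketch, using the start value 4294967194 quoted in Example 3 (the exact benchmark source may differ):

```python
MAX_UINT = 4294967295            # 2**32 - 1, an odd number

x = 4294967194                   # even start value, taken from Example 3
steps = 0
while x >= 10:
    x = (x + 2) % (MAX_UINT + 1) # unsigned 32-bit addition with wrap-around
    steps += 1

print(x, steps)                  # -> 0 51: x wrapped around to 0, which is < 10
assert x % 2 == 0                # x is still even, so assert(x % 2) fails
```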
Example 1 (C-Code). In theory, one could work directly on the corresponding LLVM-module, but it is more efficient and easier to first run some predefined optimization passes from LLVM and LLBMC. In the first step, we remove undefined values in LLVM. Furthermore, the optimization mem2reg promotes memory references to be register references. The pass called inline tries to inline all functions bottom-up into the main function. Afterwards, the two passes instnamer and simplifycfg simplify the program. After running these optimizations on our Example 2, we get the following LLVM-module function as input for the LLUMC-approach.
Example 3 (Optimized LLVM-module). We can see the results of running the LLVM-passes when comparing the resulting main function with the earlier Example 2. The result of running the instnamer pass is obvious when looking at the naming of basic blocks and variables. The mem2reg pass replaced all allocate, store and load instructions with the phi instruction. Hence, the value of %x.0 is set either to 4294967194 when coming from the entry block or to the earlier calculated %x + 2. The inlining pass inlined the function __VERIFIER_assert and checks in line 15 whether the assert condition was true (1) or false (0). The state space of this example consists of the two variables curr and pred with a bit-length of four. Furthermore, the variable tmp2 with a bit-length of 32 is added to the state space, because it occurs in the basic block bb1 and also in return. The SMT function bvmod represents the modulo calculation, and the function enc(bb) assigns values to the basic blocks. The encoding algorithm iterates over all basic blocks of the LLVM-module and encodes them as described in the paper. The encoding of the example leads to the following formulas, which are then transformed to CNF formulas by an AIG-based approach.
Details from the Experimental Evaluation
Details about the benchmarks used for the experimental evaluation are given in table format. Furthermore, detailed evaluation results are displayed. Table 4. Runtime data for Benchmarks from the SV-Comp run with MinireachIC3, where the column Phases represents the number of major steps performed by the solver. | 2018-02-12T16:43:06.000Z | 2018-02-12T00:00:00.000 | {
"year": 2018,
"sha1": "eecd715b12e87d39f755ddad92a202337f4bd94d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b53f24816687dc3c5eedc394eb724b6b98c507d9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
231758054 | pes2o/s2orc | v3-fos-license | Early risk markers for severe clinical course and fatal outcome in German patients with COVID-19
Background Some patients with Corona Virus Disease 2019 (COVID-19) develop a severe clinical course with acute respiratory distress syndrome (ARDS) and fatal outcome. Clinical manifestations and biomarkers in early stages of disease with relevant predictive impact for outcomes remain largely unexplored. We aimed to identify parameters which are significantly different between subgroups. Design 125 patients with COVID-19 were analysed. Patients with ARDS (N = 59) or non-ARDS (N = 66) were compared, as well as fatal outcome versus survival in the two groups. Key results ARDS and non-ARDS patients did not differ with respect to comorbidities or medication, nor did patients with fatal outcome versus survivors within the two groups. Body mass index was higher in patients with ARDS versus non-ARDS (p = 0.01), but not different within the groups in survivors versus non-survivors. Interleukin-6 levels on admission were higher in patients with ARDS compared to non-ARDS as well as in patients with fatal outcome versus survivors, whereas lymphocyte levels were lower in the different subgroups (all p<0.05). There was a highly significant 3.5-fold difference in fever load in non-survivors compared to survivors (p<0.0001). Extrapulmonary viral spread was detected more often in patients with fatal outcome compared to survivors (p = 0.01). Furthermore, the detection of SARS-CoV-2 in serum was associated with a significantly more severe course and an increased risk of death (both p<0.05). Conclusions We have identified early risk markers for a severe clinical course, like ARDS, or for fatal outcome. These data might help develop a strategy to address new therapeutic options early in patients with COVID-19 and at high risk for fatal outcome.
Introduction
Recently, a new type of Coronavirus, SARS-CoV (Severe Acute Respiratory Syndrome Corona Virus)-2, led to a worldwide pandemic outbreak of an infectious disease, called COVID-19 (Corona Virus Disease 2019). The clinical manifestation of this disease is very broad and variable, ranging from asymptomatic carriers to symptoms of acute infection of the upper airways and occasionally severe acute respiratory insufficiency and death [1][2][3]. Various risk factors and comorbidities potentially modulating susceptibility to infection and severity of disease are discussed, but it is not clear which factors determine not only the clinical course, but also the fate of patients with COVID-19 [4].
Although COVID-19 appears to have a lower fatality rate than infections with SARS-CoV or Middle East Respiratory Syndrome (MERS)-CoV, the absolute number of deaths is high due to the global burden of infection. Besides possible regional differences in health care, an age-related increase in mortality has consistently been observed. Recently, based on results from an observational database of 169 hospitals in Asia, Europe, and North America, cardiovascular and pulmonary comorbidities have been reported to be independently associated with increased in-hospital death [4]. Furthermore, a decrease in kidney function and the need for mechanical ventilation have been described as prognostic factors for fatal outcome in 5,700 patients hospitalized with COVID-19 in the New York City area [5]. Next to increasing age, mechanical ventilation and higher PEEP level requirements were associated with increased mortality in 1,591 COVID-19 patients admitted to the ICU departments in the Lombardy Region in Italy [6]. A large retrospective cohort study from Wuhan in China proposed older age, a high SOFA score, and D-dimer levels greater than 1 μg/mL as markers to identify poor prognosis [7].
However, none of these studies focus on predictors of severe clinical course and fatal outcome soon after hospital admission. In addition to being clinically relevant, such predictors are crucial for early identification of high-risk individuals, as these patients may benefit from early novel treatment strategies.
In a preliminary report, we presented clinical data from 50 patients hospitalized due to COVID-19 [8]. In the present study, we retrospectively evaluated 125 COVID-19 patients admitted to the University Hospital in Aachen, Germany. We compared patients with fatal outcome versus survivors with a disease severity of ARDS or non-ARDS, and propose early clinical markers that may help predict fatal outcome.
Methods
For the clinical description of the first 125 patients consecutively admitted with COVID-19, we retrospectively evaluated data from all patients admitted to the University Hospital in Aachen, Germany, from the start of the SARS-CoV-2 pandemic on February 24th, 2020, until July 30th. Observations on the first 50 patients have been described previously [8]. A diagnosis was made based on a positive SARS-CoV-2 result in respiratory samples obtained in our hospital or externally before admission, or patients were transferred from another hospital. Patients were either isolated under standard care or treated in our intensive care unit. The different treatment strategies, and consequently the group definition, were determined by the severity of the disease. Severity of ARDS was classified according to the degree of hypoxia as defined by the "Berlin definition". Patients with ARDS were treated in our intensive care units. Patients without ARDS not needing intensive care medicine were isolated under standard care. To identify potential predictors of clinical outcome in COVID-19 patients, we focused on the analysis of various parameters in non-survivors and survivors. Survivors were discharged from the hospital after treatment, whereas non-survivors died in connection with COVID-19 disease.
Comorbidities (such as hypertension, overweight or obesity, pre-existing respiratory or cardiovascular diseases, smoking, chronic kidney disease, malignancies, chronic liver disease), and medications prescribed at the time of admission were recorded in hospital, or taken from existing medical records. We evaluated early symptoms, as well as timing of initial physician contact and hospitalization.
A body mass index (BMI) of 25 to < 30 kg/m² was classified as overweight, and obesity as ≥ 30 kg/m². Diabetes or prediabetes was defined by clinical history, medication and HbA1c values ≥ 6.5%, or ≥ 5.7 to < 6.5%, respectively.
Febrile days were defined as the time from fever onset until the last documented value above 38.5˚C.
Vital parameters presented in this study were taken between four and 24 hours following hospital admission or intubation, with the worst values being depicted. Severity of ARDS was defined using P/F-ratio, or the Horowitz index: an index below 100 mmHg defines severe ARDS, below 200 mmHg moderate ARDS, and below 300 mmHg mild ARDS.
Diagnostics of viral infection was performed by bronchoalveolar lavage (BAL) in each intubated patient. In spontaneously breathing patients, sputum was used for testing. Viral load was determined by real-time (rt) polymerase chain reaction (PCR) of the sample. The threshold cycle Ct represents the time point at which the exponential phase of amplification begins; it is therefore inversely proportional to the virus concentration in the material and reflects relative differences on a logarithmic scale. A Ct value of the sample gene < 20 was classified as high viral load. Values > 30 were classified as low viral load, and values of 20 to < 30 as medium. The same applies when serum, urine or stool were analysed for the presence of the SARS-CoV-2 virus.
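A minimal sketch of this classification rule (the assignment of the boundary value Ct = 30 is our choice; the text leaves it unspecified):

```python
def viral_load_from_ct(ct: float) -> str:
    """Classify viral load from the rt-PCR threshold cycle Ct, which is
    inversely proportional to virus concentration on a logarithmic scale."""
    if ct < 20:
        return "high"
    elif ct < 30:
        return "medium"
    else:               # Ct exactly 30 is grouped with "low" here
        return "low"
```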
The blood tests after hospital admission were also analysed for white blood count and lymphopenia; the latter was diagnosed with relative lymphocyte counts below 22% using flow cytometry and below 25% using microscopic analysis, or with an absolute lymphocyte count below 1.0/nL. Further blood tests were analysed regularly as indicated; therefore, patient numbers vary between different time points in the figures, but the time point refers to the initial admission for each patient.
Further technical and imagery tests were performed based on clinical decision making and evaluated in a standardized manner.
All parameters were tested for significance as described in the legends of the tables and figures. Nominal-scaled parameters were tested according to Fisher's exact test, whereas ordinal-scaled variables were tested for normal distribution and, if normally distributed, Welch's test was used; otherwise, the Wilcoxon-Mann-Whitney test was used. Categorical variables were tested by Pearson's chi-squared test. Data are presented as mean ± standard error of the mean or as median values with interquartile range (IQR) or 95% confidence interval. Statistical significance was assumed for a p-value of < 0.05. Statistical testing was performed with GraphPad Prism version 8.4.3 and R version 3.6.3, utilizing the packages ggplot2 (3.3.2) for plots, tangram (0.7.1) for summary statistics, base R generalized linear models (glm) for logistic regression and etm (1.1.1) for estimating cumulative incidence functions.
The event of intubation in days after symptom onset of all ARDS patients was estimated by the Kaplan-Meier method and described for the specific outcome.
The study obtained an ethics approval from the ethics committee at the RWTH Aachen Faculty of Medicine. All data were fully anonymized and patients provided informed written consent.
Results
This cohort summarizes the first 125 COVID-19 patients in the University Hospital of Aachen. Aachen was an epicenter of the disease in Germany, and is located close to Heinsberg, the area in which the first serious outbreak was detected in Germany. 59 patients with ARDS were treated in the intensive care unit, while 66 patients were admitted to a regular isolation ward. At the time of this analysis, 38 of the 125 patients were deceased (30%), and 87 (70%) had been discharged from hospital.
Patient characteristics
Baseline characteristics of the overall cohort, of the subgroups with ARDS and non-ARDS, and of the subgroups of non-survivors versus survivors are summarized in Table 1. In the overall cohort, mean age was 66±1.2 years, and 30% were women. In the subgroups of ARDS and non-ARDS patients, survivors were younger compared to patients with fatal outcome (ARDS: 63.1±3.1 versus 66.2±4 years; p = 0.1; non-ARDS: 65.4±4.6 versus 78.8±5.7; p<0.01) (Table 2). Main initial clinical findings included fever (72%), dyspnea and cough (55% each), and one third of patients reported gastrointestinal symptoms. In the total population, the time from onset of first clinical symptoms to hospitalization was 5.0±0.5 days. The time from symptom onset to hospitalization was lower in patients with fatal outcome compared to survivors, showing a significant difference in the subgroup of non-ARDS patients (5.5±1.5 vs. 2.4±2.4; p = 0.04) (Table 2). ARDS patients were admitted to the intensive care unit 9.0±0.9 days after symptom onset and were intubated 10.0±1.0 days after symptom onset. All patients had comorbidities, but in the performed univariate and multivariate logistic regression analyses there were no highly significant differences in the prevalence of arterial hypertension, pre-existing respiratory diseases, pre-existing heart diseases or medications between patients with ARDS compared to non-ARDS patients, or between patients with fatal outcome compared to survivors (Table 1).
Although there was no significant difference in the prevalence of diabetes or prediabetes between subgroups, the BMI level as a grade of overweight (BMI ≥ 25 kg/m²) was significantly higher in ARDS versus non-ARDS patients [28.6 (26.3-31.3) vs. 26.8 (23.9-29.8) kg/m²; p = 0.01], but did not differ in survivors versus non-survivors [28.4 (24.7-32.6) vs. 28.6 (24.9-31.0) kg/m²; p = 0.59]. Comparing fatal outcome and survival within the subgroups of ARDS or non-ARDS patients showed no significant difference in BMI levels. The mean absolute difference in BMI between ARDS and non-ARDS patients was 1.8 kg/m² (reflecting a difference of about 10 kg between groups), and there was no difference in median BMI between non-survivors and survivors, suggesting that BMI was associated with disease severity, but not with fatal outcome (Fig 2).
Outcome predictors
Since an ongoing inflammatory reaction or "storm" has been discussed as a denominator for clinical outcome [9,10], we analysed temperature curves as an easy-to-assess clinical parameter of inflammation in this context. To this end, we calculated the respective area under the fever curve in relative arbitrary units in relation to 37.5˚C, reflecting the "load" of fever (Fig 1E and 1F). The comparison between survivors and non-survivors showed a marked and significant difference in fever load (a 3.5-fold increase in relative arbitrary units; p<0.0001) between these two groups, as a clinical indicator of inflammation (Fig 1F). With respect to viral load (absolute copy number), there were no significant differences between non-survivors and survivors, but an extrapulmonary manifestation of SARS-CoV-2 was detected significantly more often in non-survivors than in survivors (69% vs. 27%; p = 0.002) (Table 3). In addition, detection of SARS-CoV-2 in serum at admission was associated with a significantly increased risk of death (60% vs. 29%; p = 0.01) and a significantly more severe clinical course (60% vs. 17%, p = 0.0002) (Table 3). Interestingly, when patients were dichotomized according to SARS-CoV-2 viremia, other potential risk indicators, such as platelet count over time, showed a marked difference between non-survivors and survivors, suggesting that viremia has detrimental effects via various mechanisms (Fig 3).
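A minimal sketch of such a fever-load computation, assuming temperatures sampled at known times and the 37.5 °C baseline (the authors' exact discretization is not stated):

```python
import numpy as np

def fever_load(times_h, temps_c, baseline=37.5):
    """Fever load: area between the temperature curve and the baseline
    (negative excursions clipped to zero), trapezoidal rule, in arbitrary
    units of degC x hours."""
    excess = np.clip(np.asarray(temps_c, dtype=float) - baseline, 0.0, None)
    return np.trapz(excess, x=np.asarray(times_h, dtype=float))

# Illustration: fever_load([0, 6, 12, 24], [37.0, 38.6, 39.2, 37.4])
```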
Cumulative incidence analysis suggests that 50% of all non-survivors died within the first 20 days after the onset of symptoms (Fig 4A). In the subgroup of ARDS patients, 50% of all non-survivors required intubation and invasive ventilation within the first 7 days of onset of symptoms. In contrast, if there was no need for mechanical ventilation within 16 days of symptom onset, 90% of patients survived (Fig 4B).
Discussion
This report of 125 patients hospitalized due to COVID-19 in Germany shows an increased inflammatory burden and SARS-CoV-2 seropositivity as early markers in patients with ARDS or with fatal outcome.
In different reports of large cohorts, mainly from China, Italy, or the US, comorbidities like pulmonary and cardiovascular diseases, as well as diabetes and kidney diseases, have been proposed as risk factors, or were associated with worse outcomes [5,6,11,12]. Interestingly, in our study, there were no highly significant differences in comorbidities between subgroups, although all patients had comorbidities; neither in ARDS versus non-ARDS patients, nor in non-survivors versus survivors. This also applied for renin-angiotensin-aldosterone system (RAAS)-inhibitor medication, as was recently reported by others [13,14]. In our cohort, overweight and higher BMI were associated with a severe clinical course of disease but not with fatal outcome. This has not been reported previously and might be due to most reports coming from Asian populations with a lower degree of overweight in general [15,16]. However, the prevalence of known or unknown diabetes and prediabetes was comparable between patients with and without ARDS, and between survivors and non-survivors. Additionally, there were no significant differences in HbA1c levels, although a trend might exist. This must be further evaluated in larger populations, as chronically elevated blood glucose levels may trigger an inflammatory response and increased susceptibility to endothelium damage, which has been described postmortem as a characteristic feature of COVID-19 [12,17,18]. The most prominent difference in survival between the subgroups was the time between symptom onset and hospitalization, with patients who died having significantly less time between symptom onset and hospitalization. Age was different in both investigated subgroups, confirming other data that increasing age is associated with more progressive disease and fatal outcomes [5,7]. In order to identify discriminating early markers for severity and disease outcome at hospital admission, we observed significant differences in inflammatory markers. An increased inflammatory reaction, or so-called "cytokine storm", has been described previously [9,19], and our data suggest that an early increase in these markers is associated with poor prognosis. To further analyse the ongoing inflammatory burden, we used fever as an easy-to-assess clinical indicator. The calculated area under the fever curve (fever load) was significantly different between survivors and non-survivors, supporting the concept of a higher inflammatory response and burden in patients with severe outcomes. In addition, elevated levels of urea in patients at admission were associated with ARDS or fatal outcome, indicating a pronounced catabolic state early on in the disease. Despite some reports and observations about peculiar alterations in coagulation and thromboembolic events [20], we did not find any significant differences in D-dimer, INR or PTT at the time of admission between survivors and non-survivors, but D-dimer levels were significantly higher in patients with than without ARDS. Viral load dynamics in relation to disease severity have been reported recently [21]. Overall, in our cohort viral load was comparable between all subgroups, but additional analyses revealed that extrapulmonary SARS-CoV-2 detection, and especially viremia, was associated with more severe disease and fatal outcome. Given this, it is worth mentioning that thrombocytopenia has been reported in COVID-19 patients with poor outcomes [22][23][24]. In our study, the number of thrombocytes was not different between subgroups on admission.
However, in patients with viremia (but not in those with SARS-CoV-2 RNA-negative blood), thrombocyte levels significantly diverged between survivors and non-survivors throughout the course of disease. This might explain the difference from the studies cited above; the timing of the thrombocyte count is important when comparing studies. It did not escape our attention that these are the first data from a tertiary care center in Germany that is currently able to provide sufficient intensive care to patients with COVID-19. At the same time, the rather specific cohort of patients treated in our university hospital is the major limitation of this study; the evidence and scientific contribution are rather descriptive. Furthermore, we are aware that some significant associations might be due to small numbers or to multiple testing, for which we did not adjust. However, we hope that the results and conclusions will encourage further evaluation in different ongoing studies worldwide.
Study limitations
This study has several limitations. First, the character of retrospectively collected data limited the completeness of the data and made missing data unavoidable. Second, patients were recruited at different disease stages, including patients with and without ARDS, with respect to the onset of the disease. Some patients were transferred to our hospital from other ICUs already diagnosed with ARDS. Third, given the comparably small number of patients and events, further analyses, including logistic regressions, are limited in their potential to identify significant relations.
In conclusion, we have identified early risk markers for a severe clinical course, like ARDS, or for fatal outcome in patients hospitalized for COVID-19. Simple laboratory markers, in addition to SARS-CoV-2 viremia, age and time from symptom onset to hospitalization, seem to be feasible predictors of survival. | 2021-02-03T06:18:22.376Z | 2021-01-29T00:00:00.000 | {
"year": 2021,
"sha1": "5410666f111bc1051bbcdc241bf0938fb8db1d5e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0246182&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b72e3506a2d4c0264810f0edfccc48b3e4788a94",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54048145 | pes2o/s2orc | v3-fos-license | Meromorphic Solutions of Modified Quintic Complex Ginzburg-Landau Equation
In this paper, the meromorphic solutions of the modified quintic complex Ginzburg-Landau equation (CGLE) are analysed. We found the general explicit solutions to the equation in three different forms, yielding simply periodic, doubly periodic and rational solutions. Firstly, this equation was transformed to a nonlinear ordinary differential equation, which we then solved by using a powerful algorithm proposed by Demina and Kudryashov, based on the existence of Laurent series. Finally, we obtain the meromorphic solutions of the equation and, to verify these solutions, we show a special case constructed from the general form.
Introduction
Nonlinear partial differential equations have been applied in many areas including nonlinear physics. The main challenge is to find the analytical solutions of nonlinear partial differential equations. A general approach for obtaining analytical solutions is to transform the nonlinear partial differential equations into nonlinear ordinary differential equations. A number of studies have been conducted to find exact solutions of nonlinear partial differential equations [1,2,3]. In recent years there have been many papers on the exact solutions of various nonlinear differential equations using various methods. The results in these papers are generally referred to as new solutions of the various nonlinear differential equations studied. However, it has finally been shown by Kudryashov and other authors [4,5,6,7,8,9,10,11,12] that most of these perceived new solutions are "well-known" solutions. Kudryashov pointed out that "new solutions", as in [13,14,15] and many other papers, have identical forms to others and are only distinguished by the expression of trigonometric identities, hyperbolic functions, and constants, which actually come from the same Laurent series. For example, Salas et al. [15] claimed to have found nine new solutions of the Burgers equation. However, Kudryashov [4] showed that the solutions are identical and are only distinguished by trigonometric identities, hyperbolic functions, and constants. Furthermore, the solutions are derived from the Laurent series of the Riccati equation, which is obtained from the Burgers equation. We quote an interesting statement from Kudryashov [4]: "We will illustrate the exact solution of the nonlinear equations in the equations determined by the Laurent series for nonlinear solutions on differential equations." Therefore, if we aim to find a general solution of nonlinear differential equations, then the first step is to look for the existence of the Laurent series of the equation. This is a condition that must be fulfilled.
A novel algorithm to construct explicit meromorphic solutions for autonomous nonlinear ordinary differential equations has been proposed by Demina and Kudryashov [16]. This method is built on the existence of Laurent series of the differential equations studied. This method is very powerful for finding the general solutions of nonlinear differential equations. By using this algorithm, we can find analytical solutions in three different forms, which cannot all be found using other methods; other methods can only find special cases of the general solutions obtained using the Demina-Kudryashov approach. An important feature of a good method is that each solution obtained can be re-inserted into the studied equation in order to verify the correctness of the results. This method expressly requires that any solution obtained using the algorithm be verified in this way. Details of the algorithm can be found in [16,17].
The aim of this paper is to analyze the following equation, where the coefficients P, Q_1, Q_2, Q_3 and γ are real physical parameters. Equation (1) is called the Derivative Nonlinear Schrödinger (DNLS) equation with potential term, or the modified quintic complex Ginzburg-Landau equation (CGLE). Equation (1) can be found in several problems in physics, such as wave propagation on a discrete nonlinear transmission line. Kengne et al. [18] attempted to solve this equation but could not find its general solution. In parts A and B of their work, they always use an ansatz for their solution. This cannot guarantee that the solution is general. Therefore, a more fundamental approach is needed to determine the general solution, and that approach is the algorithm mentioned earlier.
Nickel and Schürmann [19] have also proven that the solutions discovered by Kengne and Liu are not general (based on the solution by Whittaker and Watson [20]). They showed in detail that all solutions of [18] are merely special cases of the general solutions that they describe in [19].
However, the general solution stated in [19] is still in an implicit form. This can be seen from equation (2) in [19]. The nonlinear ordinary differential equation (transformed from the quintic Ginzburg-Landau equation) is defined there through a new function R(w), so in the end the solution formulation generally contains this function. Actually, the definition of this new function is not necessary if we use the Demina-Kudryashov algorithm. In this algorithm, we directly solve the nonlinear ordinary differential equation without using a new function definition. Consequently, the solution we get really only contains the variables and constants that play a role in the equation. The analysis of the solutions of equation (1) begins by transforming the equation to a nonlinear ordinary differential equation. Then, we use the algorithm proposed by Demina and Kudryashov to calculate the exact meromorphic solution. An important aspect of this analysis is to find the Laurent series. This series will be useful for constructing the right meromorphic solution, and simultaneously proving the existence or nonexistence of the meromorphic solution [21]. At the end of this paper, we show the solitary wave solution as a special case of the modified quintic complex Ginzburg-Landau equation by selecting the condition of the physical system parameters. We show this solution to verify the solution shown in [19]. The structure of this paper is as follows. In Section 2, we describe the process of transforming equation (1) to a nonlinear differential equation. The main part is Section 3, where we analyze the meromorphic solutions of the equation using the Demina-Kudryashov algorithm.
Transformation of Equation (1) to a Nonlinear Ordinary Differential Equation
In this section, we will briefly show the transformation of equation (1) to a nonlinear ordinary differential equation [18]. Firstly, we take the form (2), where a(x, t) and ϕ(x, t) are real. Inserting equation (2) into equation (1) and then separating its real and imaginary parts, we obtain equations (3) and (4). Then, we define the traveling-wave variables as in equations (5) and (6), where a_0, l_0, q_0 and v are real constants and z = x − vt. Substituting equations (5) and (6) into equations (3) and (4) yields equations (7) and (8). Multiplying equation (8) by a and then integrating it yields equation (9), where K_1 is an integration constant. Inserting equation (9) into equation (7), we obtain equation (10). Multiplying equation (10) by da/dz and integrating, we obtain equation (11), where K_2 is an integration constant. Then, using a² = w, we obtain equation (12), which is the main equation solved in this paper. In the following sections, we will construct the meromorphic solutions of this equation using the Demina-Kudryashov algorithm.
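Collecting the substitutions stated above into one place (the field symbol ψ is our choice of notation; the displayed equations themselves did not survive in this copy):

```latex
\psi(x,t) = a(x,t)\, e^{i\varphi(x,t)}, \qquad
z = x - vt, \qquad
w(z) = a^{2}(z),
```

which turns equation (1) into the nonlinear ordinary differential equation (12) for w(z).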
Meromorphic Solutions of Nonlinear Ordinary Differential Equation
In this section we will find the Laurent series and the meromorphic solutions of equation (12). Firstly, by inserting the Laurent-series ansatz of [16,17,4,21,22] into equation (12), without losing generality and not taking z_0 into account, we find two kinds of Laurent series, equations (14) and (15). By balancing powers, we can see that equations (14) and (15) are satisfied if E ≠ 0. This is one of the necessary conditions on the solutions that we need.
At a glance, we see that the integration constant K_1 does not appear in equations (14) and (15). But actually, K_1 is contained in the constant C and in the coefficient c_3, as well as in other higher-order coefficients.
The Laurent series solutions (14) and (15) have a simple pole. We can see from the Laurent series that the total residue is zero. This is the necessary existence condition for constructing an elliptic solution of equation (12).
Simply periodic solution to equation (12)
Under the necessary existence condition of the solution, the Demina-Kudryashov algorithm cannot be used to construct elliptic solutions for the first type. Therefore, we need to construct a simply periodic solution as follows. First, based on [16,17], we write the ansatz (16). Expanding equation (16) around z = 0 yields equation (17). Comparing (17) and (14), we find the coefficients (18). Thus, the simply periodic solution for equation (12) is given by equation (19), with the parameters C, D and E related by (20). Equation (19) is a simply periodic solution of equation (12), from which other solutions can be obtained using trigonometric and hyperbolic identities, such as soliton solutions. We can verify it by inserting equation (19) into equation (12); using the parameter relations (20) and the constants (18), equation (19) satisfies equation (12). This clearly proves that the solutions we produce are correct and verifiable.
Rational solution
The rational solution takes the form (21), with the constants (22) and the parameter relations (23) for C, D, and E. The two rational solutions (21) that we obtain (using the constants (22) and the parameter relations (23)) have also been inserted into equation (12), and the results have been found to satisfy the equation. Again, this proves the correctness of our solutions.
Doubly periodic solution
Now, we construct the elliptic (doubly periodic) solution for the second type. Based on [16,17], we can write the ansatz (24). Expanding equation (24) around z = a yields equation (25), where ℘ ≡ ℘(z, g_2, g_3) is the Weierstrass ℘ elliptic function, ζ is the Weierstrass ζ elliptic function, and z = a is a pole of the second-order type. The invariants g_2 and g_3 are determined from the elliptic function ℘, where ℘, g_2, and g_3 satisfy the standard Weierstrass relation stated below. We can then rewrite equation (24) accordingly. Comparing equation (25) with (14), and then equation (26) with (15), yields the constants and parameter relations (30). So, the elliptic solution for the second type is equation (31). Equation (31) is a doubly periodic explicit solution of equation (1). We can clearly see that this solution is more explicit than that shown in [19], since, based on this algorithm, we do not need to define the "new R(w)" function on the right side of the nonlinear differential equation (1). An important aspect is that, as we did before for the simply periodic and rational solutions, the obtained solution was inserted into equation (12), and the result was found to satisfy the equation (of course, by using the constants and parameter relations (30)). This further confirms that our solutions are correct. We will show that, by choosing a special case, we can find a solitary wave kink solution as shown in [19].
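For reference, ℘ and ζ obey the standard textbook relations (stated here for completeness; they are not specific to this paper):

```latex
\bigl(\wp'(z)\bigr)^{2} = 4\,\wp^{3}(z) - g_{2}\,\wp(z) - g_{3},
\qquad
\zeta'(z) = -\,\wp(z)
```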
Then, if we try to find a second simply periodic solution of the Ginzburg-Landau equation, we can find constants like L_j and A_j with j = 1, 2, 3 and the integration constants K_1 and K_2, and of course we can find a simply periodic solution form. But the problem is how to find the relations between the other constants, like C, D and E, and also their relations to the integration constants. This is because, based on the two Laurent series above, we always find the same values for both series. Therefore, we cannot find the relations between these constants, and hence we cannot verify the solution of equation (12). This means that the solution (19) is enough for us to construct other types of solutions, especially to describe traveling waves or solitary waves in the physical system.
For example, we can construct a solitary wave solution. We explained before that, without losing the generality of the Laurent series form, we neglected the constant z_0. However, in this part, we can rewrite the simply periodic solution (16) containing z_0 as equation (32). We can choose C² − 3D²/(16E) < 0 to obtain a hyperbolic cotangent; then, by using the identity coth(z + iπ/2) = tanh z and z_0 = iπ/2, we find the kink solitary wave solution (33). The solution (33) is a family of kink solitary wave solutions, shown in Fig. 1. We can choose other values of the above parameters to find other forms of solitary wave kink solutions, since we have chosen the condition C² − 3D²/(16E) < 0. We visualize the solution as shown in Figure 1 by choosing this combination equal to 1, so we find the parameter relation E = 3D²/(8(C − 2)); for C = 3, D = 1 this gives E = 3/8. Generally, for this case we can use C > 2. We cannot select a value C < 2, because it can make w(z) imaginary. To find a solitary wave solution, we need w(z) to be real. We can also choose the same condition with a higher value, which affects the determination of the constant C. This solution is in accordance with the results shown by Nickel and Schürmann in [19]. They have found solitary kink wave solutions as a special case of this equation.
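The identity used here can be checked directly from the hyperbolic addition formulas:

```latex
\coth\Bigl(z + \tfrac{i\pi}{2}\Bigr)
= \frac{\cosh z \cos\tfrac{\pi}{2} + i\,\sinh z \sin\tfrac{\pi}{2}}
       {\sinh z \cos\tfrac{\pi}{2} + i\,\cosh z \sin\tfrac{\pi}{2}}
= \frac{i\,\sinh z}{i\,\cosh z}
= \tanh z
```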
Conclusion
In this paper, we solved the nonlinear ordinary differential equation using the algorithm proposed by Demina and Kudryashov. This equation is transformed from the modified quintic complex Ginzburg-Landau equation using the traveling wave model. We found two kinds of Laurent series solutions with simple poles and then constructed the meromorphic solutions. The meromorphic solutions comprise simply periodic solutions, doubly periodic (elliptic) solutions and a rational solution. These solutions satisfy equation (12) with the parameter relations shown above.
For doubly periodic solutions, we find solutions in explicit form without having to define other variables. These solutions are more general than those found by Kengne and Liu [18], who found solutions only for some special cases. The meromorphic solution is a family of solutions from which many other solutions can be constructed, as needed to describe or solve the physical system.
We have shown a kink solitary wave solution as an example or a special case of a meromorphic solution. This solution is in accordance with the results shown by Nickel and Schürmann in [19]. This particular solution is one of the other wave solutions that we can construct. We can construct many other solutions using trigonometric identities, hyperbolic functions or with other constants. | 2018-08-18T14:58:22.000Z | 2017-08-03T00:00:00.000 | {
"year": 2017,
"sha1": "84294b11b413474a471d96b72ff2039b5177cc1d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "84294b11b413474a471d96b72ff2039b5177cc1d",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
195796651 | pes2o/s2orc | v3-fos-license | Snowball formation for Cs + solvation in molecular hydrogen and deuterium †
Interactions of atomic cations with molecular hydrogen are of interest for a wide range of applications in hydrogen technologies. These interactions are fairly strong despite being non-covalent, hence one can ask whether hydrogen molecules would form dense, solid-like, solvation shells around the ion (snowballs) or rather a more weakly bound compound. In this work, the interactions between Cs and H2 are studied both experimentally and computationally. Isotopic substitution of H2 by D2 is also investigated. On the one hand, helium nanodroplets doped with cesium and hydrogen or deuterium are ionized by electron impact and the (H2/D2)nCs + (up to n = 30) clusters formed are identified via mass spectrometry. On the other hand, a new analytical potential energy surface, based on ab initio calculations, is developed and used to study cluster energies and structures by means of classical and quantum-mechanical Monte Carlo methods. The most salient features of the measured ion abundances are remarkably mimicked by the computed evaporation energies, particularly for the clusters composed of deuterium. This result supports the reliability of the present potential energy surface and allows us to recommend its use in related systems. Clusters with either twelve H2 or D2 molecules stand out for their stability and quasi-rigid icosahedral structures. However, the first solvation shell involves thirteen or fourteen molecules for hydrogenated or deuterated clusters, respectively. This shell retains its internal structure when extra molecules are added to the second shell and is nearly solid-like, especially for the deuterated clusters. The role played by three-body induction interactions as well as the rotational degrees of freedom is analyzed and they are found to be significant (up to 15% and 18%, respectively) for the molecules belonging to the first solvation shell.
Introduction
Interactions between molecular hydrogen and cations of metallic atoms (H 2 -M + ) are in general non-covalent but relatively strong, as they are dominated by charge-quadrupole electrostatic as well as charge-induced electric dipole forces. 1,2 Due to these characteristics, metallic cations can be expected to solvate in hydrogen, with the formation of one or more dense, solid-like, solvation shells, similar to the well-known Atkins snowballs formed by the solvation of ions in helium. 3 Properties of hydrogen as a solvent 4-6 differ from those of helium (due to differences in mass, polarizability, onset of superfluidity, internal degrees of freedom, etc.), hence, it is worth exploring the impact of this alternative quantum solvent. There is also much interest in H 2 -M + interactions for applications of reversible storage of hydrogen in porous materials, [7][8][9][10][11] where dopant metal cations act as centers to which hydrogen molecules attach. Moreover, different nuclear quantum effects in H 2 and D 2 have been proposed to exploit selective adsorption 12 and isotope separation 13 in metal-doped materials, processes of paramount importance for the development of new fusion reactors.
In contrast to the solvation of ions in helium, [14][15][16][17][18][19][20][21][22][23] studies on (H 2 ) n M + clusters are scarce and limited to small cluster sizes. 5,[24][25][26][27][28][29] Clampitt and Jefferies 24 carried out mass spectrometry measurements of (H 2 ) n Li + clusters up to n = 7 and found indications that Li + is solvated by six H 2 molecules, a conjecture that was later confirmed theoretically. 5,[27][28][29] Interestingly, a study of the potential energy minima of these clusters 5 led the authors to suggest that, while the first solvation shell of Li + is solid-like, this shell screens the charge of the cation so that the outer shells become more diffuse. No further experiments explored these issues until the recent work by Kranabetter et al., 30 who were able to produce (H 2 ) n Cs + clusters with as many as 65 hydrogen molecules by means of electron ionization of large helium nanodroplets doped with Cs and hydrogen. Anomalies in the mass spectrum (maxima or abrupt drops in the cluster abundances) were found for n = 8, 12, 32, 44 and 52. Accompanying density functional theory (DFT) calculations for n = 1-14 found that the n = 12 cluster has icosahedral symmetry and exhibits a special stability, in agreement with the experiment. In this way, the authors attribute the anomalies at n = 12, 32 and 44 to the formation of three concentric, solid-like, solvation shells of icosahedral symmetry. More theoretical work was requested for the elucidation of the origin of these and the other magic numbers.
In the present work, previous experiments 30 are extended to deuterated clusters (D 2 ) n Cs + (n ≤ 30). Moreover, classical and quantum Monte Carlo calculations of energies and structures of both hydrogenated and deuterated clusters are reported, based on a new potential energy surface (PES) parametrized using high-level ab initio calculations. As far as we are aware, this is the first combined experimental and theoretical work on the solvation of alkali cations by hydrogen that also includes a consistent study of the effects of isotopic substitution. Our goal is to investigate whether well-defined and compact shells are formed and what their structure is. In addition, since the H 2 -Cs + interaction is very anisotropic, we believe that it is worth studying the H 2 /D 2 orientational effects 10,31,32 by explicitly taking into account their rotational degrees of freedom and comparing with the more widely used pseudoatom model. The importance of three-body (3B) induction forces [33][34][35][36] is assessed as well.
Experimental details
In the present experiments, large helium nanodroplets (≈10 6 atoms) are successively doped with Cs and H 2 (or D 2 ) particles. Cs atoms are strongly heliophobic and occupy dimple sites at the surface of the He droplets, whereas H 2 or D 2 submerges into the droplet as they are heliophilic. Then, the nanodroplets are exposed to an electron beam, which causes significant fragmentation with the formation of a variety of positively charged clusters, whose abundance is ultimately recorded with a high-resolution time-of-flight mass spectrometer. Electron bombardment causes formation of Cs + via Penning ionization of Cs by He*. 37 These cations undergo rather strong attractive forces with the remaining particles and consequently submerge into the droplet 15 where association with hydrogen clusters occurs. Thorough descriptions of the experiments and data analysis are provided elsewhere 38,39 and, for details specific to the present system, see the ESI. † Measured ion abundances of (H 2 ) n Cs + and (D 2 ) n Cs + are displayed in Fig. 1. Both series clearly show an anomaly for n = 12 (local maximum and strong drop for n > 12) followed by a peak in the abundance of n = 14 clusters. We have noticed that, in a previous work, 30 the abundance of the (H 2 ) 8 Cs + cluster was assigned an excessively large value due to a residual gas contribution. In the present work, a corrected value is reported for this cluster size after removal of the effect of the contaminant.
Potential energy surface
Two theoretical models are considered in this work, depending on whether the H 2 molecules are assumed to be rotating rigid rotors or pseudoatoms, referred to in what follows as ''RigRot'' and ''PsAt'' approaches, respectively. In both approaches, the PES is given as a sum of two-body (2B) terms, corresponding to the H 2 -Cs + and H 2 -H 2 pairwise interactions, and 3B terms, associated with the interaction between the dipoles that the cation induces in the H 2 molecules. A detailed account of the building of these PESs is given in the ESI, † accompanied by Table S1, where all the PES parameters are gathered. A brief summary is given below.
Fig. 1 Measured abundances (in blue, refer to left ordinate) compared with computed evaporation energies (ΔE n = E n−1 − E n , in meV, refer to right ordinate) of (a) (H 2 ) n Cs + and (b) (D 2 ) n Cs + clusters as a function of the number of molecules. Calculations correspond to DMC within the rigid rotor approximation (in red) as well as to BH + ZPE (open triangles) and PIMC (black) within the pseudoatom approximation. All theoretical models are able to clearly reproduce the behavior of the measured ion yields in the region n = 11-15. In many cases, error bars (associated with measurements or quantum calculations) are not seen because they are smaller than the symbol size.
Within the RigRot approximation, the H 2 -Cs + 2B interaction is given analytically as a sum of an electrostatic contribution, determined by interacting point charges, and a non-covalent component (including induction and van der Waals interactions) given by the atom-bond model 40 and the Improved Lennard-Jones (ILJ) formulation. 41 The relevant parameters are optimized by comparing the resulting interaction potential with ab initio estimations obtained at the CCSD(T) level 42 using the d-aug-cc-pV6Z 43 and def2-AQVZPP 44 basis sets for H 2 and Cs + , respectively, where the basis set superposition error was corrected by applying the counterpoise method. 45 As shown in Fig. 2, the analytical representation compares very well with the ab initio results. It can also be seen that the interaction is quite anisotropic, the minimum corresponding to a T-shaped configuration due to a leading charge-quadrupole interaction. Although interactions between molecular hydrogen and the lighter alkalis have been previously studied, 2,46 we believe that this is the first time that a H 2 -Cs + PES is reported. Regarding the H 2 -H 2 2B potential, it is also given as a sum of an electrostatic contribution (using the same point charges) and a non-covalent (van der Waals) contribution. The latter is represented using the atom-bond ILJ formulation mentioned above, with parameters being fitted to the accurate PES of Patkowski et al. 47 (a comparison between the present and Patkowski's potentials is shown in Fig. S1, ESI †). Finally, the 3B component corresponds to the interaction between the dipoles that the cation induces on the hydrogen molecules. 23,34 For this, anisotropy in the H 2 polarizability is neglected, so this contribution is identical within both the RigRot and PsAt approximations. Indeed, it is found that the anisotropic contribution provides a difference of just 0.1 meV to the total potential energy of (H 2 ) 2 Cs + at equilibrium. A comparison of the present PES with ab initio estimations for the case of the (H 2 ) 2 Cs + cluster is given in Fig. S2, ESI, † where the extent of 3B effects can be assessed.
Finally, within the PsAt approximation, the H2-Cs+ and H2-H2 potentials are represented by atom-atom ILJ functions41 reproducing the spherical averages of the RigRot potentials. It should be noted that the electrostatic contribution cancels out upon averaging.
Calculation of cluster energies
Using either the RigRot or PsAt PESs, the energies, E_n, and structures of the (H2)nCs+ and (D2)nCs+ clusters have been obtained by means of a combination of classical and quantum Monte Carlo methods, as in previous works.23,48,49 First, within the PsAt model, putative global minima of the PESs were obtained by means of the basin-hopping (BH) method.48-50 Quantum cluster energies, labelled BH + ZPE, are then obtained by adding zero-point energies (in the harmonic approximation) to the BH minima. Geometrical arrangements of the clusters obtained from the BH approach are then used as initial seeds for path integral Monte Carlo (PIMC) calculations,48,51 where cluster energies are obtained at a temperature of 2 K using the thermodynamic estimator.52 On the other hand, within the RigRot model, cluster ground-state energies and probability distributions were computed by applying the rigid-body diffusion Monte Carlo (DMC) approach developed by Buch.53 Details of the implementation of these techniques are given in the ESI.†
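As an illustration of the first step of this protocol, the sketch below applies SciPy's basin-hopping routine to a generic pairwise cluster potential. It is a minimal stand-in under stated assumptions, not the production code: the actual study uses the H2-Cs+ and H2-H2 ILJ potentials plus 3B terms, and then adds ZPE or runs PIMC/DMC on top.

```python
import numpy as np
from scipy.optimize import basinhopping

def lj_cluster_energy(flat_xyz, eps=1.0, sigma=1.0):
    """Total pairwise Lennard-Jones energy of a cluster, used here only
    as a simple stand-in for the PsAt ILJ potentials of the paper."""
    xyz = flat_xyz.reshape(-1, 3)
    e = 0.0
    for i in range(len(xyz) - 1):
        d = np.linalg.norm(xyz[i + 1:] - xyz[i], axis=1)
        e += 4 * eps * np.sum((sigma / d) ** 12 - (sigma / d) ** 6)
    return e

rng = np.random.default_rng(0)
x0 = rng.normal(scale=1.5, size=12 * 3)   # 12 "molecules" near the origin
res = basinhopping(lj_cluster_energy, x0, niter=200,
                   minimizer_kwargs={"method": "L-BFGS-B"})
print("putative global minimum energy:", res.fun)
```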
Results and discussion
With the methods mentioned above, the stability of the complexes is studied by means of the evaporation energies, defined as ΔE_n = E_{n-1} − E_n, i.e., the energy required to adiabatically remove the most weakly bound monomer from a (H2/D2)nCs+ cluster. Results are reported in Fig. 1 in comparison with the experimental distribution of cluster abundances. It can be seen that all the computational approaches reproduce remarkably well the most important features of the experimental abundances, i.e., a maximum at n = 12 followed by a steep drop at n = 13 and a small peak at n = 14. This result provides a nice example of the theoretically predicted linear proportionality between cluster abundances and evaporation energies within the model of the evaporative ensemble.16,54 Moreover, the agreement with experiment gives substantial support to the PES proposed here. It is noteworthy that all the theoretical approaches, ranging from BH + ZPE and PIMC within the PsAt model to the more elaborate RigRot DMC calculation, lead to very similar conclusions, as discussed below. For n > 14, the recorded abundances follow a smooth trend for both kinds of clusters except for a drop after (H2)18Cs+, which is not reproduced by the PsAt (PIMC) calculations, whereas PsAt (BH + ZPE) predicts a drop at n = 19 and the uncertainties of the RigRot (DMC) energies do not allow a definite conclusion to be reached. For n < 11, the comparison between experiment and theory is more satisfactory for the deuterated than for the hydrogenated clusters. For these sizes, the experimental distribution of (H2)nCs+ clusters is affected by larger error bars due to both shorter measurement times and overlap with signals coming from the residual gas.
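The bookkeeping behind Fig. 1 is straightforward; the snippet below (with toy energies, not the computed values of the paper) shows how evaporation energies are extracted from total cluster energies and how a magic number appears as a local maximum of ΔE_n.

```python
def evaporation_energies(E):
    """ΔE_n = E_{n-1} - E_n for a dict {n: E_n} of total cluster energies.
    A local maximum of ΔE_n flags an especially stable ('magic') size."""
    return {n: E[n - 1] - E[n] for n in sorted(E) if n - 1 in E}

# Toy energies in meV (illustrative only):
E = {10: -900.0, 11: -1010.0, 12: -1130.0, 13: -1190.0, 14: -1265.0}
print(evaporation_energies(E))   # ΔE_12 = -1010 - (-1130) = 120 meV peaks
```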
To understand the origin of the special stability of the n = 12 clusters, a study of the structure of (H2)12Cs+ is presented in Fig. 3 within the PsAt (PIMC) and RigRot (DMC) approaches. Upon examination of the distributions presented therein, it can be concluded that this cluster has an icosahedral structure, in agreement with the original experimental suggestion and the DFT calculations therein.30 Indeed, the PIMC three-dimensional representation of the cluster (top central panel) reveals a relatively diffuse icosahedron. Also, the H2-Cs+ radial distribution (Fig. 3(a)) shows a unique shell of radius ≈3.6 Å, and the H2-Cs+-H2 angular distribution (Fig. 3(c)) exhibits three wide peaks centered around the values corresponding to an icosahedron (63.43°, 116.57° and 180°). Regarding the quantum H2 rotational degrees of freedom studied within RigRot (DMC), the distribution of Fig. 3(b) indicates that the H2 molecules behave as hindered rotors with a moderately large-amplitude motion around the T-shaped configuration, as expected from the features of the H2-Cs+ potential and as found for related systems.1,2 It is worth noticing that, despite this angular anisotropy, the distributions concerning the translational degrees of freedom of the molecule (Fig. 3(a) and (c)) are almost identical within the PsAt and RigRot models. Concerning the structure of the smaller clusters (n < 12), it is found that the molecules arrange around the cation approximately filling the positions of a nominal icosahedron ('icosahedral growth'), as could be expected from the fact that the equilibrium distances of the H2-Cs+ and H2-H2 pairwise interactions are rather similar (Table S1, ESI†). Analogous conclusions are reached for the deuterated clusters, with somewhat narrower distributions of the molecules, as expected from their heavier mass.
Analysis of cluster structures for larger sizes (n > 12) is shown in Fig. 4, corresponding to RigRot (DMC) calculations. First, inflection points in the accumulated radial distributions of Fig. 4(a) and (b) indicate that the first solvation shell is composed of 13 and 14 molecules for the hydrogenated and deuterated clusters, respectively (it should be noted, however, that (D2)14Cs+ clearly has a compact structure while (H2)13Cs+ is more diffuse). Therefore, despite the special stability of the n = 12 clusters, this magic number does not correspond to a solvation shell as could be expected.30 Rather, n = 12 is a cluster with a special energetic stability (with respect to clusters of similar sizes), while n = 13 or 14 leads to maximum-packing structures.55 Hence, the local maxima observed at n = 14 in Fig. 1 can be attributed to maximum packing or, in other words, to solvation shells. The internal structure of the first solvation shell for n ≥ 14 is depicted in Fig. 4(c) and (d), by means of the distributions of H2-Cs+-H2 and D2-Cs+-D2 angles for molecules that reside within the first shell. The radius of that shell is defined by the inflection point indicated by arrows in Fig. 4(a) and (b). As can be seen, adding extra molecules to the second shell does not affect the structure of the first shell, which remains nearly constant.
More insight into the structure of these clusters is gained by means of some indicators at the PsAt (PIMC) level: the gyration radius and the Lindemann index, defined in the ESI and displayed in Fig. S3(a-d).† First, for (D2)nCs+ clusters, it can be seen that the n = 12, 14 and n > 16 complexes are rather rigid, with localized molecules in the first solvation shell. Indeed, deuterated clusters of these sizes can be considered solid-like, since their corresponding Lindemann indices (≈0.1) are below the critical value that discriminates between a solid-like and a liquid-like phase, which ranges between 0.1 and 0.2 depending on the authors.56,57 The behavior of these indicators is qualitatively similar for the (H2)nCs+ clusters. However, with a few exceptions (such as that of n = 12), quantum delocalization and fluidity are larger as compared with the deuterated clusters.
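The precise definitions used here are those of the ESI; for orientation, the sketch below implements one common form of both indicators from a set of sampled configurations (e.g. Monte Carlo snapshots). Whether these match the ESI definitions exactly is an assumption.

```python
import numpy as np

def lindemann_index(traj):
    """Lindemann index over sampled configurations.

    traj: array of shape (n_samples, n_particles, 3).
    delta = (2 / N(N-1)) * sum_{i<j} sqrt(<r_ij^2> - <r_ij>^2) / <r_ij>;
    values below ~0.1 are usually taken as solid-like.
    """
    n = traj.shape[1]
    acc = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            r = np.linalg.norm(traj[:, i] - traj[:, j], axis=1)
            acc += np.sqrt(r.var()) / r.mean()
    return 2.0 * acc / (n * (n - 1))

def gyration_radius(traj):
    """RMS distance of the particles from the instantaneous center of mass."""
    com = traj.mean(axis=1, keepdims=True)
    return np.sqrt(((traj - com) ** 2).sum(axis=2).mean())
```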
Apart from a solid-like behavior, an enhancement of the H2/D2 density around the cation due to electrostriction is another common feature of the formation of snowballs.14,19 As a measure of electrostriction, we have computed the percentage of H2-H2 density within the repulsive region of the H2-H2 potential.19 The results are shown in Fig. S3(e and f) (ESI†), where it can be seen that electrostriction is significant and steadily increases with n until the first shell is completed (n ≈ 15), decreasing thereafter. It should be noted that this index behaves quite similarly for the two isotopes. The analysis points to a snowball-like structure of the inner solvation shell of these clusters, especially the deuterated ones, which are more rigid, as commented above. It is worth noting that cluster sizes n = 12 and n = 14 already manifest special stability at a classical level, as can be seen in Fig. S4 (ESI†), where evaporation energies computed using the minima of the PES show the same kind of anomalies for these magic numbers. The classical structure of the n = 12 cluster corresponds to an icosahedron, in agreement with the quantum structure of both the H2 and D2 clusters. For n = 14, the classical cluster has D6d symmetry within the PsAt approximation. Within the RigRot approach, this structure becomes distorted and lowers its symmetry. Using the latter structure, we have computed a 'classical' D2-Cs+-D2 angular distribution (arbitrarily widening the classical sticks to emulate quantum effects), and the result is shown in Fig. 4(d). It can be seen that this classical 'skeleton' is compatible with the quantum-mechanical results.
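The exact operational definition of this electrostriction index is given in ref. 19. One plausible implementation, assuming the repulsive region is taken as pair separations shorter than the zero-crossing r_zero of the H2-H2 potential, is sketched below; both the definition and the threshold are assumptions of the sketch.

```python
import numpy as np

def repulsive_pair_fraction(traj, r_zero):
    """Percentage of sampled H2-H2 pair distances inside the repulsive
    region of the pair potential, i.e. r_ij < r_zero where V(r_zero) = 0.

    traj: (n_samples, n_molecules, 3); r_zero in the same length units.
    """
    n = traj.shape[1]
    dists = []
    for i in range(n - 1):
        d = np.linalg.norm(traj[:, i + 1:] - traj[:, i:i + 1], axis=2)
        dists.append(d.ravel())
    return 100.0 * np.mean(np.concatenate(dists) < r_zero)
```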
In addition, it is interesting to study in more detail the role of the rotation of the H2/D2 molecules, as well as of the explicit inclusion of 3B induction terms in the PES, as these effects are often neglected in related computational studies. The extent of 3B effects is explored by comparing RigRot calculations that include or neglect 3B terms in the PES. Analogously, the PsAt and RigRot approaches (including 3B terms) are compared to study the orientational effects. (H2)nCs+ evaporation energies obtained within these models are depicted in Fig. 5(a) as functions of n. Fig. 5(b) shows the relative errors (of the approximated approaches with respect to the most accurate one) in the determination of the total energy. As expected, rotational effects are significant for small cluster sizes (about 10-12% for n < 13), where the H2 molecules close to the cation tend to orient perpendicularly to it, and become less important for larger cluster sizes. On the other hand, 3B effects steadily increase as the first solvation shell is being filled, reaching a maximum of about 15% for n = 14. This is due to the increase in the number of 3B partners as more polarizable molecules are attached to the cation. The extent of these effects does not continue to rise for larger cluster sizes because of the reduction in the polarization energy of molecules in the second shell due to their larger distance to the cation. Regarding (D2)nCs+ clusters, it has been found that, while 3B effects are nearly the same as those of the hydrogenated clusters, rotational effects are somewhat larger, accounting for about 14-18% within the first solvation shell. As a consequence of the above, these two effects have a noticeable impact on the evaporation energies of small clusters (n < 13) but their role becomes negligible for larger cluster sizes, as can be seen in Fig. 5(a). In particular, it is worth noting that the more approximated models reproduce quite well the behavior of the evaporation energies around the main anomalies, and thereby the experimental results.

Fig. 5: (a) Evaporation energies of (H2)nCs+ as functions of n for various approaches: rigid rotors (in red), pseudoatoms (in black) and rigid rotors with three-body terms removed from the PES (in blue). There are appreciable differences for small cluster sizes, but the behavior near n = 12 and 14 is very similar, as it is for larger cluster sizes. (b) Relative errors in the total energy of the cluster due to neglecting 3B terms in the PES (blue) and not accounting for H2 rotational degrees of freedom (black).
Finally, one may wonder about the sensitivity of the most salient results reported here with respect to variations of the PES parameters. To explore this aspect, some parameters of the non-covalent contribution of the H2-Cs+ pair interaction have been artificially modified (see Table S2, ESI†) so as to make the total interaction either less or more attractive by ≈14%. As seen from Fig. S5(a) (ESI†), the peak in the evaporation energy at n = 12 is robust with respect to these variations, while that at n = 14 disappears. Interestingly, in both cases the shell structure differs from that reported above: the more attractive PES leads to a compact shell with 12 molecules, whereas the less attractive one gives more diffuse structures with about 14-15 molecules in the first shell (Fig. S5(b), ESI†). It should be pointed out that possible inaccuracies of the present PES would imply much smaller modifications, which eventually should be tested against stringent spectroscopic measurements.1
Conclusions
In conclusion, mass spectra of (H2)nCs+ and (D2)nCs+ clusters (n ≤ 30) have been measured, and calculations of cluster evaporation energies, based on a new potential energy surface, have been able to reproduce the most important features of the experiment, namely the anomalies for cluster sizes around n = 12 and n = 14. Icosahedral (H2/D2)12Cs+ clusters are found to be especially stable, while the first solvation shell closes at 13 or 14 hydrogen or deuterium molecules, respectively. Solvation layers exhibit the typical characteristics of the well-known Atkins snowballs, especially for the deuterated clusters. In addition, it is found that an explicit account of rotational motion as well as of three-body induction interactions is important for the description of the first solvation shell. The experimental and computational methods presented here appear to be very well suited for studying the solvation of other alkali or alkaline-earth ions in hydrogen, as well as for an extension to studies of the adsorption of hydrogen on fullerenes or polyaromatic hydrocarbons doped with alkali atoms.11,49 Work in these directions is in progress.
Conflicts of interest
There are no conflicts to declare. | 2019-07-05T13:15:05.013Z | 2019-07-17T00:00:00.000 | {
"year": 2019,
"sha1": "58fc584a7c6163395b20c4efc9e41f0025fbf51c",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2019/cp/c9cp02017a",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f13dc0dfeb78bf6cc0e5c139fb7745d0181fe092",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
8775934 | pes2o/s2orc | v3-fos-license | A TALEN-based strategy for efficient bi-allelic miRNA ablation in human cells
This method paper presents a novel strategy based on the use of transcription activator-like effector nucleases (TALENs) together with homologous recombination (HR) to disrupt miRNA loci in cell lines. As such, it introduces a new tool that will be useful in many studies of miRNA function.
INTRODUCTION
MicroRNAs (miRNAs) are short noncoding RNAs that negatively regulate gene expression in mammalian cells by targeting the 3′ untranslated region of mRNAs. In recent years, it has become increasingly apparent that miRNAs are of crucial functional importance for normal development and physiology, as well as a factor in various diseases in mammals (Svoboda and Flemr 2010; Pauli et al. 2011; Amiel et al. 2012; Gommans and Berezikov 2012; Iorio and Croce 2012b). For instance, miR-21 was found to be overexpressed in virtually all types of human cancers and has thus emerged as an important therapeutic target in cancer treatment (Iorio and Croce 2012a; Li et al. 2012a,b). Recently, miR-21 has been shown to play crucial regulatory roles in cell growth, proliferation, and apoptosis, as well as in autoimmune and cardiovascular diseases (Buscaglia and Li 2011; Liu et al. 2013; Xu et al. 2013).
Despite their fundamental importance, the exact functions of miRNAs in the context of human development and disease processes remain largely unknown. This is, in part, due to a lack of effective methods for completely abolishing the expression of miRNAs in human cells and disease-relevant models (Park et al. 2010, 2012). Although targeted gene knockdown by short-interfering RNAs (siRNAs) provides a rapid and inexpensive tool to functionally study most protein-coding genes, it cannot be used to reduce mature miRNAs in a sensible way at the cellular level. Alternatives to siRNAs include small-molecule inhibitors, antisense oligonucleotides, anti-miR vectors (miRZips), and miRNA sponges (Krutzfeldt et al. 2005; Zhu et al. 2011; Hu et al. 2013b). The major limitations of these methods are (1) the transient nature of their effects, and (2) a high risk of off-target effects and resulting toxicity (Jackson et al. 2003; van Dongen et al. 2008; Khan et al. 2009), limitations highlighted by reports of discrepancies between the effects of miRNA inhibitors and genetic knockouts (Patrick et al. 2010; Park et al. 2012).
Transcription activator-like effector nucleases (TALENs) are powerful gene editing tools for uncovering gene functions (Clark et al. 2011; Miller et al. 2011; Joung and Sander 2013; Sun and Zhao 2013). Though relatively new, TALENs have been successfully employed in a broad variety of systems (Christian et al. 2010; Hockemeyer et al. 2011; Carlson et al. 2012; Tong et al. 2012; Sung et al. 2013; Zhang et al. 2013). TALENs are custom endonucleases that work as dimers to create double-strand breaks in their target DNA sequences (Clark et al. 2011; Miller et al. 2011; Joung and Sander 2013; Sun and Zhao 2013). Each TALEN typically recognizes a binding site of ∼20 bp, providing a high specificity of gene targeting. Coupled with the ability of context-independent binding (Cermak et al. 2011; Moore et al. 2012; Ansai et al. 2013; Kim et al. 2013a), TALENs are uniquely suited to precisely edit very small genes such as miRNAs (Hu et al. 2013a; Kim et al. 2013b).
A number of approaches have been developed for rapid assembly of custom TALENs (Cermak et al. 2011; Briggs et al. 2012; Sanjana et al. 2012; Sakuma et al. 2013; Uhde-Stone et al. 2013). With these advances, TALEN pairs can be generated easily and economically in a matter of days. Many protein-coding genes or genetic loci with long genomic sequences have been documented for the purpose of gene editing, but the short and noncoding nature of miRNAs presents different challenges. Hu et al. (2013a) have reported disruption of miRNA genes in human cells by TALENs. While it is possible to disrupt genes in the human genome with TALENs alone at a frequency of typically 2%-40%, averaging around 16% for mono-allelic disruptions (Kim et al. 2013a; Sakuma et al. 2013), isolating cells carrying rare bi-allelic disruptions requires time-consuming single cell-derivation and subsequent screening. Our approach differs from other strategies (Hu et al. 2013a; Kim et al. 2013b) in that we combine TALENs targeting the miRNA seed region with a homologous recombination (HR) donor vector carrying a selectable marker. Our strategy enables convenient positive selection, and the combination of NHEJ with stem-loop deletions results in efficient bi-allelic miRNA gene ablation, which is especially valuable for loci that may be difficult to target. Additionally, by using HR donors, endogenous loci can potentially be modified with custom sequences (such as IRES-fluorescent proteins) to allow functional assessment of endogenous gene expression and regulation (Hockemeyer et al. 2011).
As proof of concept, we targeted the human miR-21 seed region in cultured human HEK293 cells and successfully selected bi-allelic miRNA knockouts with high efficiency (87%). Quantitative RT-PCR analysis of three independent clones confirmed complete loss of mature miR-21 expression. Phenotypic analysis confirmed increased protein levels of the miR-21 target gene PDCD4, reduced cell proliferation, and changes in global miRNA expression profiles. Re-expression of miR-21 in a miR-21-knockout line restored miR-21 function, demonstrated by a decrease in target gene expression, further supporting the validity of our approach. Taken together, the high efficiency and ease of a positive selection protocol for bi-allelic miRNA deletions in the human genome provides a powerful tool for elucidating miRNA function in humans to realize, in the long term, their full therapeutic potential.
System design and knockout strategies
TALENs have almost no restriction with regard to their target sequences. To take advantage of this feature, we designed a TALEN pair flanking the ∼6-bp seed region of human miR-21, directing FokI cleavage to the seed region (Fig. 1A). TALEN-induced double-strand breaks can achieve gene knockouts in two main ways: by homologous recombination of exogenous donor DNA, or by nonhomologous end joining (NHEJ) repair. In the case of HR events, the donor replaces the entire miR-21 precursor with an RFP and a puromycin-resistance expression cassette (Fig. 1B). To completely knock out a gene of interest in diploid cells, the other allele must be disrupted by a simultaneous HR event or, more likely, by NHEJ-induced mutations. Due to the two selectable marker genes embedded in the HR donor vector (RFP and puromycin resistance), we can easily isolate the clonal populations of cells in which an HR event occurred. Because NHEJ is the predominant repair mechanism induced by double-strand breaks, selecting for HR events will most likely identify clones that harbor bi-allelic modifications, with the second allele carrying an NHEJ-mediated mutation.
Junction PCR and sequencing confirms seamless donor integration by HR at miR-21 locus
To evaluate correct TALEN-mediated HR donor integration in the human miR-21 genomic locus (chromosome 17q23.2; 55273409-55273480, adjacent to the coding gene TMEM49), we conducted transient transfection studies using TALENs in combination with an HR donor vector in HEK293 cells. Junction PCR with a combination of genome- and vector-specific primers (Fig. 1C) was performed as specified below. Successful amplification of the correct PCR products was obtained only in cells cotransfected with both TALENs and HR donor vectors (Fig. 2A) but not in cells transfected with the HR donor alone, demonstrating the importance of TALENs in mediating the HR event. Sequence analysis of the junction-PCR products demonstrated the seamless fusion of donor DNA with miR-21 genomic sequences (Fig. 2B), confirming the gene targeting ability of the designed TALENs and the HR donor vector.
Enrichment and isolation of miR-21 knockout candidates from single cell-derived clones
To enrich and isolate miR-21 precursor-deleted cells, HEK293 cells cotransfected with the miR-21-specific TALEN pair and HR donor vector were put under positive selection.
Potential miR-21 HR-candidate cells were selected by puromycin resistance and RFP expression (Fig. 2C). Out of 96 single cell-derived clones, 30 lines survived in the presence of puromycin and remained RFP-positive and puromycin-resistant for >2 mo. In the group transfected solely with the HR donor vector, only six colonies were initially formed, and none of these clones survived after prolonged puromycin treatment.
Genotyping after selection reveals highly efficient generation of bi-allelic knockouts
We next determined the genotypes of the selected, single cell-derived clones. As shown in Figure 2D, correctly targeted colonies were revealed by the presence of longer PCR products, while short PCR products indicate either WT alleles or small changes induced by NHEJ. Out of 23 successful PCR amplifications, we found that ∼96% (22/23) were mono-allelic for HR-induced miR-21 deletion and ∼4% (1/23) were bi-allelic for HR-induced miR-21 deletion, demonstrating that double miRNA gene ablation occurs in HEK293 cells via HR when coupled to TALEN-directed cleavage.
To determine whether a simultaneous modification of the other allele by NHEJ had occurred, we sequenced the PCR products spanning the targeted seed region of miR-21. Among 11 mono-allelic HR clones analyzed, a significant portion of the non-HR alleles (91%; 10/11) were NHEJ-modified (Fig. 2E). Importantly, and consistent with previous reports (Cermak et al. 2011), almost all modifications were centered on the targeted region, resulting in at least 4-bp deletions or complete changes of the seed sequence of miR-21. Taken together, these results show an 87% efficiency of our approach in generating bi-allelic modifications of miR-21 in the human genome.
Bi-allelic disruption abolishes miR-21 expression but does not affect expression of the neighboring gene TMEM49

We first investigated the loss of miR-21 expression in three independent bi-allelic clones by quantitative RT-PCR. Clone #5 bears deletions of the miR-21 precursor on both alleles (bi-allelic HR events). Clones #1 and #7 have one deletion of the miR-21 precursor (via HR) on one allele and another small deletion or mutations (via NHEJ) of the seed sequence on the other allele (Fig. 2E). As shown in Figure 3A, there was a total depletion of miR-21 expression in all three cell lines, compared to the parental control.

FIGURE 1. Schematic overview of the targeting strategy against miR-21 in the human genome using TALENs in combination with an HR donor vector. (A) miR-21 stem-loop structure; the mature miR-21 sequence is shown in red and the seed region is underlined. TALENs were designed to position the miRNA seed region in the central portion of the spacer, directing cleavage to this functionally essential miRNA region. (B) An HR donor plasmid was created corresponding to the cleavage location of the TALEN pair and carried 509-bp (5′ arm) and 600-bp (3′ arm) regions of homology to the miR-21 sequence, which, in the native genome, are separated by 202 bp that include the miR-21 stem-loop structure. Two LoxP sites flank the insulated expression cassette, which is composed of an EF1α promoter-driven RFP and puromycin-resistance gene (Puro), separated by a T2A linker (self-cleaving peptide sequence). (C) Locations of primers used for genotyping of HEK293 cells targeted with TALENs and HR donor. Primer pairs P2f, P2r and P3f, P3r were designed to amplify the junctions between the genome and the inserted HR donor cassette (830 bp and 1281 bp, respectively). Triple primers including two forward primers (P3f1 and P3f2) and one common reverse primer (P3r) were designed to co-amplify the HR knockout allele (P3f2 and P3r; 1281 bp) and the wild-type or NHEJ-modified allele (P3f1 and P3r; 1067 bp). Primer pair P1f and P1r was designed to amplify a 430-bp portion of the miR-21 region for subsequent sequence analysis to detect possible NHEJ events.

We next examined
whether custom TALEN-mediated miR-21 targeting led to any changes in the neighboring gene TMEM49 (also referred to as VMP1). We performed quantitative RT-PCR to examine the mRNA levels of TMEM49 in three miR-21 knockout cell lines and the parental control. We found that there was no significant change in the mRNA levels of TMEM49 compared to the control (Fig. 3B), confirming that neither TALEN-mediated gene replacement nor small deletions caused significant changes in the expression of this neighboring gene.
Loss of miR-21 causes an increase in target gene expression that can be rescued by re-expression of miR-21
To test whether miR-21 ablation caused corresponding changes in the expression of the tumor suppressor gene PDCD4 (Programmed cell death 4), a known target of miR-21, we performed Western blot analyses. As expected, while the basal PDCD4 protein level is barely detectable, it was drastically elevated following miR-21 knockout (Fig. 3C, lanes 1,2). To further validate whether the elevated expression of PDCD4 was due to the loss of miR-21, we transduced miR-21 knockout cells with lentiviral particles containing a cassette expressing miR-21 and GFP. As shown in Figure 3D, more than ∼80% of miR-21 knockout cells (RFP-positive) were also GFP-positive, suggesting expression of miR-21 in most transduced cells. This was confirmed by quantitative RT-PCR showing an ∼3000-fold increase in miR-21 RNA expression in the transduced cells vs. the control (Fig. 3E). Following restoration of miR-21 expression in miR-21 knockout cells, PDCD4 protein was partially reduced (Fig. 3C).
miR-21 ablation results in inhibition of cell proliferation and alterations in global miRNA expression
We next analyzed cell proliferation rates following miR-21 ablation in HEK293 cells. As shown in Figure 3F, the three independent miR-21-ablated lines tested all exhibited reduced cell proliferation compared to the parental control. The inhibition of cell proliferation occurred as early as Day 3 after plating and became more prominent with time, lasting as long as 8 d. We further tested the proliferation of cell line KO#5 with reintroduced miR-21. However, proliferation of KO#5 expressing miR-21 was not restored to the wild-type level, possibly due to the relatively lower expression levels of miR-21 compared to wild type (Fig. 3E), or to complex, nonreversible changes in the KO cell line. Nevertheless, we cannot rule out that the miR-21 phenotype is due to possible off-target effects.
To examine whether the reduced proliferation was associated with changes in other miRNAs, we profiled global miRNA expression in three independent bi-allelic miR-21 knockout lines (clones #1, #5, #7) compared to the parental control. Seventeen out of 760 miRNAs tested (∼2%) showed more than threefold up- or down-regulation in all three mutant lines, demonstrating strong concordance between the independent lines (Table 1). A BLASTn search against the NCBI nucleotide database confirmed that these miRNAs do not display sequence homology to the TALEN target sequences, indicating that the observed changes are secondary in nature and not off-target effects.
DISCUSSION
Gene targeting of miRNAs in mammalian cells by homologous recombination is inefficient (Bollag et al. 1989), which has limited the use of human disease models in elucidating miRNA functions and exploring their therapeutic potential. To overcome these limitations, we developed an approach for efficient bi-allelic miRNA ablation combining TALENs with a selectable HR donor vector. We demonstrate that this approach robustly abolished miR-21 expression both via seed region disruption and precise stem-loop structure removal. In addition, the HR-added marker genes allowed for efficient selection (87%) of cells carrying bi-allelic modifications. These findings establish a feasible approach for bi-allelic miRNA ablation in cultured human HEK293 cells, which should advance the study of miRNA function in cell culture model systems.

FIGURE 2. Genotyping analysis of HEK293 cells targeted with miR-21-directed TALENs and HR donor. (A) A combination of genome-specific and donor-cassette-specific primers amplified the expected PCR products of 830 bp (5′ arm) and 1281 bp (3′ arm) in cells cotransfected with TALENs and the HR donor vector. Transfection with only the HR donor vector did not result in any detectable PCR product. (B) Subsequent sequencing of the amplified PCR products confirmed seamless integration of both HR donor vector arms. (C) Selection reveals RFP-positive, puromycin-resistant single cell-derived colonies. (D) A triple-primer PCR strategy was used to determine whether mono- or bi-allelic HR events had occurred in single cell-derived clones. A donor-specific and a wild type-specific forward primer were combined with a wild type-specific reverse primer. Alleles with donor cassette integration were recognized by a 1281-bp amplicon, while WT or NHEJ events result in a 1067-bp amplicon. (E) Examples of seed region modification by NHEJ, compared to the wild type.
Programmable nucleases, such as zinc finger nucleases (ZFNs), meganucleases and, more recently, TALENs and CRISPR/Cas9-guided DNA endonucleases, have emerged as powerful tools for targeted genome editing (Rahman et al. 2011; Hafez and Hausner 2012; Perez-Pinera et al. 2012; Cho et al. 2013; Cong et al. 2013; Gaj et al. 2013; Mali et al. 2013). Among these, TALENs offer several advantages for miRNA perturbation, including the ability to bind any sequence in the genome, which enables specific targeting of small sequences, such as the miRNA seed sequence (∼6 bp). Several recent studies indicate high specificity and low cytotoxicity of TALENs compared to ZFNs and the CRISPR/Cas9 system (Fu et al. 2013; Kim et al. 2013b; Sun and Zhao 2013). In addition, their simple, modular design is easy to implement, due to their availability as an open-source technology.
However, there are some key limitations in the use of programmable nucleases to achieve gene ablation in diploid mammalian cells. While TALENs have the potential to induce mutations in the human genome at a frequency averaging around 16% (Kim et al. 2013a; Sakuma et al. 2013) by creating targeted double-strand breaks and subsequent NHEJ, the majority of mutations are mono-allelic and require time-consuming single cell-derivation and subsequent screening (Hu et al. 2013a). For example, an average mono-allelic efficiency of 16% would result in ∼2.6% bi-allelic modifications. A low-efficiency TALEN with 2% mono-allelic disruption would result in rare bi-allelic modifications of ∼0.04%. To overcome these limitations, we employed an approach combining TALENs targeting the miRNA seed region with an HR donor vector for deleting the entire miRNA stem-loop structure. The HR donor vector was engineered to include an insulated cassette for RFP and puromycin markers to allow positive selection by drug treatment and/or fluorescence-activated cell sorting. Our combined approach of miRNA gene targeting via TALENs and HR donor vectors was highly efficient and produced some intriguing results. First, we achieved robust bi-allelic gene modifications with an efficiency of 87%. By simple puromycin selection, we were able to establish more than 20 bi-allelic modified clones from one transfection experiment with a starting cell number of 2 × 10^5. Although most of the bi-allelic clones harbor one complete hairpin deletion and one seed region disruption, we speculated that the expression of the miRNA was effectively abolished due to the precise disruption of the miRNA seed region. In fact, both bi-allelic modified clones tested showed a complete loss of miR-21 expression, similar to that of the bi-allelic hairpin deletion. Another advantage of our HR donor is the fact that the HR selection cassette can be removed via its loxP sites, enabling a second round of screening for an even higher frequency of bi-allelic hairpin deletions, if desired.
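The back-of-envelope numbers quoted above follow from assuming that disruption of the two alleles occurs independently, so that the expected bi-allelic frequency is roughly the square of the mono-allelic one:

```python
def biallelic_fraction(p_mono):
    """Expected bi-allelic frequency if each allele is disrupted
    independently with probability p_mono (a rough approximation)."""
    return p_mono ** 2

for p in (0.16, 0.02):
    print(f"mono-allelic {p:.0%} -> bi-allelic ~{biallelic_fraction(p):.2%}")
# 16% -> ~2.56% and 2% -> ~0.04%, matching the estimates in the text
```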
Second, targeted miRNA inactivation is a powerful method capable of providing conclusive information about gene function in a cell-type- or disease-specific manner. This will help to establish the crucial link between genotype and phenotype, thus yielding functionally and clinically relevant knowledge. As expected, miR-21 knockout lines displayed an increased protein level of the miR-21 target gene PDCD4. Re-expression of miR-21 in miR-21-knockout cells restored its functional role, demonstrated by a decrease in protein expression of the target gene PDCD4, further confirming the validity of miR-21 knockout lines for functional analysis. Moreover, we observed a pronounced attenuation of cell proliferation in all three independent miR-21-ablated cell lines tested. Strikingly, miRNA profiling by qRT-PCR revealed large-scale changes in miRNA expression in the three independent knockout lines tested, highlighting the promise of this approach for future dissection of miRNA regulatory circuits. With the ease and efficiency of bi-allelic mutant generation, combined with the advantage of a stable phenotype, we envision that this approach will broaden our knowledge in deciphering the role of miRNAs in human physiology and disease.
Cell culture and transfection
The human embryonic kidney cell line HEK293 was maintained in high-glucose Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% FBS, 2 mM GlutaMax (Life Technologies), 100 units/mL penicillin and 100 units/mL streptomycin. All transfections were performed using Purefection transfection reagent according to the manufacturer's manual (Cat# LV750A-1; System Biosciences Inc.). Transfected cells were incubated at 37°C with 5% CO2.
TALEN design and TALEN expression plasmids
To target the seed region of the human miR-21 locus (Fig. 1A), a TALEN pair was designed using an online tool, TAL Effector Nucleotide Targeter 2.0 (Doyle et al. 2012). To streamline the design of miRNA-targeted TALENs, we followed these criteria: (1) TALEN binding sites were set to 20 bp, including the first T, to ensure high specificity of gene targeting; (2) spacer lengths of 15-25 bp were chosen to maximize cleavage efficiency; and (3) the miRNA seed sequence was centrally situated within the spacer to direct cleavage to the seed region (a filter encoding these criteria is sketched below). Following these criteria, the designed TALENs were assembled into a CMV-driven expression cassette using the EZ-TAL Assembly Kit (Cat# GE100A-1; System Biosciences Inc.), and the final constructs were confirmed by DNA sequencing.
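For illustration only, the sketch below encodes the three design criteria as a filter over a genomic context. It is a hypothetical helper, not the TAL Effector Nucleotide Targeter tool actually used, and it ignores the additional composition rules real TALEN designers apply.

```python
def find_talen_pairs(seq, seed_start, seed_end, site_len=20,
                     spacer_range=(15, 25)):
    """Enumerate candidate TALEN half-site pairs around a miRNA seed.

    seq: genomic context (top strand, 5'->3'); seed_start/seed_end: seed
    coordinates within seq. Criteria follow the text: 20-bp binding
    sites beginning with T on their own strand, a 15-25 bp spacer, and
    the seed centered within the spacer.
    """
    pairs = []
    seed_mid = (seed_start + seed_end) // 2
    for spacer in range(spacer_range[0], spacer_range[1] + 1):
        sp_start = seed_mid - spacer // 2        # center the seed
        sp_end = sp_start + spacer
        if sp_start - site_len < 0 or sp_end + site_len > len(seq):
            continue
        left = seq[sp_start - site_len:sp_start]
        right = seq[sp_end:sp_end + site_len]
        # Left site starts with T on the top strand; the right TALEN
        # binds the bottom strand, so its leading T appears as a
        # terminal A on the top strand.
        if left[0] == "T" and right[-1] == "A":
            pairs.append((left, right, spacer))
    return pairs
```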
miR-21 gene targeting and screening
A knockout HR donor vector that bears homologous arms of 509 bp (left arm) and 600 bp (right arm) was designed and constructed (Fig. 1B). In the genome, the regions represented on the vector arms are spaced 202 bp apart, flanking the miR-21 stem-loop region. The TALEN cut site is located 52 bp downstream from the left vector arm, within the miR-21 seed region. In the HR donor vector, the two homologous arms flank an insulated cassette to express dual selectable markers, RFP and puromycin resistance proteins.
After transfection, RFP-positive and puromycin-resistant single cells were isolated and expanded. Briefly, a total of ∼4 × 10 5 cells were transfected with 1 µg of donor plasmid and 1 µg of each TALEN-encoding plasmid. Cells were trypsinized 2 d after transfection and subsequently plated on 10-cm culture dishes for 24 h. Puromycin was then added to the culture medium (final concentration 4 µg/mL), to allow single cell-derived colony formation. After 2 wk of puromycin selection, RFP-positive clones were picked and expanded for further analysis.
Even cells that carry only one HR event may carry bi-allelic modifications of the miRNA if the miRNA seed region of the other allele is modified by NHEJ. To test for small NHEJ-induced sequence changes on the other allele, genomic PCR was performed to amplify the targeted sites using the primers P1f and P1r. The amplified PCR products were subjected to agarose gel electrophoresis and DNA purification using the QIAquick Gel Extraction Kit (QIAGEN), and the purified DNA was sequenced.
Western blot analysis
Whole-cell extracts were prepared using M-PER lysis buffer (Pierce). Protein concentration was measured by Bradford assay (Amresco), and equal protein amounts were used for SDS-PAGE. Briefly, proteins from whole-cell extracts were separated and transferred onto nitrocellulose membranes. The membranes were blocked with 1× TBS-T with 5% nonfat dry milk (Bio-Rad), followed by incubation overnight at 4°C with 1:1000-diluted primary rabbit anti-PDCD4 antibody (Cell Signaling). For protein loading controls, rabbit anti-human GAPDH antibody (Abcam) was used at 1:2500 dilution in Superblock T20 buffer (Pierce). The blot was then probed with 1:10,000-diluted HRP-conjugated goat anti-rabbit IgG secondary antibody (Pierce). The signal was detected with SuperSignal West Femto ECL (Pierce).
miR-21 rescue
To re-express miR-21 in a knockout line, we used a lentiviral gene delivery system. Briefly, an expression vector containing the miR-21 precursor construct (PMIRH21PA-1; SBI) was packaged into pseudo-lentiviral particles using LentiSuite (LV300A-1; SBI) (Mendenhall et al. 2012). The lentiviral particles (MOI = 5) were transduced into miR-21-knockout cells in the presence of the virus-transduction reagent Transdux (LV850A-1, SBI). The re-expression of miR-21 was confirmed by both co-expression of GFP and quantitative RT-PCR measuring mature miR-21 expression levels as follows.
Quantitative RT-PCR and global miRNA profiling

Total cellular RNA was prepared from cells using Trizol Reagent (Life Technologies) according to the manufacturer's instructions. Total RNA was converted to cDNA using RNA-Quant (Cat# RA430A-1; SBI), and qRT-PCR was performed. A miR-21-specific forward primer (CSRA 640A-1, SBI) was used in combination with a universal reverse primer (Cat# RA420AU-3, SBI). For the miR-21-neighboring gene TMEM49 (also referred to as VMP1), gene-specific forward and reverse primers (TMEM49-F: 5′-CGGCATAGGTCCATCTCTGCAG-3′ and TMEM49-R: 5′-TCAAACATCCAGGACAACCAG-3′) were used to evaluate mRNA expression levels. To perform global expression profiling of cellular miRNAs, miRNA-specific primers were obtained from the hsa-miRNome miRNA Profiler kit (Cat# RA660A-1; SBI) and used in combination with a universal reverse primer (SBI) according to the manufacturer's instructions. We used the comparative threshold cycle (Ct) method to quantify expression levels. The Cts were normalized to three housekeeping RNAs (human U6 snRNA, RNU43 snoRNA, and Hm/Ms/Rt U1 snRNA).
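As a worked example of the comparative Ct quantification, the snippet below implements the standard 2^-ΔΔCt calculation. Combining the three housekeeping RNAs by a simple arithmetic mean is one common convention and is an assumption here, since the exact normalization scheme is not specified in the text.

```python
import numpy as np

def fold_change(ct_target_s, ct_refs_s, ct_target_c, ct_refs_c):
    """Comparative Ct (2^-ΔΔCt) fold change of a target RNA.

    ct_refs_*: Ct values of the housekeeping RNAs (e.g. U6, RNU43, U1)
    in the sample (s) and control (c); targets are single Ct values.
    """
    d_ct_sample = ct_target_s - np.mean(ct_refs_s)
    d_ct_control = ct_target_c - np.mean(ct_refs_c)
    return 2.0 ** -(d_ct_sample - d_ct_control)

# A target appearing 2 cycles later in the knockout, references unchanged:
print(fold_change(27.0, [18, 19, 20], 25.0, [18, 19, 20]))   # -> 0.25
```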
Phenotypic growth analysis
Parental HEK293 cells and miR-21-knockout cell lines were seeded at 10,000 cells/well in a 24-well plate in culture medium as described. Growth was monitored by counting cells from each cell line with a hemocytometer on Days 3, 4, 5, and 8. Culture medium was changed on Days 3 and 6. The experiments were performed in triplicate.
Data collection and presentation
For live cell monitoring, cultured cells were monitored at various times under a fluorescent microscope. RFP or GFP live cell images were taken using the same exposure conditions and magnification within the group of comparison. For qPCR assays, all data are presented as mean ± SD (n = 3), unless stated otherwise. | 2018-04-03T00:06:00.844Z | 2014-06-01T00:00:00.000 | {
"year": 2014,
"sha1": "e7de633351f17af391d6dfd0f476d55f74149076",
"oa_license": "CCBYNC",
"oa_url": "http://rnajournal.cshlp.org/content/20/6/948.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5d95b9c23a27cb066b2cc3257a846953dcd3e15",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
23364122 | pes2o/s2orc | v3-fos-license | Changes in biodistribution on 68Ga-DOTA-Octreotate PET/CT after long acting somatostatin analogue therapy in neuroendocrine tumour patients may result in pseudoprogression
Background To evaluate the effects of long-acting somatostatin analogue (SSA) therapy on 68Ga-DOTA-octreotate (GaTate) uptake at physiological and metastatic sites in neuroendocrine tumour (NET) patients. Methods Twenty-one patients who underwent GaTate PET/CT before and after commencement of SSA therapy were reviewed. Maximum standardized uptake values (SUVmax) were measured in normal organs. Changes in uptake of 49 metastatic lesions in 12 patients with stable disease were also compared. Serum chromogranin-A (CgA) levels were available for correlation between scans in 17/21 patients. Results Mean thyroid, spleen and liver SUVmax decreased significantly following SSA therapy from a baseline of 5.9 to 3.5, 30.3 to 23.1 and 10.3 to 8.0, respectively (p < 0.0001 for all). Pituitary SUVmax increased from 10.2 to 11.0 (p = 0.004), whereas adrenal and salivary gland SUVmax did not change. Tumour SUVmax increased in 7 of 12 patients with stable disease; CgA was stable or decreasing in 5 of these patients. 30/49 (61%) metastatic lesions had an increase in SUVmax, and the lesion-to-liver uptake ratio increased in 40/49 (82%) following SSA therapy. Conclusion Long-acting SSA therapy decreases GaTate uptake in the thyroid, spleen and liver but in most cases increases the intensity of uptake within metastases. This has significant implications for interpretation of GaTate PET/CT following commencement of therapy, as increased intensity alone may not represent true progression. Our findings also suggest pre-dosing with SSA prior to PRRT may enable higher doses to be delivered to tumour whilst decreasing dose to normal tissues. Electronic supplementary material The online version of this article (10.1186/s40644-018-0136-x) contains supplementary material, which is available to authorized users.
Background
Neuroendocrine tumours (NETs) are a heterogeneous group of tumours, which arise most commonly in the gastroenteropancreatic tract but can arise from any organ where neuroendocrine cells reside [1,2]. These tumours have several biological properties in common, including the presence of somatostatin receptor (SSTR) expression in the majority of tumours [3]. Five SSTRs have been characterized to date, with SSTR-2 and SSTR-5 expression exhibited in 70-90% of all NETs [4].
The high prevalence of SSTR overexpression in NETs has enabled the use of synthetic somatostatin analogues (SSA) to control symptoms related to over production of biologically active amines and peptide hormones frequently associated with NETs and to possibly delay disease progression [5][6][7][8]. These are generally administered as slow-release formulations to increase patient convenience.
Available long-acting SSAs (LA-SSAs) currently include octreotide (Sandostatin-LAR, Novartis, Switzerland) and lanreotide (Somatuline, Ipsen, France). 68Ga-DOTA-Octreotate (GaTate) PET, which binds to SSTR-2, is becoming increasingly available as a superior diagnostic technique to stage and restage patients with NET. It is also used to determine suitability for peptide receptor radionuclide therapy (PRRT) based on the degree of radiotracer uptake in the tumour [9]. PRRT using 177Lu-DOTA-octreotate or 90Y-DOTA-octreotate has significant efficacy in controlling NETs that have progressed despite SSA therapy and is considered when GaTate PET uptake at tumour sites is greater than background liver uptake, indicating a sufficient target [10-14].
Administration of SSA therapy prior to GaTate PET/CT has the potential to alter radiotracer biodistribution. The EANM procedure guidelines recommend a time interval of 3-4 weeks after administration of long-acting analogues before performing GaTate PET/CT [15]. The guidelines, however, acknowledge that the effects of SSA therapy have not been well characterised. The aim of this study was to perform an intra-individual comparison of radiotracer uptake on GaTate PET/CT at both physiologic sites and sites of metastatic disease at baseline and following LA-SSA therapy.
Study population
We retrospectively identified 21 (13 M; 8 F, Age 30-89) patients with histologically-proven metastatic NET who had a GaTate PET/CT at baseline whilst treatment naïve (scan 1) and a restaging scan after commencing LA-SSA (scan 2) without any other intervening therapies such as chemotherapy or PRRT. All studies were performed at the Peter MacCallum Centre between June 2010 and February 2014. Scan 2 was performed after a variable amount of time of SSA therapy (mean and median 6 months, range 2-12 months) at the discretion of the referring physician. We recommend different intervals before restaging depending on the grade of the tumour. For European Neuroendocrine Tumour Society (ENETS) Grade 2 tumours which may progress more rapidly there is a greater imperative to restage earlier (eg. 3-6 months) so that other therapies such as peptide receptor radionuclide therapy (PRRT) can be used in the event of rapid progression. For ENETS Grade 1 tumours, the likelihood of progression within such a short period is remote, and we therefore recommend anatomic restaging in 6 months and GaTate PET/CT restaging in 12 months intervals, unless clinical or biochemical assessment raises suspicion of earlier disease progression. Serum chromogranin-A levels at the time of PET scans were available for comparison in 17 of 21 patients. Chromogranin-A levels were performed within 1, 2 and 3 months of the follow-up PET scan in 82%, 15% and 3%, respectively. Patient characteristics are presented in Table 1. The study constituted a clinical audit and quality assurance activity and institutional ethics approval was therefore not required. The study was undertaken in accordance with the Helsinki Declaration of 1975, as revised in 2008.
68Ga-DOTA-Octreotate (GaTate) PET/CT

68Ga-DOTA-Octreotate was synthesized as previously described [16,17]. For each production, 42 μg of peptide was used, but the product was divided and administered to several patients, depending on patient weight, generator yield and the number of patients scheduled. The administered peptide mass therefore ranged from 10 to 40 μg. Beginning 35-88 min after intravenous injection of 85-307 MBq 68Ga-DOTA-Octreotate (GaTate), patients were imaged from vertex to proximal thighs on a PET/CT scanner (Discovery 690, GE Healthcare, USA, or Biograph, Siemens Healthcare, Germany). A low-dose CT acquisition was obtained first, followed by the PET acquisition. No fasting was required. Patients were encouraged to void during the uptake phase. For patients on LA-SSA therapy, we perform GaTate PET/CT in the week prior to the next LA-SSA administration, i.e. 3-4 weeks after LA-SSA administration. A longer period after LA-SSA injection before repeating GaTate PET/CT is not feasible, particularly in patients deriving symptomatic benefit in relation to hormone secretion. A shorter period is more likely to result in competitive effects between LA-SSA and radiotracer. Therefore, the time period just before the next administration is most pragmatic. Importantly, the uptake time of the second scan in relation to LA-SSA injection was consistent throughout the cohort. 14/21 scan pairs were performed on the same PET/CT machine, with both PET/CT machines calibrated and standardized for SUV measurements.
Image analysis
A 3-D fusion workstation (MIMvista 5.0, MIMvista Corp., Cleveland, OH, USA) was used for image analysis. For quantitation at sites of physiological GaTate uptake and metastatic disease, a 3-D volume of interest (VOI) tool was used to draw VOIs around the pituitary gland, thyroid gland, parotid and submandibular glands, adrenal glands, liver, spleen and metastatic deposits to measure the maximum standardized uptake value (SUVmax). Splenic activity was not analysed in one patient owing to prior splenectomy. Four small VOIs were drawn over the proximal limbs and combined to calculate the average body background. Using an automated SUV threshold of 10 to encompass all tumour, with adjustment to exclude any sites of physiologic uptake such as spleen and kidney, total body tumour volume (mL) was also measured.
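For reference, the SUV quantitation behind these measurements reduces to the standard body-weight normalization sketched below. Decay correction of the injected dose and the exact VOI delineation are handled internally by the workstation and are assumptions in this sketch.

```python
import numpy as np

def suv_max(voxels_bq_per_ml, injected_dose_bq, body_weight_g, voi_mask):
    """SUVmax over a volume of interest.

    SUV = activity concentration (Bq/mL) * body weight (g)
          / injected, decay-corrected dose (Bq),
    the usual body-weight normalization.
    voxels_bq_per_ml: 3-D image array; voi_mask: boolean array of the
    same shape selecting the VOI.
    """
    suv = voxels_bq_per_ml * body_weight_g / injected_dose_bq
    return float(np.max(suv[voi_mask]))
```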
Subanalysis to account for potential confounders
Distribution of GaTate is potentially confounded by a 'tumour sink effect' [16], whereby higher tumour volumes act as a 'sink' for the injected radiotracer, resulting in decreased bioavailability and lower SUV measurements at other physiologic body sites. Therefore, if significant disease progression or regression occurred between scans, this could potentially result in changes of uptake at physiologic sites. To minimize this bias, a subgroup analysis was performed in patients with stable disease between the two studies, as defined by < 10% change in total body tumour volume or low (< 20 mL) total body tumour volume on both scans (Patients 1-12, Table 1). An additional sub-analysis of the cohort was performed in patients with longer or shorter uptake times following radiotracer administration.
Statistical analysis
Statistical analysis was performed using Analyse-it (Analyse-it Software Ltd., Leeds, UK). Comparisons were made using paired Student's t-tests for normally distributed variables, with a two-sided p-value of 0.05 considered statistically significant. Bland-Altman analysis was performed to evaluate variability in GaTate uptake time between scans.
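An equivalent analysis can be reproduced with open-source tools; the sketch below (with illustrative values, not study data) pairs a two-sided paired t-test with the Bland-Altman bias and 95% limits of agreement.

```python
import numpy as np
from scipy import stats

def compare_scans(values_scan1, values_scan2):
    """Paired t-test plus Bland-Altman bias and 95% limits of agreement
    for per-patient measurements at baseline and after SSA therapy."""
    a, b = np.asarray(values_scan1, float), np.asarray(values_scan2, float)
    t, p = stats.ttest_rel(a, b)          # two-sided paired t-test
    diff = b - a
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return {"t": t, "p": p, "bias": bias,
            "limits": (bias - half_width, bias + half_width)}

# e.g. liver SUVmax before/after in a few patients (illustrative only):
print(compare_scans([10.5, 9.8, 11.2, 10.0], [8.1, 7.9, 8.5, 7.6]))
```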
Results

In the subgroup analysis of patients with stable disease (n = 12), findings were similar, with mean splenic activity decreasing from 31.2 to 24.9 (p = 0.0006), thyroid from 5.8 to 3.4 (p = 0.0006) and liver from 10.4 to 8.3 (p < 0.0001). No change was seen in adrenal or salivary gland SUVmax, and pituitary gland SUVmax increased from 10.0 to 12.2 (p = 0.02).
Variability of intra-individual GaTate uptake time between scans did not appear to influence findings, with significant reduction in mean splenic, thyroid and hepatic SUVmax demonstrated in patients with either longer (n = 12) or shorter (n = 9) uptake times on their second GaTate scan (Additional file 1: Figure S1, Additional file 2: Figure S2 and Additional file 3: Figure S3).
Changes in metastatic lesion uptake
SUVmax of 49 metastatic lesions in patients with stable disease (n = 12) was measured at baseline and following long-acting somatostatin analogue therapy (1-5 lesions measured per patient) (Table 2). 30/49 (61%) of metastatic lesions had an increase in SUVmax following SSA therapy. On a per-patient analysis, metastatic lesion SUVmax increased in 7/12 (58%) patients. In 5/7 of these patients, chromogranin-A levels were available for correlation, and all 5 demonstrated stable or decreasing serum levels at the time of scan 2. Average metastatic lesion SUVmax decreased in 5/12 (42%) of patients, with all 5 of these patients also demonstrating stable/decreasing serum chromogranin-A levels at the time of scan 2. For bone, nodal and liver disease, the SUVmax increased by 5.0 ± 11.6, 3.0 ± 10.9 and 6.2 ± 5.9, respectively. An analysis of metastatic lesion SUVmax relative to hepatic activity was also performed. In patients with stable disease, 40/49 (82%) lesions had a SUVmax higher than liver at baseline compared to 44/49 (90%) following SSA therapy, resulting in a change in Krenning score. The metastatic lesion:liver SUVmax ratio increased in 40/49 (82%) of lesions following SSA therapy (Table 3).
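The lesion-to-liver comparison reduces to a simple ratio against background hepatic SUVmax, as sketched below with illustrative numbers. A ratio above 1 corresponds to uptake greater than background liver, the working threshold for PRRT suitability noted in the Background.

```python
import numpy as np

def lesion_liver_ratios(lesion_suvmax, liver_suvmax):
    """Lesion-to-liver SUVmax ratios and the count of lesions with
    uptake above background liver (ratio > 1)."""
    r = np.asarray(lesion_suvmax, float) / liver_suvmax
    return r, int(np.sum(r > 1.0))

# Illustrative values: two of three lesions exceed background liver.
ratios, n_above = lesion_liver_ratios([14.2, 6.1, 22.5], liver_suvmax=8.0)
print(ratios, n_above)
```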
Discussion
Our findings demonstrate that long-acting SSA therapy has variable effects on physiological 68 Ga-DOTA-Octreotate uptake in different organs with reduction of uptake in the thyroid gland, spleen and liver, slight increase in uptake in the pituitary gland and no effect on salivary and adrenal gland uptake. By performing a subanalysis in patients with relatively stable disease between scans, we were able to confidently exclude 'tumour sink effect' as a contributing cause for these changes. The changes were also not explained by differences in uptake period.
The approximate 25% and 20% reductions in physiologic splenic and hepatic GaTate SUVmax demonstrated following SSA therapy have implications for both interpretation of imaging and management of patients for PRRT. Our study demonstrated that SSA therapy increased the metastatic lesion:liver uptake ratio in 82% of lesions, thereby potentially increasing the Krenning score, a visual scoring system which uses tumour intensity relative to liver and spleen to grade uptake [18]. Although originally developed for interpretation of planar Indium-111 octreotide scanning, we and other groups apply the same scoring to GaTate PET/CT. The increase in metastatic lesion:liver uptake ratio was primarily due to a decrease in liver SUVmax.
These results have significant implications for interpretation of GaTate PET/CT for response assessment following commencement of SSA therapy. An increased tumour-to-hepatic and tumour-to-splenic ratio may result in an increase in Krenning score, which may be misinterpreted as disease progression. Moreover, SUVmax of metastatic lesions increased in 58% of patients with stable disease. Correlation with stable or decreasing chromogranin-A in these patients supports the rationale that the change in uptake intensity merely reflected altered biodistribution. In patients without interval disease progression in whom tumour SUVmax increased, the change did not meet the EORTC criterion of a 25% increase [19] in intensity to define progressive disease. Furthermore, the increased sensitivity could result in visualization of small-volume disease not seen at baseline. In our experience, this is most likely to occur with sub-cm lesions subject to partial volume effects. Our findings also suggest that SSA therapy increases the likelihood of a patient being considered suitable for PRRT, as most groups use a Krenning score of 3 or greater (uptake greater than background liver activity) to determine suitability [10].
Our results also have significant implications for delivery of PRRT with agents such as 177Lu- or 90Y-DOTA-octreotate. Of most relevance is the decrease in splenic uptake following SSA therapy, as myelosuppression secondary to bystander splenic irradiation may be a potential dose-limiting factor of PRRT [20,21]. Our findings suggest pretreatment with SSA therapy reduces physiologic splenic uptake and therefore may reduce total splenic radiation exposure from PRRT and related myelosuppression. The higher tumour uptake potentially increases the therapeutic index of PRRT. These findings suggest an approach somewhat analogous to the administration of 'cold' rituximab to saturate physiological binding sites prior to treatment with radiolabelled rituximab in patients with B-cell lymphoma, thereby increasing binding at sites of disease [22]. The administration of LA-SSA therapy prior to PRRT may improve the efficacy of PRRT in a proportion of patients by altering biodistribution and increasing binding in metastatic lesions (Fig. 5). On the contrary, however, it does appear to decrease metastatic lesion uptake in a proportion of patients, potentially rendering PRRT less effective in some patients.
Fig. 5 Maximum intensity projection Ga-68 DOTATATE PET images of a representative patient. Increase in metastatic lesion uptake post 30 mg Sandostatin LAR, in the setting of decreasing serum chromogranin levels and no change in anatomic size of lesions, suggests altered DOTATATE biodistribution following SSA resulting in pseudoprogression. Also note the diffusely decreased thyroid uptake, which is almost universally seen following SSA.
The increase in pituitary gland uptake, decrease in thyroid uptake, and lack of change in the salivary and adrenal glands following SSA therapy indicate that SSTR-uptake kinetics vary in different organs. The clinical relevance of these changes is uncertain, but the authors note that absent or faint thyroid uptake on GaTate PET/CT is a feature suggesting that the patient is likely receiving SSA therapy. The significant change in splenic and liver intensity also cautions against using these organs in isolation as references for windowing nuclear medicine images; scaling images according to a fixed SUV threshold may be preferred, as sketched below.
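As a concrete illustration of fixed-SUV windowing, the sketch below rescales a PET image to a fixed display ceiling rather than to a reference-organ intensity. The array values, the SUV ceiling of 10, and the use of NumPy are illustrative assumptions, not part of the original study.

import numpy as np

def window_fixed_suv(suv_image: np.ndarray, suv_max_display: float = 10.0) -> np.ndarray:
    """Map SUV values to a [0, 1] display range using a fixed SUV ceiling.

    Unlike liver- or spleen-referenced scaling, the display does not shift
    when physiologic reference-organ uptake changes after SSA therapy.
    """
    return np.clip(suv_image / suv_max_display, 0.0, 1.0)

# Example: a synthetic 2x2 "image" of SUV values
img = np.array([[0.5, 4.8], [8.0, 12.0]])
print(window_fixed_suv(img))  # values above the ceiling saturate at 1.0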
Our results are supported by the study of Velikyan et al. [23], in which different doses of short-acting SSA administered immediately prior to injection of 68Ga-DOTATOC for PET scanning influenced the degree of NET uptake: a dose of 50 μg Octreotide was associated with increased NET uptake, while higher doses of 250 μg and 500 μg were associated with a decrease in NET uptake in the same patient. The majority of patients (16/21) in our study were receiving 30 mg Sandostatin LAR monthly, with the remainder (5/21) on 20 or 40 mg Sandostatin LAR or 90-120 mg of Lanreotide monthly, ceased at least 4 weeks prior to the second PET scan. It is uncertain how much biologically active SSA was present in each patient at the time of the second PET scan, and whether the changes demonstrated were due to residual biologically active SSA or were longer-term effects of prior SSA exposure that was no longer active at the time of scanning.
We acknowledge there are several significant limitations of this retrospective study, including variability of GaTate uptake times between scans performed in each patient and 7/21 scan pairs being performed on different PET scanners. All PET/CT cameras at our institution are calibrated and standardized for SUVmax measurements, so any differences between machines would be expected to be minimal. Differences in intra-individual GaTate uptake times between scans appeared to have minimal influence on findings, with significant reduction in mean splenic, thyroid and hepatic SUVmax demonstrated in patients with either longer or shorter uptake times on their second GaTate scan. A further limitation was the large variation in time (2-13 months) that had elapsed between baseline and post-SSA PET scans, with any tumour progression or regression occurring during this interval likely to directly affect metastatic lesion uptake. We accounted for this as best as possible by only measuring changes in metastatic lesion uptake in patients with relatively stable disease between scans. Despite the above limitations, we believe our findings are important and address the paucity of literature evaluating the effects SSA therapy has on GaTate uptake at physiological and metastatic sites in NET patients and the potential implications this has for PRRT. Based on these results, it is quite possible that the EANM procedure guideline recommendation of waiting 3-4 weeks after administration of long-acting somatostatin analogues before performing GaTate PET/CT may not be justified, as an earlier timepoint could provide greater sensitivity in some patients [15]. It does, however, appear appropriate to perform GaTate PET/CT at a consistent timing relative to administration of long-acting somatostatin analogues; given the observed changes in biodistribution, a consistent time point following LA-SSA administration should be used. Further prospective, more controlled research of cohorts at multiple time points following differing doses and preparations of long-acting SSA would provide further insights.
Conclusion
Long-acting SSA therapy decreases GaTate uptake in the thyroid gland, spleen and liver but in most cases increases the metastatic lesion:liver uptake ratio. This has significant implications for interpretation of GaTate PET/CT, as SSA therapy may thereby increase the Krenning score or other quantitative parameters, resulting in apparent progression. In patients on therapy, consistent timing of GaTate PET/CT in relation to LA-SSA administration is pragmatic to minimise any bias attributable to competitive or other effects of LA-SSA at the time of imaging. Caution should be taken not to interpret changes in intensity of uptake alone as progression when comparing a post-therapy scan to a baseline scan in an LA-SSA-naïve patient, or when the dose of LA-SSA is changed. The changes observed after LA-SSA therapy may increase the likelihood of a patient being deemed suitable for PRRT. Our findings also suggest predosing with SSA prior to PRRT may enable higher doses to be delivered to tumour whilst decreasing dose to normal tissues in a proportion of patients, potentially reducing myelosuppression as a consequence of lower splenic irradiation.
Ovariectomy and chronic stress lead toward leptin resistance in the satiety centers and insulin resistance in the hippocampus of Sprague-Dawley rats
Aim To evaluate the changes in the expression level of gonadal steroid, insulin, and leptin receptors in the brain of adult Sprague-Dawley female rats due to ovariectomy and/or chronic stress. Methods Sixteen-week-old ovariectomized and non-ovariectomized female Sprague-Dawley rats were divided into two groups and exposed to three 10-day sessions of sham or chronic stress. After the last stress session, the brains were collected and free-floating immunohistochemical staining was performed using androgen (AR), progesterone (PR), estrogen-β (ER-β), insulin (IR-α), and leptin receptor (ObR) antibodies. The level of receptor expression was analyzed in hypothalamic (HTH), cortical (CTX), dopaminergic (VTA/SNC), and hippocampal regions (HIPP). Results Ovariectomy downregulated AR in the hypothalamic satiety centers and hippocampus. It prevented or attenuated the stress-specific upregulation of AR in these regions. The main difference in stress response between non-ovariectomized and ovariectomized females was in PR level. Ovariectomized ones had increased PR level in the HTH, VTA, and HIPP. The combination of stressors pushed the hypothalamic satiety centers toward a rise of ObR and susceptibility to leptin resistance. When exposed to combined stressors, the HIPP, SNC, and piriform cortex upregulated the expression of IR-α, raising the possibility of developing insulin resistance. Conclusion Ovariectomy exacerbates the effect of chronic stress by preventing the gonadal receptor-specific stress response reflected in the upregulation of AR in the satiety and hippocampal regions, while stress after ovariectomy usually raises PR. The final outcome of an inadequate stress response is reflected in the upregulation of ObR in the satiety centers and IR-α in the regions susceptible to early neurodegeneration. We discussed the possibility of stress-induced metabolic changes under conditions of hormone deprivation.
All cells in the body are working together in order to maintain homeostasis, or the physiological variance of conditions that supports a multitude of special functions. When the physiological variance is violated we say that the body is under stress. Any external or internal condition that disturbs the homeostasis is considered a stressor (1). The body can cope with stressors if they are short and act suddenly. Both short-term (acute) and long-term (chronic) stress affect the hypothalamus-pituitary-adrenal axis (HPA) and the sympathetic autonomic nervous system (SNS). The joint outcome of the two stress-related systems is the increase of adrenal glucocorticoids, which leads to enhanced metabolism and cognition, while the immune and reproductive systems are inhibited (2,3). These changes are known as the stress response and they are beneficial in short periods of time. When stressors act chronically, these changes may lead to reduction of the HPA and SNS stress resilience and a failure in the maintenance of homeostasis. Chronic stress increases the risk for metabolic disorder and cardiovascular disease (CVD), characterized by high blood pressure, obesity, and consequently hormonal imbalance (4). The stress response is sex specific and regulated by gonadal steroid hormones (5,6). In general, women manage stressors better, but only until the period of menopause, because of the presence of protective gonadal steroid hormones (7). Menopause is accompanied by weight gain and obesity, and menopausal women eventually exhibit the characteristics of CVD and metabolic syndrome (8). Obesity, in particular the abdominal phenotype, occurs as a response to chronic stress (9). Obesity is a result of an imbalance between the mechanisms controlling energy intake and energy consumption. Overall, long-term energy balance at the level of the organism and the individual cell is controlled by the hormones leptin and insulin (10,11). Leptin is a hormone produced by the fat tissue which acts via its receptor ObR (12,13). It was noted that leptin suppresses the HPA axis (14) and has an impact on stress-related behavior (15). Insulin is a pancreatic hormone whose level positively correlates with the amount of fat (11,16). Ovariectomy promotes menopause in animal models (17). High-fat diet and ovariectomy increase ObR expression in the lateral hypothalamic nuclei and barrel cortex (18).
Contrary to the energy intake, which is governed by well-described hypothalamic nuclei and peptide hormones that control them, energy consumption is related to the pattern of behavior governed by expression of gonadal steroids in the brain. The gonadal steroids influence gene expression in the central nervous system, and in this way they change reproductive behavior and behavior overall (19). Particularly responsive to gonadal steroids are neural circuits regulating autonomous functions, food-seeking behavior, and memory (20-22). We have previously shown that chronic stress induces changes in the distribution of the receptors for gonadal steroids at the level of the adrenal gland (23) and also proposed that certain changes induced by chronic stress also happen in the brain. The aim of this study was to evaluate changes in the expression level of receptors for gonadal steroids, ObR, and insulin receptor alpha (IR-α), in the brain of adult Sprague-Dawley female rats due to ovariectomy and/or chronic stress.
Experimental animals
This study was performed at the Animal Facility of the Faculty of Medicine Osijek and was approved by the Ethics Committee of the Croatian Ministry of Agriculture, approval number: 2158-61-07-11-51. Thirty-two 16-week-old female Sprague-Dawley rats were divided into two groups: non-ovariectomized (NON-OVX) and ovariectomized (OVX). These groups were subdivided into a chronic stress and a control group (Figure 1). Each group consisted of 8 animals that were housed within a standard laboratory setting. Standard laboratory rat food and tap water were available ad libitum, except in cases when food deprivation was a stressor. All procedures were carried out in agreement with the EU Directive on Laboratory Animals.
Ovariectomy and chronic stress protocol
Ovariectomy was performed at the age of 12 weeks. All surgical procedures regarding ovariectomy were performed on the same day on all animals by a surgeon proficient in such procedures. The Harlan protocol was followed (Harlan HUS-QREC-PRD-932, Issue: 01, Revision 03). Animals were anesthetized with isoflurane (Forane® isofluranum, Abbott Laboratories Ltd, Queenborough, UK). After the procedure, food and tap water were provided ad libitum. Animals were intensively monitored 72 hours after the procedure. In previous studies we showed that if ovariectomy was performed 4 weeks before the stress protocol, the stress caused by the surgical procedure can be considered as irrelevant and fully compensated. This strategy allowed us to omit sham-operated animals and reduce the total number of animals used. There was no difference in the behavioral response of OVX animals. Together with the chronic stress protocol, we performed an acute stress protocol in one part of the animals. The samples from this part of the experiment were analyzed and published by our Hungarian collaborators, who measured gonadal steroid hormones in the serum of ovariectomized animals and confirmed the procedures (24).
Animals were exposed to chronic stress in three sessions lasting 10 days each, according to Balog et al (23). In short, when the rats turned 19 weeks old, stress (S) animal groups were exposed to a combination of various stressors, such as cold restraint, food deprivation, irregular noise, and other stressors. Control (C) animal groups were exposed to the same environment and were handled equally, but the stressor was not present. After completion of the chronic stress protocol, the animals were 28 weeks old. Animal body weight was measured at the beginning of the study and after each session (Figure 1).
Tissue sampling
Animals were sacrificed after the last chronic stress session. They were anesthetized with the combination of Ketamine IM (Ketanest, Pfizer Corporation, New York City, NY, USA; concentration: 30 mg/kg) and inhalation gas (Forane® isofluranum, Abbott Laboratories Ltd, Chicago, IL, USA). The brains were isolated, fixed with 4% paraformaldehyde for 24 h, sucrose cryoprotected, frozen by immersion in precooled isopentane, and stored at -80°C until analysis.
Counting of immunopositive cells
This study analyzed the distribution of gonadal steroid receptors (AR, PR, ER-β) and receptors for hormones in charge of long-term energy balance maintenance (ObR and IR-α). The immunopositive cells (both neurons and glia) were counted using ImageJ software (US National Institutes of Health, Bethesda, MD, USA) in five rat brain regions: hypothalamus (HTH), cortex (CTX), hippocampus (HIPP), and two dopaminergic areas, the ventral tegmental area (VTA) and substantia nigra pars compacta (SNC). Counting of each slide was performed by 3 different persons, using coded numbers to prevent bias.
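To make the counting procedure concrete, the sketch below averages per-slide counts from three blinded raters; the rater values and the region name are hypothetical, and NumPy is an assumed tool (the study itself used ImageJ for the counting).

import numpy as np

# Hypothetical immunopositive-cell counts for one coded slide (region: ARC),
# one entry per blinded rater; codes hide group identity to prevent bias.
counts_by_rater = {"rater_1": 142, "rater_2": 150, "rater_3": 147}

values = np.array(list(counts_by_rater.values()))
print(f"mean count = {values.mean():.1f}, SD = {values.std(ddof=1):.1f}")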
Statistical analysis
The distribution of data was determined by the Shapiro-Wilk test. As the distribution was not normal, comparisons of specific sets of two subgroups were conducted using the Mann-Whitney test, with a statistical significance level of P < 0.01. The data used for comparison are shown in the Supplemental Tables.
Table legend: (A) effect of ovariectomy, (B) effect of chronic stress on NON-OVX animals, (C) effect of chronic stress on OVX animals, (D) difference between chronic stress response in NON-OVX and OVX, (E) cumulative effect of ovariectomy and chronic stress response; ↑ = significant increase, ↓ = significant decrease. Abbreviations: AR - androgen receptor, ARC - arcuate nucleus of hypothalamus, C - control group, CA1 - Cornu Ammonis region 1, CA3 - Cornu Ammonis region 3, CTX - cortex, DG - dentate gyrus, ER-β - estrogen receptor beta, HIPP - hippocampus, HTH - hypothalamus, LH - lateral nucleus of hypothalamus, NON-OVX - non-ovariectomized animals, OVX - ovariectomized animals, PIR - piriform cortex, PR - progesterone receptor, PV - paraventricular nucleus of hypothalamus, S - chronic stress group, SNC - substantia nigra pars compacta, VTA - ventral tegmental area.
(A) NON-OVX-C and OVX-C groups were compared to observe the effect of ovariectomy, (B) NON-OVX-C and NON-OVX-S groups were compared in order to notice changes due to chronic stress exposure, (C) OVX-C and OVX-S groups were compared to determine the effect of chronic stress in ovariectomized female rats, particularly to determine whether the direction of changes was similar to that in non-ovariectomized females, (D) NON-OVX-S and OVX-S groups were compared to observe the influence of ovariectomy on the response to chronic stress, and (E) the NON-OVX-C group was compared to the OVX-S group in order to reveal the combined impact of ovariectomy and chronic stress.
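A minimal sketch of this testing pipeline is given below, assuming the per-group receptor counts live in simple arrays; the group data shown are placeholders, and SciPy is an assumed tool rather than the software actually used in the study.

import numpy as np
from scipy import stats

# Placeholder counts per animal (n = 8 per group), not the study's data.
non_ovx_c = np.array([140, 151, 148, 143, 156, 139, 150, 147])
ovx_c     = np.array([118, 124, 121, 130, 116, 127, 122, 119])

# Normality is screened first with Shapiro-Wilk; the study's data failed it,
# so subgroup pairs were compared with the non-parametric Mann-Whitney test
# at alpha = 0.01.
_, p_norm = stats.shapiro(non_ovx_c)
_, p_mw = stats.mannwhitneyu(non_ovx_c, ovx_c, alternative="two-sided")
print(f"Shapiro p = {p_norm:.3f}; Mann-Whitney p = {p_mw:.4f}; "
      f"significant: {p_mw < 0.01}")  # comparison (A): effect of ovariectomy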
Results
The body weight of the animals followed the same pattern described in the previous publication. Significant changes in body weight were not observed based only on ovariectomy or chronic stress. A significant increase in body weight was noticed just in the combination of ovariectomy and chronic stress after the first (OVX-S vs NON-OVX-S, P = 0.021) and second (OVX-S vs OVX-C, P = 0.031), but not after the third stress session (23).
The comparisons of specific sets of two animal sub-groups revealed significant differences described further in the text. The P-values of corresponding significant changes are shown in Tables 1, 2, and 3 (indicated in the text, as in the tables, with letters A, B, C, D, and E). The data used to compare animal sub-groups are presented in the Supplement (Supplemental Tables 1-5).
Distribution of analyzed receptors in HTH regions in charge of energy stores for the sake of homeostasis maintenance
In the HTH, three sub-regions were analyzed: arcuate nucleus (ARC), lateral hypothalamic nucleus (LH), and paraventricular nucleus (PV). ARC interprets leptin and insulin signaling to PV and LH. PV further activates either a rise in energy expenditure or a decrease in food intake. Contrary, LH activates programs that encourage eating (11). [...] (Supplemental Table 1). (C) Ovariectomized female rats under stress also increased AR expression only in PV. Upregulation of PR in LH was typical for ovariectomized females and became significant after chronic stress. (D) The two chronically stressed groups of animals were significantly different in their ability to raise AR levels in satiety centers upon chronic stress; ovariectomized females successfully raised AR just in PV (Table 1). (E) When combined, chronic stress and ovariectomy downregulated ER-β in ARC, and AR in LH (Figure 2). Oppositely, the upregulation of PR (in LH) after ovariectomy became even higher after the additional impact of chronic stress (Supplemental Table 2).
As a general rule, ovariectomy downregulated AR in hypothalamic satiety centers, while chronic stress upregulated the same receptor. Ovariectomy dictated the direction of gonadal receptor expression under the condition of chronic stress, which was particularly clear in the rise of PR in LH. The effect of chronic stress overrode ovariectomy just in the case of upregulation of AR in PV. Ovariectomy and chronic stress worked in conjunction to downregulate ER-β in ARC.
Ovariectomy or stress downregulated ObR in satiety centers, while the combination of the two led toward paradoxical upregulation.
(A) Ovariectomy alone downregulated ObR in ARC, while (B) mere exposure to chronic stress downregulated the expression level of ObR in LH and PV. Paradoxically, the combination of stress and ovariectomy (E) caused ObR upregulation within ARC and PV, which might be a sign of developing leptin resistance (Figure 2). Since ovariectomy produced a change in the expression level of ObR just upon chronic stress (C), it is implied that ovariectomy was the key factor in ObR upregulation within ARC and PV upon chronic stress (Table 2).
(A) Ovariectomy affected IR-α in LH (downregulation), while (B) chronic stress mainly affected IR-α in ARC (upregulation). In both nuclei, the combined effect of stress and ovariectomy was downregulation of IR-α, influenced more by ovariectomy than by stress (Table 3).
The general effect of chronic stress on satiety centers was downregulation of ObR, which could explain the decreased food intake in stressed animals. Although ovariectomy also induced similar changes, the combination of ovariectomy and stress paradoxically caused an increase in ObR, particularly in ARC, the major satiety center, and PV, a nucleus that determines the overall food intake. This change is a probable sign of leptin resistance, which triggers further deregulation.
Distribution of analyzed receptors in the CTX region involved in impression about food and the VTA region included in the feeding-for-reward pathway
One sub-region of CTX was analyzed, the piriform cortex (PIR), which is involved in the perception of smell. It may have an important role in the motivation of animals to eat even if the animal does not have the need for extra energy intake. The same region suffers the first neurodegenerative changes and is probably the most sensitive to changes under conditions of ovariectomy or chronic stress.
VTA represents a dopaminergic area which may be implicated in the control of the reward pathway based on food and food-related stimuli. The signals from this region can override the control of energy stores and motivate the animal to eat, thus leading to extra energy intake.
Combination of ovariectomy and stress upregulated expression of AR and PR in VTA.
(A) Ovariectomy caused downregulation of AR in PIR. Contrary to the hypothalamic satiety centers and PIR, ovariectomy upregulated AR and PR in VTA. This was the only region in which AR increased right after ovariectomy (Table 1). Also, PR increased immediately after ovariectomy in this region, but it increased even more after additional chronic stress (C) (Supplemental Table 2). Upregulation of PR due to stress in ovariectomized females was previously observed in LH and noted as specific to them. (C) The stress response of ovariectomized females was downregulation of AR and upregulation of PR and ER-β in PIR. However, ER-β was upregulated to the levels observed in NON-OVX-C animals (Supplemental Table 3). (D) Ovariectomized females upon chronic stress exposure ended up with much higher PR levels in VTA than non-ovariectomized ones (Table 1). (E) The combination of ovariectomy and chronic stress in VTA upregulated PR, which was already observed in LH (Figure 3). Also, the combination of ovariectomy and stress in PIR downregulated the expression of AR, like in LH.
VTA had a different response to ovariectomy and chronic stress than the HTH satiety regions: it maintained AR and upregulated PR levels at the same time. This different response might counterbalance the satiety regions in affective food perception and lead toward higher gratification from food.
The combined effect of stress and ovariectomy in VTA was downregulation of ObR.
(A) ObR levels increased in VTA upon ovariectomy, but (B) decreased upon chronic stress (Table 2). (E) The combined effect of ovariectomy and chronic stress was downregulation of ObR, contrary to the final effect in the HTH satiety centers.
Distribution of analyzed receptors in regions for declarative (HIPP) and non-declarative (SNC) memory
In the HIPP, three sub-regions were analyzed: dentate gyrus (DG) and two Cornu Ammonis regions, CA1 and CA3. These regions are involved in learning and declarative memory management. Also, neurogenesis has been proven to occur in DG. SNC is a dopaminergic region involved in non-declarative memory. We were interested in whether dysregulation of energy expenditure under conditions of ovariectomy and/or chronic stress could explain the susceptibility of memory regions toward neurodegeneration.
Ovariectomy and stress had opposite effects on steroid gonadal receptor expression in HIPP, and in combination they canceled the effects of each other.
(A) The general effect of ovariectomy was downregulation of AR in all HIPP sub-regions, and of PR in the CA regions. Ovariectomy caused a rise of ER-β in CA3. (B) Chronic stress had the same effect as ovariectomy on CA1, but the opposite effect in DG and CA3. At the same time (E), the combination of stress and ovariectomy canceled the influence of each other on gonadal steroid receptor expression in all HIPP sub-regions (Figure 4).
(B) Chronic stress upregulated PR expression in SNC, which was also observed in CA3 and was a typical effect of stress after ovariectomy in HTH. (C) Ovariectomy inverted the stress response within SNC, and (E) the combination of ovariectomy and chronic stress brought the levels of steroid gonadal receptors to the control values (Supplemental Tables 1, 2 and 3). We can conclude that stress and ovariectomy within SNC acted oppositely and mostly annulled each other's influence.
Ovariectomy and chronic stress led toward significant upregulation of IR-α in all HIPP regions and SNC, but had no effect on ObR in HIPP.
(A) ObR was upregulated in CA3 due to ovariectomy, while (B) chronic stress alone had no impact on its expression in HIPP or SNC (Table 2). In ovariectomized females, chronic stress upregulated ObR in CA1 (C). Finally, the levels of ObR in HIPP and SNC were not affected upon combined ovariectomy and stress (Supplemental Table 4).
Ovariectomy (A) and chronic stress individually (B), and in combination (E), caused significant IR-α upregulation in HIPP sub-regions and SNC (Table 3 and Supplemental Table 5). These results imply HIPP and SNC sensitivity to the development of insulin resistance in the case of ovariectomy and chronic stress.
Discussion
Results of this study showed that ovariectomy and chronic stress affected the expression of gonadal steroid, leptin, and insulin receptors in the rat brain. These effects were analyzed in the hypothalamic regions involved in the control of satiety and the dopaminergic areas involved in the control of feeding for reward and non-declarative memory. Furthermore, they were analyzed in the cortical region, involved in impression about food and feeding motivation (25), and the hippocampus, a brain structure that manages declarative learning and memory (26,27) and provides the environment for neurogenesis (28).
Ovariectomy downregulated gonadal steroid receptors and prevented or attenuated stress-specific upregulation of AR.
Ovariectomy caused downregulation of AR in hypothalamic regions that mediate promoting or inhibiting the signal for energy intake. Studies have shown that AR is related to anxiety behaviors in rats. Increased AR activation inhibits the stress response and vice versa. Knockout mice that lack the androgen receptor show increased HPA activation (29,30). However, there are no similar data on testosterone and progesterone receptors after ovariectomy. Our conclusion is that not just the downregulation of AR but also the rise of PR might serve as a marker of ovariectomy and be the underlying cause of physiological changes of satiety regions, particularly under conditions of chronic stress.
The effect of chronic stress on animals of reproductive age in our study was estimated by comparing the NON-OVX-S with the NON-OVX-C group. Stress caused an increase in AR in ARC and PV. The results indicate the possibility that the physiology of ARC after ovariectomy is characterized by an inability to increase AR, particularly in the chronic stress response. Since ARC is the satiety-regulating brain center, we concluded that this combined effect reflected on feeding behavior and body weight. In our previous study, non-ovariectomized animals exposed to chronic stress kept constant weight during the stress period (23).
It was unexpected, because some previous studies reported weight loss under chronic stress (31). We suppose that the rise of AR in ARC during the reproductive age of females is a protective factor under conditions of stress which helps in maintaining constant weight. At the same time, animals that were ovariectomized gained body weight in spite of stress.
Ovariectomy downregulated AR in the PIR region. We still have to explore whether the change in AR expression affects animal behavior in the direction of looking for a different source of food. In general, there are no studies exploring animals' affinity toward a certain taste of food under conditions of chronic stress.
Ovariectomy downregulated AR and PR in all regions of HIPP, with the exception of PR in DG. On the other hand, chronic stress in non-ovariectomized animals caused an increase in all gonadal steroid receptors in DG and CA3. Chronic stress had such an impact on the DG and CA3 regions that even ovariectomized animals after chronic stress successfully upregulated all gonadal steroid receptors, except PR in CA3. Interestingly, the CA1 sub-region differed in its response to chronic stress; instead of a rise of AR and PR we observed downregulation, like in ovariectomy. What is even more interesting, the individual effects of ovariectomy and chronic stress (overall downregulation) in CA1 became completely inverted if combined, and we saw overall upregulation of gonadal steroid receptors after chronic stress even in this region. Most studies dealing with the effect of reproductive hormones on hippocampal tissue overlook a possible role of PR. Our observation of significant changes induced in the expression level of PR after ovariectomy and stress indicates a possible role of progesterone in the regulation of the stress response in the hippocampus.
Ovariectomy-induced and chronic stress-induced effects on the expression of leptin and insulin receptors
Ovariectomy downregulated the levels of ObR in ARC, while chronic stress downregulated ObR in LH and PV. On the other hand, if we exposed ovariectomized animals to additional stress, the levels of ObR ended up significantly upregulated in ARC and PV. The mesencephalic gratification region VTA reacted in the opposite way: ObR was upregulated after ovariectomy, downregulated upon chronic stress, and, in the case of both, ended up downregulated. We can say that satiety regions are more likely to respond to a variety of stressors with changes in ObR levels than any other brain region. Due to the fact that the hypothalamus regulates autonomic nervous functions and controls overall body energy balance (32,33), we might expect profound long-term changes in the regulation of body weight in the combination of stress and ovariectomy.
Ovariectomy alone, but also chronic stress alone, significantly elevated the level of IR-α in all analyzed regions known for being susceptible to early neurodegeneration (PIR, SNC, HIPP). Satiety regions are spared from fluctuation in IR-α and are probably not susceptible to insulin resistance. We can imply the possibility that a rise in IR-α alone could be a good predictor of insulin resistance, but further functional studies are required for clarification. In the light of the recently discovered connection between neurodegeneration and insulin resistance (34), our results might point to chronic stress exacerbated by gonadal hormone deprivation as a possible cause of diabetes-C (35). The lack of an ovariectomized animal group with estrogen replacement therapy (ERT) might be considered a limitation of this study. Considering the fact that exogenous estrogen also influences the HPA axis, we did not include this animal group in the study, because ERT would introduce additional stress (36), making this group incomparable with the other groups. Besides, it has been shown that after ovariectomy the levels of endogenous estrogen are slowly being restored by other peripheral tissues (37).
In conclusion, our data suggested that ovariectomy in general downregulated the levels of gonadal steroid receptors, with the exception of VTA. The general effect of the chronic stress response was a rise of AR and PR in the brain of female rats in the period of reproductive life. When combined with ovariectomy, the stress response nullified ovariectomy and brought levels of steroid hormone receptors to those common for age, or even higher.
While ovariectomy downregulated the levels of ObR in ARC, chronic stress brought down ObR in PV and LH. The combination of ovariectomy and stress reversed the individual effects and led toward significant upregulation of ObR in hypothalamic satiety centers, but not in VTA, which probably works as a counterbalance. The most significant finding of our study is the possible link between the chronic stress response (amplified by ovariectomy) and the development of insulin resistance in the hippocampus and other brain regions susceptible to early neurodegeneration.
The described effects of chronic stress and gonadal steroid hormone deprivation were assessed in adult female Sprague-Dawley rats (38). Since AR notably changed in females, it would be interesting to clarify whether the observed differences under the same conditions could be observed in male counterparts or not. Also, further studies might reveal the alteration of the chronic stress response of aged females in the reproductive senescence period. Finally, to determine the implications of the observed differences, functional studies on cell lines with overexpressed IR and/or ObR are needed.
Figure 1. Experimental animals and chronic stress protocol.
Acknowledgment
This study was supported by Cedars Sinai Medical Center's International Research and Innovation in Medicine Program, the Association for Regional Cooperation in the Fields of Health, Science and Technology (RECOOP HST Association) and the participating Cedars-Sinai Medical Center - RECOOP Research Centers (CRRC). The authors wish to thank Livija Puljak, Pero Hrabač, and Nenad Šuvak for their valuable time and advice.
Funding
The study has been funded in part by the Croatian Science Foundation under project number IP-09-2014-2324 and an internal research grant from the Faculty of Medicine of Josip Juraj Strossmayer University of Osijek (VIF2015-MEFOS-1).
Ethical approval
This study was performed at the Animal Facility of the Faculty of Medicine Osijek and was approved by the Ethics Committee of the Croatian Ministry of Agriculture, approval number: 2158-61-07-11-51.
Declaration of authorship
VI wrote the manuscript. RB performed ovariectomies. MB and SB performed animal experiments. MB, SB, and MH sampled the tissue. VI and LV performed immunohistochemistry and acquired data. VI, LV, SB, and MB performed quantification. IL performed statistical analysis. VI, SB, IL, MB, RB and MH interpreted the results. MH, RB and SGV designed the experiment and critically revised the manuscript for intellectual content. All authors gave their final approval for publication.
Competing interests
All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any organization for the submitted work; no financial relationships with any organizations that might have an interest in the submitted work in the previous 3 years; no other relationships or activities that could appear to have influenced the submitted work.
Adrenal myelolipoma with hyperandrogenemia and schizophrenia
Adrenal myelolipoma with hyperandrogenemia is extremely rare. We report a case of a 26-year-old Chinese female with schizophrenia, who presented with a hormonally active tumor causing hyperandrogenemia. The mass was found by computerized tomography when she had her gynecologic examination for secondary amenorrhea, and it was confirmed to be an adrenal myelolipoma after a histopathological study. She was referred for a left adrenal laparoscopic excision, and the size of adrenal myelolipoma was found to be more than 10 cm. We report this case because large adrenal myelolipomas with hyperandrogenemia and schizophrenia are rare, and adrenal myelolipoma associated with hyperandrogenemia might be determined by the enzymes involved in the production of hormones.
Introduction
Adrenal myelolipoma is an infrequently encountered, nonfunctioning benign tumor of unknown etiology. In some cases, it presents with endocrine disorders, such as Cushing's syndrome, Conn's syndrome, and congenital adrenal hyperplasia. 1 However, few cases of virilization have been reported. To our knowledge, this kind of case is very rare, and there are only three related documented case reports in the English literature. Apart from virilization and other common symptoms, this kind of adrenal myelolipoma is often accompanied by other abnormalities such as type-2 diabetes mellitus, acanthosis nigricans, and growth retardation. We report one case with schizophrenia, and we hypothesize that it is associated with her tumor.
Case report
A 26-year-old woman with schizophrenia was admitted to our hospital for a left adrenal mass found during a gynecologic examination for secondary amenorrhea.
The patient was a hepatitis B-positive female with a history of schizophrenia for more than 10 years, who exhibited male characteristics (including hirsutism and menoxenia) but with no lumbago. At the age of 13 years, the patient began to menstruate, but the cycles were rather irregular. Six years ago, her menstrual cycle became normal because she was taking ethinylestradiol and cyproterone acetate tablets. In addition, she has been taking antipsychotic drugs for more than 10 years.
On a thorough laboratory examination, the patient had a high level of plasma testosterone (328.79 ng/mL). Complete blood count revealed elevated leukocytes (47.00/µL), thrombocytocrit (0.41%), and epithelia (33.5/µL). The levels of plasma cortisol, aldosterone, basal plasma renin activity, and angiotensin II showed no obvious abnormality. Abdominal sonography showed that there was a well-defined hyperechoic heterogeneous mass of 11.0×10.1×10.1 cm between the left adrenal gland and spleen. Computerized tomography scan of the abdomen with oral and intravenous contrast showed a large nonhomogeneous mass pushing the left kidney down in the left upper quadrant. It measured approximately 11.6×10.0 cm with multiple septa, which showed slight enhancement in the arterial phase and moderate enhancement in the venous phase, as did the solid component of the tumor. This mass also had a vaguely defined border with the tail of the pancreas and spleen. Besides, a low-density lesion could be seen in the left lobe of the liver (Figures 1 and 2).
The left adrenal mass specimen measured 13.5×10.5×6.5 cm. A mixture of mature adipose tissue and bone marrow elements could be seen in the low-power histological picture (Figure 3). Immunohistochemical staining was performed for Syn, CK, HMB45, Melan-A, and CgA (Figure 3).
The patient was referred for a left adrenal laparoscopic excision, and the postoperative hospital course was uneventful. She recovered quickly, and the level of plasma testosterone was 0.68 ng/mL (Figure 4). Her menstruation soon became regular and endocrine examinations returned to normal.
The patient's family gave their written informed consent for all or any part of this material to appear in this paper and all editions of Cancer Management and Research, and any other works or products, in any form or medium.
Discussion
Myelolipoma is a rare and benign neoplasm which was first described by Gierke 2 and named by Oberling in 1929. In the previously reported cases, endocrine disorders, such as Cushing's syndrome, Conn's syndrome, and congenital adrenal hyperplasia as a result of 17α-hydroxylase or 21α-hydroxylase deficiency, 1 were found to be associated with functioning tumors; myelolipoma with virilization is very rare, especially with such a large mass.
Most myelolipomas are asymptomatic and discovered incidentally on abdominal imaging, such as ultrasonography and computerized tomography, performed for some other indication. 1 Some may present with abdominal pain because of huge size or hemorrhage or necrosis within the tumor. The patient in the current case had no abdominal pain or other related discomfort even though the size of the adrenal myelolipoma was more than 10 cm. That was why she did not realize that her virilization was associated with the adrenal tumor until she underwent a gynecologic examination for secondary amenorrhea.
The etiology of adrenal myelolipoma is still unknown. Past research found that adrenal myelolipoma is associated with long periods of elevated ACTH. In addition, chronic stressful conditions such as diabetes mellitus, hypertension, obesity, chronic inflammatory processes, and malignancy are also observed in patients with adrenal myelolipoma. 3 On examination, the body mass index of the patient in the current case was 27.89 kg/m2, which is indicative of obesity. But she has neither hypertension nor diabetes.
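For context, the body mass index is computed as below; the weight and height in the worked example are hypothetical, chosen only to land near the reported value, since the patient's actual measurements are not given in the report.

\text{BMI} = \frac{\text{weight [kg]}}{\text{height [m]}^2}, \qquad \text{e.g.}\ \frac{75\ \text{kg}}{(1.64\ \text{m})^2} = \frac{75}{2.69} \approx 27.9\ \text{kg/m}^2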
Adrenal myelolipoma associated with hyperandrogenemia is much rarer (Table 1). One report hypothesizes that it might be determined by the aforementioned multiple factors, extrinsic compression by the large tumor, and the enzymes involved in the production of hormones, such as Melan-A. 4 In our case, Melan-A immunohistochemical staining was also positive. In addition, positive Syn staining indicated that the mass had neuroendocrine function, which could explain the elevated levels of androgen. Furthermore, extra-adrenal myelolipoma with virilization and Cushing's syndrome has also been reported before in patients whose immunohistochemistry showed that the mass was positive for calretinin, Melan-A, and Syn. 5 This patient had a history of schizophrenia for more than 9 years and has been taking antipsychotic drugs for more than 5 years. Schizophrenia is a brain disorder that affects how people think, feel, and perceive. People with schizophrenia can experience both hyper- and hypofunction of the hypothalamic-pituitary-adrenal axis, 6 as shown by a new study. In addition, another study shows that patients with schizotypal disorder, compared with healthy control subjects, have an enlarged pituitary volume. 7 So, we hypothesize that her schizophrenia may have caused the adrenal myelolipoma. Or, her schizophrenia may have been caused by the adrenal myelolipoma.
People hold different opinions regarding the size criteria for adrenal myelolipomas as an indication for surgical resection. Previously, the laparoscopic approach was not considered a good choice when the adrenal tumor size exceeded 5-6 cm. However, with the recent advancement of diagnostic imaging methods, some surgeons demonstrated that laparoscopic adrenalectomy can be used for giant adrenal myelolipoma. 8 A tumor with a large size, 15 cm at its longest dimension, has been resected by the laparoscopic approach recently. 9 The tumor mass in our patient was removed by laparoscopic resection, and her postoperative hospital course was uneventful.
Adrenal myelolipoma with virilization is very rare. We reported here a case with schizophrenia, and we hypothesize that this may explain the pathogenesis of the present case or, vice versa, that the adrenal myelolipoma could have caused her schizophrenia. In addition, this case supports the hypothesis that adrenal myelolipoma associated with hyperandrogenemia might be determined by the enzymes involved in the production of hormones.
Limitations
Disadvantage: it is only our conjecture that her schizophrenia is related to this tumor. Long-term follow-up should be done to observe the mental disorder and adrenal gland changes of these patients using imaging methods. If possible, genetic tests are likely to be helpful too.
Performances of a Research CFR Octane Rating Unit Engine and Dacia Single Cylinder SI Engine Ignited by a LASER System
At this time, the severe legislation regarding the limits on waste and exhaust gases released by thermal engines, together with the necessity of improving engine efficiency, pushes engine research toward the use of new technologies that can control the in-cylinder combustion process. Such new technology is represented by LASER spark plug systems, which can be successfully used in petrol engines. LASER spark plug technology can have many advantages for engine operation control; an ignition system that could provide improved combustion is one using plasma generation and a Q-switched LASER that produces pulses with high, MW-level power. The LASER spark plug device used in the current research was a LASER medium with an Nd:YAG/Cr4+:YAG ceramic structure made up of an 8.0-mm long, 1.0-at.% Nd:YAG ceramic, optically bonded to a Cr4+:YAG ceramic. It was developed and constructed similar to a classical spark plug and could be assembled on a CFR Octane Rating Unit Engine as well as on a Dacia Single Cylinder SI Engine, which led to several results, among which: influences on in-cylinder pressure, combustion, and pollutant emissions.
Introduction
Internal combustion engines are extremely important in transportation and energy production; therefore, any improvement will lead to a substantial decrease in pollutants and consequently greenhouse gases. Ignition is a complex phenomenon which greatly impacts combustion [1], especially its initial stages, which determine pollutant formation, flame propagation, and quenching. The ignition source has undergone few changes over the past hundred years. The classical spark plug is made up of two electrodes with a gap between them, where an electrical arc is produced by a high-voltage discharge. For several years researchers have been intent on finding a LASER-based ignition source [2], which replaces the classical spark plug with a pulsed, focused LASER beam, and they have also attempted to control ignition by a LASER source [3]. The development of the flame kernel size and the simultaneous NOx production are highly important [4], and in this respect a LASER spark plug source can improve engine combustion compared to classical spark plugs. LASER spark plug systems are intended to protect resources and decrease CO2 emissions, thus limiting the greenhouse effect. This could be achieved through the lower fuel consumption of the spark ignition (SI) engine system, owing to the high thermodynamic capacity resulting from direct injection. One of the main drawbacks is that with classical spark ignition the place of ignition cannot be specifically chosen. LASER-induced ignition could eliminate some of these difficulties. Several other ignition systems apart from the LASER spark plug are reviewed in [4], such as microwave ignition and high-frequency ignition.
LASER Spark Plug
LASER spark plug ignition is the chemical-kinetic mechanism of starting combustion by the stimulus of a LASER source. The scientific literature generally classifies the energetic interactions of a LASER with a gas into four schemes [5], characterised by the nanosecond domain of the LASER pulse, while the duration of the entire combustion can reach several hundreds of milliseconds. It takes only a few nanoseconds for the LASER energy to be deposited, followed by shock wave generation. Combustion may take from 100 ms to several seconds according to the air-fuel dosage, initial pressure, pulse energy, plasma size, plasma temperature, and initial temperature in the combustion chamber.
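To make the MW-level pulse power concrete, peak power is roughly pulse energy divided by pulse duration; the 10 mJ energy and 1 ns duration below are illustrative values typical of Q-switched systems, not figures taken from this study.

# Peak power of a Q-switched pulse: P_peak ≈ E_pulse / t_pulse
energy_j = 10e-3      # 10 mJ pulse energy (assumed, illustrative)
duration_s = 1e-9     # 1 ns pulse duration (assumed, illustrative)
peak_power_w = energy_j / duration_s
print(f"peak power ≈ {peak_power_w / 1e6:.0f} MW")   # ≈ 10 MW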
The main advantages of the LASER spark plug are the following [6, 7, 8, 9, 10]:
- a choice of arbitrary positioning of the ignition plasma in the combustion cylinder;
- absence of quenching effects by the spark plug electrodes;
- ignition of leaner mixtures than with the spark plug => lower combustion temperatures => less NOx emissions;
- no erosion effects as in the case of the spark plugs => lifetime of a LASER spark plug system expected to be significantly longer than that of a spark plug;
- high load/ignition pressures possible => increase in efficiency;
- precise ignition timing possible;
- exact regulation of the ignition energy deposited in the ignition plasma;
- easier possibility of multipoint ignition;
- shorter ignition delay time and shorter combustion time;
- fuel-lean ignition possible.
The disadvantages of the LASER spark plug are:
- high system costs;
- concept proven, but no commercial system available yet.
Experimental investigation
The experimental setup is shown in figure 3, with the technical details for the LASER to be found in articles [11,12,13]. The graphs in figure 4 suggest that in a leaner mix the cyclic dispersion is more intense than in the stoichiometric mix, the curves on the right having a wider spread; moreover, the variation range of the maximum values of the external outlines of the curves is wider at λ = 1.276 (8.9 bar compared to 4.3 bar for the classical spark plug, and 8.6 bar compared to 4.5 bar for the LASER spark plug, respectively). However, if cyclic dispersion is estimated based on the COV variation coefficient, this particular conclusion is refuted, as seen in figure 5, for the four values presented in the graphs. This proves the inconsistency of utilizing the variation coefficient as a criterion for cyclic dispersion, because it represents a ratio of two values (standard deviation and average value), and as a result both of them influence the COV value, as the sketch below illustrates.
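A minimal numeric sketch of this objection follows, using made-up per-cycle peak-pressure samples rather than the measured 50-cycle data: since COV = standard deviation / mean, two cycle sets with identical scatter can report different COV values simply because their means differ.

import numpy as np

# Two made-up sets of per-cycle maximum pressures (bar) with the SAME
# standard deviation but different mean values.
rng = np.random.default_rng(0)
spread = rng.normal(0.0, 1.5, size=50)   # identical cycle-to-cycle scatter
p_lean   = 20.0 + spread                 # lower-mean operating point
p_stoich = 40.0 + spread                 # higher-mean operating point

for name, p in [("lean", p_lean), ("stoich", p_stoich)]:
    cov = p.std(ddof=1) / p.mean()
    print(f"{name}: std = {p.std(ddof=1):.2f} bar, COV = {cov:.3f}")
# Same dispersion, different COV: the mean alone changed the criterion.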
The graphs in figure 6 show the maximum and average indicated pressure values of the 50 functional cycles. As seen, the values are lower for the LASER spark plug compared to those of the classical ignition, for all three combined values of the air-fuel dosage, which suggests lower power performance in the case of the LASER spark plug. This fact is confirmed in figure 7, which presents the indicated pressure-volume (p-V) diagram for λ = 0.9, with the same information holding for the other two λ values.
Indeed, as shown in the graph, the power of the mono-cylinder engine, Pe, is 7.9% lower in the case of the LASER spark plug (6.48 HP compared to 7.03 HP), evaluated at 2800 RPM and a 90% charge. The decrease is justified by the smaller area of the indicated p-V diagram (area difference A1-A2, detail A), an area corresponding to a mechanical work of 277.03 Nm for the classical spark plug and 255.18 Nm for the LASER spark plug. The graph also presents the estimated maximum power values of the four-cylinder engine, Pmax, at a rotation of 5200 RPM and a 100% charge: 47.33 HP for the classical spark plug and 43.6 HP for the LASER spark plug. It is well known that the technical specification of the engine indicates a maximum power of 54 HP, the difference being caused by the wear and tear of the engine and the estimation error. Figure 7 also shows the values of the actual specific fuel consumption, ce. As indicated in the graph, the specific consumption for the LASER spark plug is 6.9% lower than that of the classical spark plug. As already stated, the LASER spark plug has also been chosen because it ensures a decrease in exhaust gases, in particular nitrogen oxides. In this sense, figure 8 and figure 9 present exhaust gas values measured on the CFR engine, while figure 10 shows the values of the polluting substances mentioned in the graphs in the case of the Dacia mono-cylinder engine.
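The "area of the indicated diagram" corresponds to the cycle integral W = ∮ p dV; the sketch below evaluates it numerically with the trapezoidal rule on placeholder pressure-volume samples, since the measured diagram data are not reproduced here.

import numpy as np

def indicated_work(p_bar: np.ndarray, v_m3: np.ndarray) -> float:
    """Net indicated work W = closed-loop integral of p dV, in joules.

    p_bar, v_m3: pressure (bar) and cylinder volume (m^3) sampled over one
    full engine cycle, with the first point repeated at the end to close it.
    """
    p_pa = p_bar * 1e5                       # bar -> Pa
    dv = v_m3[1:] - v_m3[:-1]
    return float(np.sum(dv * (p_pa[1:] + p_pa[:-1]) / 2.0))  # trapezoid rule

# Placeholder 4-segment "cycle" just to show the call; real data would have
# hundreds of crank-angle-resolved samples.
p = np.array([1.0, 8.0, 40.0, 3.0, 1.0])
v = np.array([4.0e-4, 5.0e-5, 5.0e-5, 4.0e-4, 4.0e-4])
print(f"indicated work ≈ {indicated_work(p, v):.0f} J for this toy loop")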
Conclusions
Several important conclusions have resulted from the aforementioned experiments:
- the use of the LASER spark plug ensures a decrease of the specific fuel consumption for any composition of the air-fuel mix and for any ignition advance value;
- the study based on all the experimental data (the present paper shows only a part of the study) indicates that the use of the LASER spark plug ensures the reduction of exhaust gases down to a leaner air-fuel mix equivalent to an excess air coefficient of λ = 1.2;
- the use of the variation coefficient to estimate cyclic dispersion has the disadvantage of being a ratio of two values; consequently, it cannot lead to valid conclusions;
- the study confirms the well-known fact that cyclic dispersion, performance, and exhaust gases are influenced by the ignition advance and the quality of the air-fuel mix (by means of the excess air coefficient);
- the use of the LASER spark plug leads to an engine power reduction; the values depend on the quality of the air-fuel mix and the ignition advance.
Au Quantum Dot/Nickel Tetraminophthalocyanine-Graphene Oxide-Based Photoelectrochemical Microsensor for Ultrasensitive Epinephrine Detection.
Owing to the importance of epinephrine as a neurotransmitter and hormone, sensitive methods are required for its detection. We have developed a sensitive photoelectrochemical (PEC) microsensor based on gold quantum dots (Au QDs) decorated on a nickel tetraminophthalocyanine–graphene oxide (NiTAPc-Gr) composite. NiTAPc was covalently attached to the surface of graphene oxide to prepare NiTAPc-Gr, which exhibits remarkable stability and PEC performance. In situ growth of Au QDs on the NiTAPc-Gr surface was achieved using chemical reduction at room temperature. The synthesized materials were characterized by Fourier transform infrared spectroscopy, ultraviolet–visible spectroscopy, X-ray photoelectron spectroscopy, scanning electron microscopy, transmission electron microscopy, and electrochemical impedance spectroscopy. Au QDs@NiTAPc-Gr provided a much greater photocurrent than NiTAPc-Gr, making it suitable for the ultrasensitive PEC detection of epinephrine. The proposed PEC strategy exhibited a wide linear range of 0.12–243.9 nM with a low detection limit of 17.9 pM (S/N = 3). Additionally, the fabricated PEC sensor showed excellent sensitivity, remarkable stability, and good selectivity. This simple, fast, and low-cost strategy was successfully applied to the analysis of human serum samples, indicating the potential of this method for clinical detection applications.
■ INTRODUCTION
Epinephrine (EP), an important neurotransmitter and hormone, can improve the survival rate of cardiac arrest patients by increasing the force and rate of heart contractions. 1,2 However, excess EP or subcutaneous injection of EP into a vein can be deadly, as it can cause a sudden rise in blood pressure, cerebral hemorrhage, or even ventricular fibrillation. 3,4 For this reason, the use of EP in sports is banned by the World Anti-Doping Agency. Therefore, it is very important to realize the ultrasensitive sensing of EP. Some conventional detection techniques, such as colorimetry, 5 fluorescence spectrophotometry, 6 and high-performance liquid chromatography (HPLC), 7 have been successfully used for the determination of EP. Recently, electrochemical analysis, 8 capillary electrophoresis, 9 and chemiluminescence 10 have been shown to be applicable to the detection of EP. Furthermore, photoelectrochemical (PEC) analysis is a rapidly developing method that can provide high precision, remarkable sensitivity, and easy integration using simple equipment based on appropriate photoactive materials. 11−15 Various organic semiconductors 16,17 and inorganic semiconductors 18,19 have been used owing to their unique functions and photochemical activity.
Phthalocyanine metal derivatives have been widely used in applications in the spin-dyeing industry, medicine, and electrocatalytic analysis owing to their attractive features and functions. 20−22 Nickel tetraminophthalocyanine (NiTAPc) is a phthalocyanine metal derivative well known for its excellent optical properties, chemical stability, and low cost. 23,24 Moreover, high electrocatalytic activity can be achieved owing to the large number of amino groups in NiTAPc. 25 Significantly, NiTAPc shows strong absorption in the region of 600−800 nm and high molar extinction coefficients in the near-infrared region, which has been exploited for PEC analysis. 26,27 Recently, graphene oxide (GO) has been exploited in the field of PEC analysis. 28 GO, which is usually prepared from graphite by oxidation using a strong acid, consists of sheets bearing carboxyl, hydroxyl, and epoxy groups and exhibits photoelectric activity. 29 GO has good water solubility and excellent mechanical stability, which make it suitable for use as a template to form composites with various nanoparticles (NPs) or amino polymers for chemical analysis. 30−32 Previous studies have shown that NiTAPc, which is rich in amino groups, can be covalently attached to the surface of GO via chemical reaction with carboxyl groups to form nickel tetraminophthalocyanine-functionalized graphene oxide (NiTAPc-Gr). 33 NiTAPc-Gr exhibits the advantages of both its constituent materials, showing not only excellent PEC properties and high mechanical stability but also a large specific surface area. This structure with an enhanced adsorption capacity has been successfully used for supercapacitors and micromolecule detection. 34,35 As unique functional materials, metallic NPs have been widely researched in various fields. 36−38 To date, metallic NPs have been extensively applied in photodetectors, 39 energy applications, 40 organic compound analysis, 41,42 and medical diagnosis and therapy. 43 Notably, localized surface plasmon resonance in Au quantum dots (QDs) can be excited, which facilities the absorption of visible and even near-infrared photons, effectively boosts the rate of electron−hole formation, and promotes the separation of photogenerated charge carriers near the semiconductor, which can be transformed into a strong and stable electrical signal. 44,45 Owing to their high specific surface area, excellent photocatalytic activity, and good biocompatibility, Au QDs have been applied as efficient lightharvesting enhancers in PEC analysis; for instance, TiO 2 -MoS 2 -Au NPs 46 and CdS-Au QDs. 47 Au NPs integrated with ZnAgInS QDs 48 have been used for specific purposes. The development of novel photoelectrodes is inevitable because of shortcomings during the inception phase.
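The "high molar extinction coefficient" claim can be tied to expected absorbance through the Beer-Lambert law A = ε·l·c; the ε, path length, and concentration below are order-of-magnitude assumptions for a phthalocyanine Q band, not values measured in this work.

# Beer-Lambert estimate: A = epsilon * l * c
epsilon = 1.5e5     # L mol^-1 cm^-1, assumed Q-band molar extinction coefficient
path_cm = 1.0       # standard cuvette path length
conc_m = 5e-6       # 5 uM solution (assumed)
absorbance = epsilon * path_cm * conc_m
print(f"A ≈ {absorbance:.2f}")   # ≈ 0.75, a strong, easily measured band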
In this work, we report the PEC analysis of EP using a heterostructure composite consisting of Au QDs decorated on NiTAPc-Gr (Au QDs@NiTAPc-Gr) as a signal indicator. This PEC sensor has significant advantages: (1) The Au QDs@ NiTAPc-Gr composite material with specific structural features and a high specific surface area was synthesized by coupling the photoactive template material NiTAPc-Gr and a signal enhancer (Au QDs), which not only provided stability and increased the photoelectric catalytic activity but also reduced the self-aggregation of Au QDs, leading to an enhanced electrical signal output. (2) The sensitivity of the PEC sensor is comparable to that of enzymes; however, unlike enzyme sensors, the PEC sensor does not suffer from inactivation. (3) The Au QDs@NiTAPc-Gr heterostructure is ultrasensitive to visible and even near-infrared light, suggesting that the PEC sensor has broad applicability for the clinical detection of small molecules owing to the strong penetrability of the near-infrared light into cell tissues. However, such sensors have general disadvantages such as relatively short lifetimes, usually requiring replacement after 1−3 years. Moreover, the electrolytic solution should be carefully maintained and replenished regularly. Nevertheless, the PEC sensor was successfully applied to ultrasensitive EP detection, exhibiting rapid response, high stability, wide linear detection range, and selectivity. These findings provide insights into the development of heterostructures for PEC analysis and new methods for EP detection.
■ RESULTS AND DISCUSSION
Characterization of PEC Materials. X-ray photoelectron spectroscopy (XPS) was used for the elemental analysis of the Au QDs@NiTAPc-Gr composite material (Figure 1A). The high-resolution Ni 2p XPS spectrum showed two peaks at 855.7 and 870.0 eV (Figure 1B), assigned to Ni 2p 3/2 and Ni 2p 1/2 of NiTAPc-Gr, respectively. 49 In Figure 1C, the O 1s peaks at 531.7 and 533.2 eV were attributed to C−O and C=O bonds, respectively. 50 The N 1s XPS spectrum (Figure 1D) of the Au QDs@NiTAPc-Gr composite material showed peaks of nitrogen functionalities at 399.2, 400.5, and 401.6 eV, corresponding to N−H and C−N bonding environments. 51 The C 1s peaks at 284.8, 286.0, and 287.9 eV corresponded to C−C, C−O, and C=O, respectively (Figure 1E). 52 Furthermore, the peaks located at 85.2 and 88.8 eV were attributable to Au 4f (Figure 1H). These findings imply the successful preparation of the Au QDs@NiTAPc-Gr composite material.
Furthermore, the PEC materials GO, NiTAPc, NiTAPc-Gr, and Au QDs@NiTAPc-Gr were characterized using Fourier transform infrared (FTIR) spectroscopy, as shown in Figure 1F. GO showed a C=O stretching vibration at 1724 cm −1 (curve a). 53 NiTAPc showed a bending vibration at 1609 cm −1 (curve b) related to the presence of −NH 2. In NiTAPc-Gr, NiTAPc, which is rich in amino groups, was covalently bound to the surface of GO via reactions with carboxyl groups. As a result, NiTAPc-Gr exhibited a strong absorption peak corresponding to C=O in amido linkages at 1694 cm −1 (curve c). Furthermore, the symmetric stretching vibration of GO at 1724 cm −1 disappeared, which could be attributed to the p−π-conjugated effect of amido linkages, resulting in a shift of the C=O absorption frequency toward lower wavenumbers. 54 In the case of Au QDs@NiTAPc-Gr, Au QDs were grown in situ on the surface of NiTAPc-Gr, and characteristic peaks of amido linkages and L-cysteine were observed at 1687, 1644, and 1613 cm −1. The PEC materials in dimethylformamide (DMF) were characterized using ultraviolet−visible (UV−vis) spectroscopy (Figure 1G). The absorption spectrum of NiTAPc showed two intense Q bands at ∼637 and 715 nm (curve b), whereas the Q bands of NiTAPc-Gr appeared at ∼629 and 683 nm (curve c). After further modification with Au QDs, which can absorb a range of visible light, the PEC material exhibited an increased absorbance at 623 nm, indicating the successful modification of Au QDs on NiTAPc-Gr. 55 The morphologies of the synthesized PEC materials were characterized by scanning electron microscopy (SEM) and transmission electron microscopy (TEM). As shown in Figure 2A, after NiTAPc reacted with GO, Hovenia dulcis Thunb.-like structures were clearly observed on the surfaces of the GO sheets (Figure 2B,C). Furthermore, TEM images revealed that NiTAPc-Gr was modified with Au QDs (Figure 2D) and that the Au QDs on the NiTAPc-Gr surface were well dispersed, with clear lattice planes and an average size of approximately 4 nm. These results indicated that the Au QDs@NiTAPc-Gr composite was successfully prepared.
PEC Characterization of Modified Electrodes. Electrochemical impedance spectroscopy (EIS) measurements were carried out at a potential of 0.2 V to characterize the fabricated PEC sensor in a solution of 3 mM [Fe(CN) 6 ] 3−/4−. As shown in Figure 3A, the charge-transfer resistance (Ret) of bare indium tin oxide (ITO) was approximately 49.9 Ω (curve a). A decrease in Ret was observed after modification with NiTAPc (∼41.8 Ω, curve b), which was attributed to the excellent conductivity of this material. An increased Ret was observed for the GO-modified ITO electrode (∼82.4 Ω, curve c). Additionally, the Ret of Au QDs@NiTAPc-Gr (∼279.3 Ω, curve e) was much larger than that of NiTAPc-Gr (∼233.6 Ω, curve d) owing to additional scattering at the surface of NiTAPc-Gr when the electron mean free path became comparable to the thickness of the metal film. 56 These results indicated that Au ions were reduced in the HAuCl 4 aqueous solution during the preparation of Au QDs@NiTAPc-Gr, resulting in the successful modification of the NiTAPc-Gr surface with Au QDs. In addition, cyclic voltammetry (CV) tests of various electrodes were conducted in the same aqueous solution between −0.2 and 0.8 V at a scan rate of 100 mV/s. As shown in Figure 3C, the anodic peak current for bare ITO was approximately 820.8 μA. When the electrode surface was modified with NiTAPc, the anodic peak current increased to 908.2 μA because of the good conductivity of this material. The GO-, NiTAPc-Gr-, and Au QDs@NiTAPc-Gr-modified electrodes exhibited lower anodic peak currents (782.1, 601.7, and 537.9 μA, respectively) owing to steric hindrance and electronic repulsion. Furthermore, Nyquist plots showing the effect of dark conditions and light illumination on the charge-transfer behavior in Au QDs@NiTAPc-Gr/ITO are displayed in Figure 3B. The corresponding EIS measurements were performed in 0.1 M pH 8.0 phosphate-buffered saline (PBS) buffer with or without EP. The photocurrent of Au QDs@NiTAPc-Gr was 4.5 times higher than that of NiTAPc-Gr, which could be attributed to the effects of Au QDs toward enhancing the response of NiTAPc-Gr. These results indicated that the Au QDs@NiTAPc-Gr composite can be selected as a photocatalytic material for fabricating PEC sensors.
Optimization of Experimental Conditions. To obtain high sensitivity for the determination of EP, two relevant experimental conditions, namely, the pH of the supporting electrolyte and the applied potential (V), were examined. In the presence of 150 nM EP, the photocurrent response increased with the increase in pH from 4.0 to 8.0, with the maximum value obtained at pH 8.0 (Figure 4B). Thus, pH 8.0 PBS buffer was chosen as the optimum condition. Additionally, the effect of V on the PEC sensor was examined, as shown in Figure 4C. The photocurrent response increased linearly with V in the range from 0 to −350 mV, which can be described by the linear equation I(μA) = −0.0087V (mV) + 1.26, R 2 = 0.9924 (Figure 4D). At potentials more negative than −350 mV, the photocurrent began to decrease. Therefore, an applied potential of −350 mV was used in subsequent experiments.
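As a quick arithmetic check of the fitted potential-response line, the short sketch below (ours, not the authors') evaluates the reported relation at the selected working potential; the variable names are illustrative only.

```python
# Evaluate the reported fit I (uA) = -0.0087 * V (mV) + 1.26
# at the selected applied potential of -350 mV.
V_mV = -350.0
I_uA = -0.0087 * V_mV + 1.26
print(f"I = {I_uA:.2f} uA at {V_mV:.0f} mV")  # ~4.31 uA, near the top of the response
```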
Detection of EP. Under optimal sensing conditions, the photocurrent responses of the PEC sensor to different concentrations of EP (0.12−243.9 nM) were recorded (Figure 5A), with each test repeated four times. The photocurrent increased with the increase in EP concentration, and a good linear relationship was observed, which can be expressed as I(μA) = 0.0195C (nM) + 1.72, R 2 = 0.9992 (Figure 5B), with a limit of detection (LOD) of 17.9 pM.
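For illustration, the calibration line above can be inverted to read an EP concentration off a measured photocurrent, and the conventional 3σ/slope relation can be used to sanity-check the reported LOD. The sketch below is ours; the blank noise value sigma_blank is a hypothetical placeholder chosen to reproduce an LOD near the reported 17.9 pM, not a figure from the paper.

```python
# Using the reported calibration I (uA) = 0.0195 * C (nM) + 1.72,
# valid over the linear range 0.12-243.9 nM.

SLOPE_UA_PER_NM = 0.0195
INTERCEPT_UA = 1.72

def ep_concentration_nm(photocurrent_ua: float) -> float:
    """Invert the calibration line to estimate EP concentration (nM)."""
    return (photocurrent_ua - INTERCEPT_UA) / SLOPE_UA_PER_NM

def lod_nm(sigma_blank_ua: float) -> float:
    """Conventional 3*sigma/slope limit of detection (nM)."""
    return 3.0 * sigma_blank_ua / SLOPE_UA_PER_NM

# A measured photocurrent of 3.67 uA corresponds to ~100 nM EP.
print(f"{ep_concentration_nm(3.67):.1f} nM")
# A hypothetical blank noise of 1.164e-4 uA reproduces an LOD of ~17.9 pM.
print(f"{lod_nm(1.164e-4) * 1e3:.1f} pM")
```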
In the Au QDs@NiTAPc-Gr structure, the NiTAPc-Gr network and Au QD sensitizer possess different absorption bands owing to their different energy gaps, allowing adequate utilization of the energy of the excitation light. 57 Au QDs and NiTAPc-Gr exhibit cascade band-edge levels that can promote the ultrafast transfer of charge and effectively inhibit the recombination of negatively charged electrons (e − ) and positively charged holes (h + ) when red excitation light is transmitted through the photosensitive Au QDs@NiTAPc-Gr material. Therefore, the photocurrent response is markedly enhanced. Under light irradiation, photogenerated electrons in the valence band (VB) are excited into the conduction band (CB) through a cascade starting from the Au QDs, forming electron−hole pairs. EP, as an electron donor, can capture the photogenerated holes, blocking recombination and facilitating the transfer of electrons from the conduction band of Au QDs to NiTAPc-Gr and then to the surface of the ITO electrode; this results in the oxidation of EP to EP + in the electrolyte and the generation of a strong current response. 27,58,59 The presence of EP enhances the electron transfer between the photosensitive materials, resulting in an increase of photocurrent. Thus, different concentrations of EP affect the magnitude of the photocurrent. A schematic diagram of the PEC detection on the sensor is shown in Figure 4A.
Additionally, using several analytical parameters, the performance of the PEC sensor was compared with that of some of the previously reported strategies for EP detection. As shown in Table 1, the current PEC strategy provides a much lower LOD and a somewhat wider linear range, which can be attributed to the large specific surface area and excellent stability and photoelectric conversion capacity of Au QDs@ NiTAPc-Gr under red-light illumination. Thus, the proposed PEC sensor has great potential for the determination of EP.
Stability, Reproducibility, and Selectivity of the PEC Sensor. To investigate the stability of the proposed PEC sensor, the photocurrent response during continuous detection of EP was recorded under periodic light irradiation for 650 s. Good short-term stability was observed, with a relative standard deviation (RSD) of 1.09%. Subsequently, the PEC sensor was stored in a refrigerator at 4°C and monitored occasionally. After 1 month, 90.8% of the initial photocurrent value was recorded, revealing that the PEC sensor has long-term stability. Furthermore, the photocurrents of five newly modified working electrodes tested in the same experiment gave an RSD of 1.62%, reflecting the good reproducibility of the PEC sensor. The photocurrent response of the PEC sensor was also investigated in the presence of several possible interferents at a 100-fold higher concentration than EP (75.0 vs 7.5 × 10 3 nM). As shown in Figure 5D, the initial photocurrent response increased rapidly after EP was added without interferents, and the subsequent addition of interferents, including Cu 2+ , K + , Mg 2+ , Ca 2+ , Fe 2+ , glucose, noradrenaline (NA), uric acid (UA), L-Cys, dopamine (DA), and tyrosine, did not cause a significant change in the photocurrent. These results indicated the good selectivity of the PEC sensor. To verify the practical applicability of the proposed PEC sensor, different amounts of EP were added to human serum samples (final concentrations of 5, 100, and 200 nM). These human serum samples were analyzed by the standard recovery method using the PEC sensor. As shown by the analysis results in Table 2, the recovery ranged from 98.80 to 99.44%. The parallel determination was performed five times (n = 5), and the RSD was less than 4%. These results indicated that the PEC sensor could be applied for EP detection in human serum samples.
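The recovery and RSD bookkeeping behind Table 2 is straightforward to reproduce; the sketch below uses hypothetical replicate values, not the paper's raw data, simply to show the calculation.

```python
import statistics

def recovery_percent(mean_measured_nm: float, spiked_nm: float) -> float:
    """Recovery (%) of a known spike in the standard recovery method."""
    return 100.0 * mean_measured_nm / spiked_nm

def rsd_percent(replicates: list) -> float:
    """Relative standard deviation (%) across parallel determinations."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical n = 5 determinations of a 100 nM EP spike in serum.
measured = [98.9, 99.4, 98.8, 99.2, 99.1]
print(f"recovery ~ {recovery_percent(statistics.mean(measured), 100.0):.2f} %")
print(f"RSD ~ {rsd_percent(measured):.2f} %")  # well below the reported < 4 %
```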
■ CONCLUSIONS
In this work, to construct a PEC sensor for EP, a novel Au QDs@NiTAPc-Gr composite was devised by uniformly growing Au QDs on the surface of NiTAPc-Gr via a chemical reduction method. The integration of Au QDs and NiTAPc-Gr produced synergetic effects that enhanced the photoelectric conversion capacity and absorption efficiency, thus increasing the photocurrent signal. The synthesized composite showed low self-aggregation of Au QDs, a large specific surface area, and excellent biocompatibility. This proposed PEC sensor based on Au QDs@NiTAPc-Gr exhibited a wide linear range (0.12−243.9 nM), a low LOD (17.9 pM), high stability, good reproducibility, and good selectivity for ultrasensitive EP detection. Finally, this PEC strategy was successfully applied to the biological analysis and detection of EP in human serum samples, with recoveries ranging from 98.80 to 99.44%. In view of these results, the PEC strategy has great potential for real-time monitoring of real samples.

■ EXPERIMENTAL SECTION

Apparatus. SEM and TEM images were recorded using an EVO-18 microscope (ZEISS, Oberkochen, Germany) and an FEI Tecnai-G2 F30 microscope (FEI Co., Hillsboro, OR), respectively. XPS spectra were obtained using a K-α spectrometer (Thermo Fisher Scientific Co., Waltham, MA). The FTIR spectra were collected with a Spectrum 65 FTIR spectrophotometer (PerkinElmer Co., Ltd., Waltham, MA). The UV−vis absorption spectra were obtained using a UV-6100 UV−vis-NIR spectrophotometer (Shanghai Mapada Instruments Co., Ltd., Shanghai, China). Red excitation light was provided by a PEAC 200A system (Ada Hengsheng Technology Development Co., Ltd., Tianjin, China). The distance between the illumination source and the sample cell was maintained at 10 cm. PEC measurements were performed using an electrochemical workstation (CHI760e, Chenhua Instrument Co., Ltd., Shanghai, China). ITO slices (≤6 Ω, South China Xiangcheng Technology Co., Ltd., Shenzhen, China) with an active surface area of 0.25 cm 2 were used as the working electrode, with Ag/AgCl as the reference electrode.
Synthesis of NiTAPc. NiTAPc was prepared by the reduction of the corresponding nitro-substituted intermediate. 69 Briefly, 4-nitrophthalimide (1.75 g), carbamide (10 g), ammonium molybdate (0.025 g), and NiCl 2 (1.14 g) were mixed and then fused by heating to 160°C.

Preparation of the Au QDs@NiTAPc-Gr Composite. GO was synthesized from graphite by a modified Hummers' method. 70 NiTAPc-Gr was synthesized via the following steps. GO (1 mg) was added to a mixed solution of thionyl chloride (10 mL) and DMF (10 mL), stirred, and then heated at 70°C for 24 h. NiTAPc (200 mg) was added to the above mixture after thionyl chloride was removed by vacuum distillation, and the mixture was heated for a further 96 h. The product was washed with ultrapure water and absolute ethyl alcohol several times and then dried in a vacuum drying chamber at 70°C for 6 h. Au QDs@NiTAPc-Gr was prepared via the in situ growth of Au QDs on the surface of NiTAPc-Gr, as described in a previous report from the Yuan group. 71 NiTAPc-Gr was easily dispersed in DMF, and then 1 mL of NiTAPc-Gr suspension (0.1 wt %) was mixed with 5 mL of L-cysteine aqueous solution (1 mM) using a high-speed shaker for 1 h. Subsequently, 5 mL of HAuCl 4 aqueous solution (0.3 mM) and 10 mL of AA aqueous solution (5 mM) were added to the mixture, which was stirred rapidly at room temperature for 3 h to give NiTAPc-Gr decorated with Au QDs. Finally, the product was washed with ultrapure water several times and dried in a vacuum drying chamber at 70°C for 6 h. The synthesis mechanism is shown in Scheme 1.
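For orientation, the molar quantities implied by the stated volumes and concentrations of the Au QD growth step can be tallied as below; the arithmetic is ours and purely descriptive, not part of the authors' protocol.

```python
# Reagent quantities in the Au QD growth step: moles = concentration * volume.
reagents = {
    "L-cysteine": (1e-3, 5e-3),    # 1 mM, 5 mL
    "HAuCl4":     (0.3e-3, 5e-3),  # 0.3 mM, 5 mL
    "AA":         (5e-3, 10e-3),   # 5 mM, 10 mL (reducing agent)
}
for name, (conc_mol_per_l, vol_l) in reagents.items():
    print(f"{name}: {conc_mol_per_l * vol_l * 1e6:.1f} umol")
# HAuCl4 (~1.5 umol) is the limiting species, with a large excess of reducer
# (~50 umol), consistent with complete reduction of Au(III) on NiTAPc-Gr.
```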
Electrode Fabrication. The ITO slices were cleaned with acetone, ethanol, and ultrapure water and then dried under an infrared lamp. The ITO substrate was then coated with 5 μL of a dispersion of Au QDs@NiTAPc-Gr (1 mg) in DMF (1 mL), which was allowed to dry naturally in the air. In addition, ITO substrates were coated with NiTAPc, GO, and NiTAPc-Gr using dispersions with the same concentration and volume as mentioned above. Furthermore, a bare ITO electrode was prepared by coating an ITO substrate with 5 μL of DMF. The coated area of the modified electrodes was 0.25 cm 2.
"year": 2020,
"sha1": "8ba64b7391504efb2ffa4d820aecaa0ba4c47ad3",
"oa_license": "acs-specific: authorchoice/editors choice usage agreement",
"oa_url": "https://doi.org/10.1021/acsomega.9b02998",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "52af6f976391a68919ed33b391c8443a33af1a16",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Safeguarding intangible cultural heritage: exploring the synergies in the transmission of Indigenous languages, dance and music practices in Southern Africa
ABSTRACT Like other forms of Intangible Cultural Heritage (ICH), Indigenous music and dance cultures have been adversely affected by significant social, economic, technological, and ecological modifications. The resultant transformations in cultural contexts, function, modes of transmission, and performance have endangered the sustainability of several music and dance traditions and their transmission languages. Moreover, efforts to actively support the vitality of jeopardised cultural heritage are being developed and implemented in the emerging fields of applied ethnomusicology, ethnochoreology and linguistics. The area of Indigenous language safeguarding has theoretical, epistemological, and practical models comparable to safeguarding Indigenous music and dance traditions. This similarity is essential to developing interdisciplinary models, policies, and strategies to support the transmission of Indigenous choreomusical and linguistic heritage. Therefore, this article demonstrates how Indigenous music, dance, and language are integral to African cultural heritage and argues for an interdisciplinary community-based model to safeguard them as part of the same cultural ecosystem.
Introduction
Indigenous living cultural practices such as music and dance suffered substantial obstacles under colonisation because of forced cultural assimilation (including music prohibitions) that accompanied the broader oppressions of colonial occupation and imperial exploitation of Indigenous lands (Harrison 2020). Colonial administration and Christian missionaries systematically tried to eliminate Indigenous cultural heritage through censorship laws such as the Witchcraft Suppression Act of 1899, which regarded witchcraft as 'the throwing of bones, the use of charms and any other means or devices adopted in the practice of sorcery' (Statute Law of Zimbabwe, 1899, 295). The Act was used to ban the mhande dance of the Karanga people in Masvingo and Midlands provinces in Zimbabwe (Plastow 1996). Using the same ordinance, the Gule Wamkulu dance of the Chewa people in Malawi, Zambia and Zimbabwe was also banned in the mid-1920s (Parry 1999). The colonial regimes also removed Indigenous children from their communities and families to Christian mission boarding schools that prohibited them from speaking their mother tongue, performing their dances and songs, and playing local instruments such as mbira, and forcibly displaced them from their ancestral lands that are central to their cultural heritage. According to Harrison (2020), colonial strategies that forcibly changed Indigenous relationships to place also risked fixing cultural practices in time (Bialostocka 2017, 17). Heritage practitioners and scholars have raised concerns that the institutionalisation of the living heritage risks 'freezing' it in time, and this kind of 'salvage ethnography' based on a 'preservationist ethos' might, in effect, hinder the development of cultural expressions (Alivizatou 2012, 14). We also argue that the implementation of the ICH Convention of 2003 and language revitalisation is rather inadequately integrated with work on dance and music as elements of the Indigenous cultural ecosystem. Although the ICH Convention of 2003 mentions language, it is only included as a vehicle of intangible cultural heritage. No safeguarding measures are designed specifically for language revitalisation as an element of living heritage that the Convention seeks to transmit from generation to generation.
Additionally, cultural policy frameworks such as the National Arts, Culture and Heritage Policy of Zimbabwe (2020), the Local Cultural Policy Framework of South Africa (2009), and other national policies for safeguarding culture and language in Southern Africa acknowledge the importance of language as a repository of living heritage, but they are implemented in a top-down approach, scantily executed, and inadequately understood, and they do not integrate language and culture sustainability. To address some of the challenges associated with the implementation of the ICH Convention of 2003 and national cultural policies in Southern African countries, we propose that an indigenous humanistic approach, based on the traits of ubuntu/unhu (social relations, respect for humanity and moral ethics), could provide a framework for developing a community-based approach to cultural vitality grounded on the social context of performance, usage of language, inclusive participation, indigenous protocol, social values, moral ethics, respect for humanity and community social responsibility. Such an approach would create a platform for linking Indigenous language revitalisation to dance and music safeguarding as elements of the same cultural ecosystem in a culturally appropriate, sensitive, and specific way.
Music ecosystems: a conceptual framework
The concept of 'cultural ecology' has been used in anthropology since the 1950s. It refers to the study of human adaptations to physical and social environments. The term ecology in ethnomusicology is more recent and has brought new terms such as music sustainability and ecosystems. Music sustainability refers to the conditions which sustain musical knowledge, sounds, practices, styles, and expressions as well as cultures closely interlaced with them over time (Titon 2009; Schippers and Bendrups 2015). At the same time, music ecosystems are the conditions that enable music to thrive in communities. From an indigenous perspective, the music ecosystem involves the interaction of musical and non-musical elements (including language) and living and non-living beings (living-dead or ancestral spirits) in the performance of musical arts as cultural heritage for community sustainability (Gwerevende 2022). While the sustainability of Indigenous music ecosystems has been of interest to scholars since the early development of music anthropology and ethnomusicology over the past century, ethnomusicologist Jeff Todd Titon first proposed a detailed ecological perspective on music and sustainability (Titon 2009). He submitted the analogy that music traditions behaved as ecosystems and expanded the dominant paradigm from the twentieth-century ecology of the ecosystem. His work may not be the first linking of ecology with music research, but after his work, the idea that the music field can be thought about in ecological terms became more widespread.
According to Titon (2009), a musical ecosystem involves individuals and groups interacting around a particular genre or style of music. The music ecosystem's inputs, processes and outputs are primarily contained in the language of the people to whom it belongs. Therefore, to preserve the meaning of choreomusical practices and ensure the sustainability of the cultural ecosystem, the promotion and development of the Indigenous languages that 'created' the values, experiences and principles need to be considered as crucial measures in safeguarding the living cultural heritage. Corresponding to cultural and linguistic diversity, the types of Indigenous music traditions are varied and extensive in style, ranging from traditional practices to mixtures of Indigenous musical knowledge, practices, and ideas with numerous kinds of popular music and ever-expanding new styles of Indigenous music traditions. Many of these music cultures are expressed in local languages and performed together with other arts like drama, poetry, and dance in specific sociocultural contexts. A music ecosystem of any kind includes 'both physical and cultural factors of the musical environment such as ideas about music, sound and sound-producing instruments, recording studios, media, venues, musical education and transmission, and the economics of music -indeed music as cultural production and a cultural domain -which relate to the health of musical individuals, populations, and communities' (Titon 2009, 120). This ecological approach to music focuses on the elements, patterns, and relationships within the overall system, showing how ecosystem elements interconnect. Considering the breadth and width of Indigenous music ecosystems, African music sustainability should focus on sustaining Indigenous performing arts (dance, music, drama) and their associated languages, which are faced with severe ecological and social challenges caused by imperialism, colonialism, globalisation, and climate change.
The linguistic foundation of dance and music: a cross-cultural perspective
There are several Indigenous music and dance styles and ways of conceptualising them in local languages. In many Indigenous African communities, there are no generic terms for dance and music but specific local terms for different social events involving dance and music performance. According to Gwerevende (2020) and Rutsate (2011), the local terms used, such as mutambo in Shona and mitshino in Tshivenḓa, refer to the broader view of dance and music that incorporates the context, singing, invocative drumming, bodily movements, ritual cues, ululations, handclapping, and handheld objects that enhance the cultural performance. This way of contextualising Indigenous cultural practices gives them a broader scope than the English terms music and dance, which explains why no Indigenous word is equivalent to the English concept of music or dance. The non-existence of Indigenous terms comparable to the Eurocentric conceptualisation of dance and music has also been noted by Dave Dargie, who argues that 'There are simply no words in use in the Lumko district (outside of church and school) of the Xhosa to express abstract concepts such as music, melody, note and rhythm' (Dargie 1988, 62). The terms noted by Dargie among the Xhosa people were all related to something a person does when performing music, such as ukuhlabela, which means to lead a song. The Indigenous African dance and music genres are defined by the social functions they serve and the social context in which they are performed, such as those provided in Table 1 below.
As shown in Table 1, it should be noted that the cultural value and meaning of Indigenous traditions in Southern Africa are carried by their local names, such as muchongoyo of the Ndau people, traditionally performed for war preparation and celebration, and Maskandi of the Zulu people, performed for wedding celebrations and courtships. Moreover, the cultural value of heritage (knowledge, skills, meaning) that should be safeguarded is stored in the language in which a particular expression has been created and still functions. As Wa Thiong'o (1986) argues, language is a very important vehicle for understanding culture and worldview, such that Africanist scholars conversant with African languages and who grew up in African cultures must lead the decolonisation of the mind crusade. The Indigenous conceptions of the events in which choreomusical heritage is performed would not be possible without a specific Indigenous vocabulary. The names of the social events are determined by the practitioners' language, purpose of the event, participation, and cultural context. For instance, among the Vatsonga people in South Africa, Zambia and Zimbabwe, dance and music are components of nkelekele (rainmaking ceremony). According to Babane and Chauke (2015), nkelekele has always been practised among Vatsonga to ask for rain and manage drought. Music and dance are Eurocentric abstract concepts, whereas nkelekele is a social event performed by Vatsonga. Another problem with the terms 'music' and 'dance' is that they refer to products rather than processes (Rice 2014). Such terms do not capture the holistic nature of the social events in which performing arts are performed, the interactions between performers (dancers, drummers, singers and the active audiences) and the significance they attach to these events. Postcolonial Africa is saddled with tribalised linguistic and cultural information, perpetuating certain exaggerated and false assumptions about Africans embedded in the colonial archive (Wa Thiong'o 1986; Mudimbe 1994). Writing about the uhadi music tradition from the Ngqoko district of South Africa, Dave Dargie explains: 'Music is an abstraction: (whereas) a song is something performed by people' (Dargie 1986, 10). Indigenous African choreomusical languages are culturally embedded and socially constructed through their social functions enacted through context-based performance. Mary Douglas, one of the leading Africanist anthropologists of the twentieth century, argued that there are many instances where the English and French languages do not have the vocabulary that appropriately describes some African cultural practices (Douglas 1967). This observation may help explain why most languages in several African societies have no specific local terms equivalent to music or dance.
Indigenous African dance and music are expressed in lexical tone languages, which rely on the tonal contour of words to indicate meaning, purpose, and context. To most Indigenous communities in Southern Africa, the concepts of music and dance, which Westerners may describe as organised sound (music) and movement (dance), are redundant abstractions, as they are extensions of language history. The idea of music without language is not known, such that instruments are described as singing parts, such as the hlabela (leader) or lendela (follower), rather than playing notes (Chapman 2007, 53). Even drum ensembles from West Africa, such as those described by Stone (2005, 96), base their patterns around vocalisations: 'Words underlie rhythmic patterns'. Dargie (1988, 62) describes the inseparable integration of words, movement and instrumental performance as a gestalt, a singularly perceived whole. It is important to note that the choreomusical vocabulary in Indigenous African cultures does not indicate a lack of aesthetic values, abstraction, or ability to abstract; instead, it shows the degree of emphasis on social participation, meaning and context in the performance of indigenous African musical traditions. Therefore, the metalanguage of dance and music in most Indigenous communities, such as the Shona, Ndau, Chewa, Venda, Xhosa, Karanga and Zulu, is connected to the social function and cultural contexts. Hence, a socially and culturally embedded metalanguage may be beneficial in describing Indigenous choreomusical processes and activities related to participation and social interaction in performance.
Language plays a significant role in the performance and transmission of Indigenous musical heritage because the songs, ululations and other vocal expressions are performed in the language of the practitioners. Dance as an aspect of musical heritage is also performed choreographically and linguistically through concepts and terms in the Indigenous vocabulary. In most Indigenous communities in Southern Africa, for example, the Venda people in South Africa and Zimbabwe, the description of the community as Venda refers to the language and culture of the Venda people. Tshivenḓa refers to the totality of the Venda culture made up of interconnected music, dance, language, symbols, rituals, beliefs, and myths, which constitute enacted systems for making meaning and sense of the way of life of the Venda people (Gwerevende 2020). Thanasoulas (2001) suggests that language does not exist apart from culture, from the socially inherited assemblage of practices and beliefs that determine our lives. Music and dance are components of Indigenous people's living heritage, and language is critical to the cultural past, present and future and a guide to social reality. Stern (2009) views culture through a more interactive design, stating that it is a response to need, and believes that what constitutes culture is its response to three sets of needs: the basic needs of the individual, the instrumental needs of the society, and the symbolic and integrative needs of both the individual and the community. Music and dance, as elements of Indigenous cultural ecosystems, are communicated or transmitted orally in the language of the culture bearers. According to Patterson (2015, 4), oral culture refers to what is spoken and sung, and aural culture refers to what is heard. Indigenous dance and music traditions use performative, aural and oral transmission methods. These methods are essential for effectively transmitting choreomusical heritage and are almost always simultaneously present in Indigenous communities. Therefore, the sustainability of Indigenous musical ecosystems in Southern Africa is impossible without considering the role of local languages as a form of expression of cultural heritage and the means of its performance.
Cultural heritage: language as a vehicle and repository
The continuing losses of cultural diversity around the world remain problematic for the safeguarding of living cultural heritage. In the international cultural policy framework, for example, the ICH Convention of 2003, ethnomusicology and ethnochoreology, Indigenous dance and music traditions have been pursued separately from the languages of their practitioners, a situation which seems perplexing when we consider the significance of indigenous knowledge management systems in the maintenance of biocultural diversity in many areas now 'protected' for nature (Rotherham and Bridgewater 2019). To address these broad issues, fundamental to future cultural sustainability, this article considers cultural diversity as a framework which stresses the importance of language as the repository of Indigenous choreomusical knowledge, practice, and heritage. According to the UNESCO Universal Declaration on Cultural Diversity of 2001, cultural diversity is 'as necessary for humanity as biodiversity is for nature' (Article 1). Article 3 posits that this principle ought to be comprehended within the context of economic expansion, serving as a vehicle for attaining a more gratifying intellectual, emotional, moral, and spiritual livelihood. It also implies a commitment to Human Rights and Fundamental Freedoms, particularly those of Indigenous Peoples (Article 4). This diversity is embodied in the plurality and uniqueness of the cultural identities of the communities making up humanity. Moreover, it implies a commitment to fundamental freedoms and human rights, particularly the rights of minority communities and Indigenous peoples.
Cultural diversity is expressed by language, dance, and music traditions as cultural heritage components. While Indigenous communities adapt to socio-economic changes, their local languages help them to encode, convey and maintain the knowledge of their cultural ecosystems, which involves diverse performing arts. These arts are shaped by and adapted to the socioecological environment and serve as a transmitter of a specific reality (Maffi 2005, 605). Consequently, when speaking about cultural diversity, we need to recognise that it is not only the religious, political, environmental, and social factors that shape it, but it is also influenced and inhabited by the linguistic ecology. It can be further argued that since Indigenous knowledge of cultural ecosystems is implicit in the languages of their inhabitants, the natural environment can also be affected indirectly by the loss of a language (Maffi 2005, 601-603). Maffi further argues that language transmits concepts that cannot be expressed in a different 'code system' and thus represents a repository of the cultural memory of people (Maffi 2005). Dance or music as living culture exists through memory. Therefore, the preservation of linguistic diversity is directly connected to the sustainability of communities and Indigenous choreomusical practices (Maffi 2007; Skutnabb-Kangas and Phillipson 2010).
Language is the instrument of conceptualisation and categorisation of living cultural heritage and, in general, the method of intellectual comprehension of reality, reflecting the nature of cultural performances, contexts and meaning of indigenous performing arts in African communities. It is a natural substrate of cultural heritage, a means of fixing ethnic perception of the world and optimising intercultural interaction, and a form of ICH alongside music and dance traditions. However, article 2 (1) of the 2003 UNESCO Convention, which defines ICH for safeguarding, does not explicitly mention 'language' as a cultural heritage. Nevertheless, it states that cultural heritage is transmitted from generation to generation and constantly re-created. Languages are also transmitted from generation to generation, recreated continuously, presuppose knowledge and skills, and speech acts can be described in terms of linguistic practices and expressions (Smeets 2004). The ICH Convention mentions language as a vehicle for oral traditions and cultural expressions (UNESCO 2003, 2). Although the ICH Convention does not explicitly refer to language as a form of living cultural heritage, we argue that Indigenous languages represent people's living cultural heritage, as they display all the traits to be regarded as ICH. For example, they are transmitted from one generation to another; constantly recreated; speech can be treated as linguistic practices and expressions; and language bestows identity upon people in the same way social practices, rituals, or indigenous knowledge do (Smeets 2004).
Furthermore, UNESCO has inscribed various languages on its lists of intangible cultural heritage. Examples include the Language, Dance and Music of the Garifuna, inscribed in 2008 on the Representative List of the Intangible Cultural Heritage of Humanity after being nominated by Belize, Guatemala, Honduras, and Nicaragua (UNESCO 2008). In 2009, UNESCO also recognised as a best safeguarding practice a multinational initiative submitted by Bolivia, Chile, and Peru, titled 'Safeguarding intangible cultural heritage of Aymara communities in Bolivia, Chile and Peru' (UNESCO 2009). The initiative targets all domains of ICH, including language, and presents as one of its main areas: 'strengthening language as a vehicle for transmission of the intangible cultural heritage through formal and non-formal education' (UNESCO 2009). What is interesting about this project is that it is interdisciplinary, as it involves safeguarding measures to ensure the viability of oral expressions, language, dance, music, and traditional knowledge. Another example is China's nomination of the Hezhen Yimakan storytelling tradition aimed at revitalising the Hezhen language. The nomination was approved by UNESCO when it inscribed this storytelling tradition in 2011. At present, only the elders can speak their native language, while most adults and teenagers have lost their mother tongue and have increasingly become strangers to the legacy of their ancestors (UNESCO 2011). In this case, the Hezhen language was safeguarded as an essential repository for living cultural heritage and a vehicle for expressing and transmitting the Yimakan tradition, which was on the verge of disappearance. Although UNESCO has recognised numerous language repositories as cultural heritage, language itself is not explicitly listed as an Intangible Cultural Heritage (ICH) domain in Article 2.2 of the Convention. Nonetheless, we contend that language should be considered a form of living cultural heritage.
Indigenous communities require support from both local and national authorities, and potentially international intervention, to safeguard languages as a means of preserving their cultural heritage. The specialised lexicon in use among practitioners, especially in Indigenous knowledge, handicrafts and performing arts, may need to be collected to preserve the knowledge concerned and promote its transmission. Music dictionaries used in schools, colleges and universities in most African countries are skewed towards Western music or only contain terms related to Western music. For endangered Indigenous languages, dictionaries are essential resources that cover understandings outside the meaning of words and provide insights into language structure and indigenous cultural knowledge. Hawaiian linguist Candace Galla argues, 'Dictionary-making is one of many initiatives that can create capacity in Indigenous communities to document and (re)access language again to reclaim and revitalise knowledge and identity' (Galla 2020). Previously, some work has been done in Southern Africa to collect and explain the vocabulary of regional musical forms, instruments, and practices (Smit 1992). For instance, in 2000, Reino Ottermann published a dictionary of music titled 'Suid-Afrikaanse Musiekwoordeboek/South African Music Dictionary'.
In its introduction, the dictionary states that it 'concentrates on terms from Western music culture. A small number of terms used in the Indigenous African musics in South Africa, which pupils and students will commonly encounter, have nevertheless been added' (Ottermann 2000, 6). This statement reveals a patronising (if not downright offensive) attitude (King and Steyn 2003). The dictionary assumes that twenty-first-century South African music researchers, teachers and students are supposed only to know about 'Western music culture' plus a token smattering of Indigenous African musical terms thrown in for good measure -this in a hard-won pluralistic and democratic society (King and Steyn 2003). Beyond this dictionary's inclination towards Western music, there is no comprehensive book covering Indigenous music terms in local languages such as Zulu, Xhosa, Venda, Swati, and Tswana. Since oral tradition predominates in most African communities, preserving it in written and digital form for future generations is expedient and essential to prevent cultural extinction (Kamtchueng 2019). Lexicologies have been employed in Cameroon to protect cultural heritage. Kamtchueng further argues that literary works of Anglophone Cameroonian authors have made it possible to highlight the various facets of this cultural heritage by including lexis that falls under the categories of traditional events and songs; traditional products and titles; foods, local dishes, and drinks; and socialisation, relations, and acquaintances.
Mother language is an essential carrier of indigenous knowledge, norms and values, often used in performing rituals or ceremonies and in practising and transmitting living cultural heritage, especially in oral cultures. Using their mother language, Indigenous practitioners of specific traditions often use highly specialised sets of lexicons, concepts, terms and expressions, which reveal an intrinsic relationship between language and the ICH. Therefore, Indigenous performing arts are expressed in specific language registers, which must be safeguarded together with the traditions. For example, epics often abound with aspects and expressions that need study and special attention in transmission processes. Documentation may also be required for the transmission of the expressions in question. In exceptional cases, such as the Mhande dance and music of the Zezuru people in Zimbabwe, proclaimed a masterpiece by UNESCO in 2008, the language used in the representations is fundamentally different from the everyday language of the bearers of the tradition, which is Shona. The determination of language planning actions to preserve an endangered indigenous language should not only depend on the intangible cultural heritage to be protected. Those affected by the language extinction should also decide whether the conservation efforts should target a small group or include wider groups.
The extinction of specialised lexicons means a loss of important local knowledge systems. The Chichewa or Chinyanja language carries sacred and secret knowledge connected to the Gule Wamkulu tradition (also known as Vilombo or Zilombo, meaning the world of beasts), which is a ritual performed by members of the Nyau secret society in several countries in Southern Africa. Chichewa is a language of the Bantu family that is widely spoken in parts of Central, East, and Southern Africa, particularly in Malawi, Mozambique, Zambia, and Zimbabwe. To promote the language, the Government of Malawi designated Chichewa the national language in 1968 and established a Chichewa Board, which oversaw its safeguarding and coordinated research into grammar, usage, linguistic structures, spelling, songs, folklore, idioms, and other aspects. Gule Wamkulu, literally 'the big dance' or 'the dance of the elder', is a performance of the Nyau secret society that is central to the education of male youth and to ritual ceremonies (Kambalu 2016). The performance of Gule Wamkulu is associated with certain rules. For instance, the performers are not allowed to disclose the proceedings of the initiation ceremony to the public. The dancers -initiated Nyau men -wear masks and costumes made of banana leaves. This attire is meant to hide the dancer's identity.
UNESCO declared Gule Wamkulu an Intangible Cultural Heritage of Humanity in 2005, and it 'is now part of our world heritage since 2005, one of UNESCO's 90 Masterpieces of the Oral and Intangible Heritage of Humanity' (Boucher 2012, 257). The nomination forms for the tradition were written in English rather than in Chichewa. Although the documentation was done in Chewa, the interviews were transcribed verbatim and translated into English. The translation resulted in the loss of specialised lexicons that carry Nyau secrets and sacred knowledge. The loss of knowledge derives from the attempts to translate Gule Wamkulu terms, concepts, techniques, and specialised lexicons from Chewa to English directly without appreciating the deeper meaning of the symbolism involved in the performance of the secret ceremony. Language is the most important conveyor of meaning and culture, elements which are often lost in translation, especially when such translation is across languages from distant cultural zones (Chirikure 2017). Therefore, translating the Chewa language into English or French, the working languages of UNESCO, when writing about Gule Wamkulu for documentation and safeguarding purposes distorted the meaning of local words and traditions. The transmission of Indigenous knowledge among the Chewa is done orally and communally through participation in and observation of the performances in cultural contexts.
Policy framework for safeguarding cultural heritage in Southern Africa
Numerous national and international cultural and indigenous language conventions form the cultural policy framework for safeguarding linguistic and choreomusical heritage in Southern Africa. The cultural policies were designed in line with the regional and international agreements that originated from the need to protect choreomusical practices and linguistic expressions as forms of ICH, thereby promoting cultural and linguistic diversity across national signatories to the conventions (UNESCO 2003). The most prominent documents related to ICH include the UNESCO Convention for the Safeguarding of Intangible Cultural Heritage (2003) and the UN Declaration on the Rights of Indigenous Peoples, UNDRIP (2007). The UNDRIP asserts that the right of Indigenous peoples to their languages is inherent, an intangible possession of the peoples who speak them (UNDRIP 2013). The advancements in international human rights rules are vital in preventing further linguistic and cultural heritage loss. What needs to be understood is that the UNDRIP embraces numerous international human rights instruments, making it more than just aspirational (UNDRIP 2013). However, the UNDRIP expands on the already-existing human rights of Indigenous peoples rather than establishing new ones.
The UNESCO Convention of 2003 articulates the urgent need for measures to ensure the viability of ICH worldwide, including languages as oral expressions and performing arts such as dance and music. The Convention's actions for safeguarding ICH include identifying cultural expressions that need support and activities relating to documentation, research, protection, promotion, transmission, and revitalisation. It is the first binding multidimensional Convention for safeguarding ICH that reinforces existing international instruments, resolutions and recommendations concerning cultural heritage. The ICH Convention serves as a framework for developing national policies that reflect current global models and strategies for safeguarding ICH. It has been created to promote the safeguarding of ICH, ensure better visibility of ICH, raise awareness of its importance, and encourage a dialogue respecting cultural diversity. Between 2009 and 2017, the Government of Flanders supported several Sub-Saharan African countries with a grant to support the implementation of the 2003 Convention. The grant resulted in a series of pilot projects to safeguard ICH at the grassroots level in several African countries, such as Botswana, Malawi, South Africa, Swaziland, Uganda, Zambia, and Zimbabwe. One of the projects is titled 'Safeguarding intangible cultural heritage in basic education in Namibia and Zimbabwe' (2022-2024). The project focused on capacity building to promote the transmission of living cultural heritage in schools through teaching minority indigenous languages, performing arts and indigenous knowledge systems, which were not part of the education curriculum in most Southern African countries.
The UNDRIP, adopted by the General Assembly in 2007, unlike the ICH Convention, explicitly mentions languages in article 13. It says, 'Indigenous peoples have the right to revitalise, use, develop and transmit to future generations their histories, languages, oral traditions, philosophies, writing systems and kinds of literature, and to designate and retain their names for communities, places, and persons' (UN 2007). In addition to the specific references to Indigenous languages, the UNDRIP considers a plethora of other rights relevant to this issue, including the right of Indigenous peoples to practise, safeguard and revitalise their cultural traditions (art. 11); to teach their cultural heritage and religious practices (art. 12); and to preserve their cultural expressions (art. 31). The UNDRIP also captures several ICH elements connected to indigenous languages, such as dance, musical arts, and religious rituals. In addition, Southern African countries such as Malawi, Zambia, and Zimbabwe endorsed the UNDRIP to show their commitment towards promoting indigenous rights and interests, rights to cultural, religious, spiritual, and linguistic identity, and self-determination.
The national constitutions have substantially influenced the creation and implementation of policies for safeguarding cultural heritage in South Africa and Zimbabwe. On the one hand, the Zimbabwean Constitution, under Article 16, says, 'it is the obligation of the State and all its institutions and agencies and indeed all Zimbabwean citizens to preserve and protect Zimbabwe's Cultural Heritage while at the same time respecting the dignity of traditional institutions' (Government of Zimbabwe 2013). The National Arts, Culture and Heritage Policy (2020), designed in line with the national constitution, aims to create a progressive, cohesive, and culturally vibrant society where cultural heritage and various artistic expressions, performing arts and indigenous languages celebrate the nation's diverse heritage. On the other hand, the South African Constitution, under the language clause supported by the Bill of Rights, recognises language as a fundamental human right: 'Everyone has the right to use the language and participate in the cultural life of their choice, but no one exercising these rights may do so in a manner inconsistent with any provision of the Bill of Rights' (section 30). Apart from the constitution, the national policy on living cultural heritage emphasises the importance of recognising indigenous languages as central to the effort of heritage management in South Africa (Department of Arts & Culture 2009, 36). The cultural policy deals with cultural traditions, customs, religion, identity, language, crafts, and art forms, including music, dance, creative writing, theatre, photography, and film, as the sum of the results of human endeavour. The same goal is shared by South Africa's Language Policy, through which the government committed in its Policy Statement to 'ensure redress for the previously marginalised official indigenous languages' (Department of Arts & Culture 2003). The South African and Zimbabwean cultural policies also advocate transmitting living heritage from generation to generation. They also encompass several ways to document and revitalise the harmonious combination of arts, language, and cultural heritage as catalysts for sustainable development.
Although several national and international policies are relevant for protecting Indigenous linguistic and choreomusical heritage in Southern African countries, mainly South Africa and Zimbabwe, implementing these policy instruments is associated with loopholes that cause adverse implications, such as heightened disagreement among cultural practitioners about the sustainable future trajectory of living cultural heritage. The top-down approach to implementing these policies privileges the visibility of the cultural heritage and indigenous languages of specific ethnic groups. For example, the African Union (AU) agreed that 'language is at the heart of a people's culture' (OAU 1986) and that social and economic development can be accelerated using indigenous African languages. The African Union declared that each African state should promote the use and development of every language within its borders. However, this overlooks the reality that some indigenous African languages suppress and dominate other indigenous languages of Africa (Nhongo 2013). A case in point is that of Shona and Ndebele in Zimbabwe, which dominate other languages that are now labelled as minority languages. From a critical stance, language policy can be construed as political, ethnic, and cultural domination (Wright 2004). In essence, language is power, and control over people's language practices is a significant expression of political and cultural hegemony. Such musical and linguistic heritage preservation approaches risk being undermined by a complex set of issues, for example, a lack of the grassroots understanding, resources, control, and ownership that typically characterise approaches developed and implemented at the community level (Grant 2013). For this reason, the strategies for cultural heritage preservation are, in some cases, associated with systematic challenges to the very choreomusical traditions and indigenous languages they intend to promote and protect.
Ubuntu/unhu-based transmission of living cultural heritage
The centrality of communities, groups and individuals is highly emphasised in the ICH Convention of 2003 and the other policies discussed previously. According to the Convention, communities, in particular Indigenous communities, play an essential role in the production, safeguarding, maintenance and re-creation of ICH (Preamble); only communities can recognise particular practices, representations, expressions, knowledge or skills as their ICH (Article 2.1); and any cultural sustainability initiative should 'involve the communities concerned in safeguarding activities and management of their ICH' (Article 15). To bring communities to the centre of cultural sustainability, we propose a humanistic approach based on the philosophy of ubuntu/unhu that could help design and implement approaches grounded on cultural context, inclusive participation, social values, moral ethics, respect for community members and their cultural values, and social responsibility. Ubuntu/unhu can be described as the capacity in African culture to express dignity, compassion, humanity, reciprocity, inclusivity, respect, and mutuality in the interests of maintaining communities with justice and mutual caring (Nussbaum 2003; Gwerevende 2020). It is an epistemological and ethical concept shared by many ethnic communities in Southern Africa. Ubuntu/unhu is centred on social relations, group solidarity, interconnectedness, and moral values central to the survival of Indigenous African communities and the sustainability of their cultural heritage as living traditions.
Ubuntu/unhu can play an essential role in safeguarding cultural heritage, as it shapes the internal social activities that involve the performance of dance and music and the use of Indigenous languages. Knowledge of ubuntu/unhu and its inclusion in cultural sustainability could be a recipe for the holistic and sustainable transmission of dance, music, and Indigenous languages as features of the same cultural ecosystem. We considered an ubuntu/unhu-grounded approach to safeguarding choreomusical and linguistic heritage in and outside communities as a decolonising strategy that supports the contextual and holistic revitalisation and documentation of cultural heritage. In addition, ubuntu/unhu can help reassert or reaffirm Indigenous sovereignty, whereby 'Indigenous people reclaim their past, present, and future' (George et al. 2020, 3). By re-establishing Indigenous sovereignty, the humanistic model based on ubuntu/unhu seeks to establish Indigenous terms and conditions for the safeguarding of language, dance, and music as elements of living cultural heritage, based on respect, trust, and genuine reciprocity.
Given the importance of language as a carrier of culture, it is essential to advocate for its documentation and revitalisation (i.e. encouraging the continued use of the language) in the contexts where it is used, such as community rituals and music and dance performances. To achieve fluency in Indigenous languages and proficiency in musical arts performance, researchers, learners, and teachers need to rely on methods based on ubuntu/unhu, such as close relationships with community elders, speakers, dancers, and musicians, and participation in cultural events in which the music and dance are performed and the language is spoken. These community-based modes of transmission promote a thorough understanding of how Indigenous music practices, dances and languages are intertwined. Through ubuntu/unhu-based cultural events, dance, music, and language are transmitted in contextualised cultural settings and through critical social and intergenerational interactions. Leuthold (1998, 93) claims that 'for obvious reasons, Indigenous dance songs can only survive in the fullest sense when native languages survive'. These songs, in most cases, are inseparable from dance, and they depend on the language for the composition of new lyrics and meanings connected to the performance context and beyond word-for-word translations facilitated by dictionaries. Furthermore, context-based music education incorporates musical content into the classroom and includes non-musical elements such as dance, visual arts, language, socio-cultural values, and the environment. Music is also an essential method for revitalising Indigenous languages and dance, and it should occupy centre stage in safeguarding endangered languages and dance traditions so that they survive in the fullest sense.
Safeguarding ensures the long-term viability of intangible heritage within communities and groups. It is defined in the Convention as 'measures aimed at ensuring the viability of the living cultural heritage, including the identification, documentation, research, preservation, protection, promotion, enhancement, transmission, particularly through formal and non-formal education, as well as the revitalisation of the various aspects of such heritage' (UNESCO 2003). Language, dance, and music interact constantly, and musical competence alone is not enough: learners also need proficiency in the language of the music and in the dance associated with it. A community performer in Venda culture, for example, is a multi-skilled performer who can dance, play ngoma (drums) and sing in the Tshivenda language within a single performance, such as tshigombela. The development and implementation of national cultural policies should reflect the interdependence of dance, music, and Indigenous languages. The methods of language transmission also involve musical expressions such as songs, ululations, yodelling, poetry, and vocables, which promote the incorporation of cultural knowledge into the education system and other strategies for language safeguarding. Vocables are sung, spoken or written syllables that have no semantic meaning (Chambers 1980); they appear as an element of almost every style of Indigenous singing in several Southern African communities. In addition, many educators and language policymakers need to understand that knowledge of a specific language, such as grammatical competence, must be complemented by culture-specific expressions, such as dance, music, and poetry, to enhance cultural competence through language education and safeguarding strategies.
Collaboration in transmitting dances, music practices and languages promotes the coordination of safeguarding efforts across applied ethnomusicology, ethnochoreology and linguistics. Music or dance education is not a matter of educators simply explaining to learners how things are; it is essential to let learners learn the language of the music tradition and make informed participant observations, as ethnographers would do during fieldwork. By drawing on firsthand experience and on the language of the cultural heritage, learners or researchers can see and understand a specific cultural performance beyond choreomusical terms and concepts. They can also understand the underlying cultural processes and linguistic techniques that cultural practitioners of a particular music and dance tradition use to produce, perform, and interpret choreomusical experiences, including unspoken assumptions, collective cultural knowledge realised through ubuntu/unhu, and meanings transmitted orally. Finally, an ubuntu/unhu-based approach to transmitting cultural heritage emphasises the integrated implementation of music, dance, and language sustainability from an interdisciplinary perspective.
Conclusion
This article argues that the relationship between Indigenous dance, music and languages may create collaborative platforms, policies, and initiatives for safeguarding linguistic and choreomusical heritage as forms of living culture. While the ICH Convention of 2003 is essential for safeguarding Indigenous living cultural heritage and promoting cultural rights and linguistic diversity in Southern African countries, it has several limitations owing to, among other factors, a failure to link the performing arts and language as elements of the same cultural ecosystem and a lack of understanding of the embedded cultural context and philosophical underpinnings. Furthermore, the ICH Convention mentions language only restrictively, in the first of the domains listed: 'oral traditions and expressions, including language as a vehicle of the intangible cultural heritage' (Article 2.2). This wording represents a compromise between the views of countries that do not want to acknowledge language as a domain of living cultural heritage and those that want languages to be included as a form of ICH listed in Article 2.2. These problems, among others, represent a pushback against the strong relationship between Indigenous language, performing arts and cultural identity that characterises African traditions, as expressed in the Cultural Diversity Declaration.
This article has advocated for the reconstruction of the cultural ecosystem by proposing a community-based model that links Indigenous language revitalisation work with the safeguarding of music and dance practices in Southern Africa. We argued that using local terms and languages when writing about Indigenous performing arts for documentation and transmission is one way of achieving holistic transmission of Indigenous living heritage. More importantly, for community-based safeguarding approaches to be practical, local communities must play a role in designing revitalisation strategies and compiling ICH nomination forms for UNESCO inscription. Without this, the views of cultural policymakers, experts and Indigenous communities may remain worlds apart as far as the safeguarding of dance, music, and languages as components of the same cultural ecosystem is concerned. Applied ethnomusicologists, ethnochoreologists and linguists should collaborate with cultural practitioners to develop sustainable and culturally sensitive models for Indigenous communities to safeguard their culture and improve their livelihoods and the viability of their cultural heritage. Such models may prove a thoughtful and decolonial step towards indigenising cultural sustainability and helping local communities preserve their choreomusical and linguistic heritage for sustainable development.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Solomon Gwerevende is a PhD candidate in Applied Ethnomusicology at Dublin City University in Ireland. He holds a Choreomundus International Master's Degree in Dance Knowledge, Practice and Heritage, jointly offered by the University of Clermont Auvergne, France; the Norwegian University of Science and Technology, Norway; the University of Szeged, Hungary; and the University of Roehampton, United Kingdom. He also holds a first-class Master of Arts in Ethnochoreology from the University of Limerick, Ireland. Zama M Mthombeni is a PhD candidate in Development Studies at the University of KwaZulu-Natal. She currently works as a chief researcher for the Human Sciences Research Council. Her research interests and publication record are in social justice in education, language policy and development sociology. She holds a Master's degree in Public Policy and a Master's degree in Commerce (Local Economic Development), both acquired at the University of KwaZulu-Natal in South Africa.
Table 1. Indigenous choreomusical genres and their social functions. | 2023-03-31T15:03:45.870Z | 2023-03-29T00:00:00.000 | {
"year": 2023,
"sha1": "16fc7237ad0c921e5e60a5cc1198a9c903b38f9a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/13527258.2023.2193902",
"oa_status": "CLOSED",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "144291998fd8573e82e9ac6fd0cd2749ce75e356",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
7591906 | pes2o/s2orc | v3-fos-license | Chromosomal abnormalities in 163 Tunisian couples with recurrent miscarriages
Recurrent miscarriage (RM) is defined as three or more consecutive pregnancy losses before 24 weeks of gestation. Parental chromosomal abnormalities represent an important etiology of RM. The aim of the present study was to identify the distribution of chromosome abnormalities among Tunisian couples with RM referred to the Department of Cytogenetic at the Pasteur Institute of Tunis (Tunisia) during the last five years. Standard cytogenetic analysis was carried out in a total of 163 couples presenting with two or more spontaneous abortions. Karyotypes were analyzed by R-banding. We identified 14 chromosomal abnormalities including autosomal reciprocal translocation, Robertsonian translocation, inversion, mosaic aneuploidy and heteromorphism. The overall prevalence of chromosomal abnormalities was 8.5% in our cohort. This finding underlines the importance of cytogenetic investigations in the routine management of RM.
Introduction
Recurrent miscarriages are post-implantation failures in natural conception; they are also termed habitual abortions or recurrent pregnancy losses [1]. Recurrent miscarriage (RM) is defined by some authors as three or more pregnancy losses before 20-24 weeks and has been considered a distinct disease entity [2][3][4][5][6][7]. In 2005, the European Society of Human Reproduction and Embryology (ESHRE) introduced a revised terminology for early pregnancy events. A pregnancy loss that occurs after a positive urinary human chorionic gonadotropin (hCG) test or a raised serum β-hCG, but before ultrasound or histological verification, is defined as a "biochemical loss"; in general, these occur before 6 weeks of gestation. The term "clinical miscarriage" is used when ultrasound examination or histological evidence has confirmed that an intrauterine pregnancy has existed. Clinical miscarriages may be subdivided into early clinical pregnancy losses (before gestational week 12) and late clinical pregnancy losses (gestational weeks 12 to 21). There is no consensus on the number of pregnancy losses needed to fulfill the criteria for recurrent miscarriage, but ESHRE guidelines define RM as three or more consecutive pregnancy losses before 22 weeks of gestation [1].
Recurrent miscarriage occurs in approximately 3% of women with diagnosed pregnancies [8] and affects about 1-3% of women during their reproductive years [9]. Various etiologies, either alone or in combination, have been proposed to contribute to pregnancy loss, including uterine malformations, infections, maternal thrombophilic disorders, immune dysfunction, various endocrine disturbances and parental chromosomal anomalies [10]. Among these etiologies, genetic factors appear to be highly associated with reproductive loss [11,12]. In 50% of couples, no specific cause can be identified, and they are regarded as having idiopathic or unexplained RM [12]. In 29% to 60% of cases, RM may be caused by chromosomal aberrations in the embryo [13].
It is generally assumed that chromosomal anomalies found in the fetus are due to a balanced aberration in one of the parents being inherited by the offspring in an unbalanced form [13]. A chromosomal abnormality in one partner is found in 3% to 6% of RM couples, which is ten times higher than in the background population [7]. The karyotype changes include balanced reciprocal translocations, Robertsonian translocations, gonosomal mosaicism and inversions [9]. The aim of this study was to identify the types of chromosomal abnormalities in couples with two or more recurrent miscarriages referred to the Cytogenetic Department of the Pasteur Institute of Tunis.
Methods
Between January 2011 and December 2016, 163 couples with ≥ 2 recurrent miscarriages were referred to the Cytogenetic Department of the Pasteur Institute of Tunis from different parts of the country. All of these patients underwent a complete clinical assessment, including a complete medical and gynaecological history, in order to exclude immunologic causes, uterine malformations and other causes of recurrent abortion. Written informed consent was obtained from all participants.
Metaphase chromosome preparations from peripheral blood cultures were made according to standard cytogenetic protocols. Cytogenetic analysis was performed by RHG banding. Twenty metaphases were analyzed for each patient; in cases of abnormality or mosaicism, the analysis was extended to 50 metaphases. Chromosomal abnormalities were reported according to the International System for Human Cytogenetic Nomenclature (ISCN 2009). Chi-square (χ²) and Fisher exact tests and linear regression were used to examine the significance of associations (P < 0.05), performed with Epi Info 7.
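As an illustration of the association tests named above, the sketch below applies chi-square and Fisher exact tests to a 2×2 contingency table with placeholder counts; scipy and the specific counts here are assumptions for illustration only (the original analysis used Epi Info 7).

```python
# Illustrative association test on a hypothetical 2x2 table.
# Counts are placeholders, NOT the study data.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: abnormal / normal karyotype; columns: women / men
table = [[12, 2],
         [151, 161]]

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, _expected = chi2_contingency(table)

print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
print(f"Chi-square:   chi2 = {chi2:.2f}, p = {p_chi2:.4f} (dof = {dof})")
```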
Results
A total of 163 couples (326 cases) with a history of recurrent miscarriage were examined. The median age of the female partners was 31.94 ± 0.70 years, whereas that of the male partners was 36.61 ± 4.94 years. The number of recurrent abortions varied from 2 to 7 per couple. Chromosomal abnormalities were present in 14 cases (8.5%). They were found in 6.7% (5/74) of couples with a history of two abortions, in 10.7% (7/65) of those with three abortions and in 8.3% (2/24) of those with four or more abortions. Women were more frequently affected than men, with prevalences of 7% and 0.6%, respectively (P < 0.05). No couple presented an abnormal karyotype in both partners. The number of chromosomal abnormalities decreased significantly with the number of miscarriages (R² = 0.58; P = 0.03; DF = 5) (Figure 1). Among the 14 abnormalities, 6 were structural aberrations and 4 were numerical anomalies. Four other cases presented heteromorphisms, namely qh+ (increased secondary constriction) on chromosome 9 and s+ (satellite increase) on chromosomes 13, 14 and 22 (Table 1). The majority of the structural abnormalities were balanced reciprocal translocations, whereas a Robertsonian translocation was found in only one patient, involving chromosomes 13 and 14 (Table 1).
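A minimal sketch of the trend analysis reported above, regressing the number of abnormal karyotypes per group on the number of miscarriages using the three group counts quoted in the text (5/74, 7/65, 2/24). Ordinary least squares is an assumption about how the published fit was obtained, and with only three of the reported groups this toy will not reproduce R² = 0.58 exactly.

```python
# Least-squares trend of the number of abnormalities vs. miscarriage count.
import numpy as np

miscarriages = np.array([2.0, 3.0, 4.0])   # miscarriages per couple (>=4 pooled)
abnormal = np.array([5.0, 7.0, 2.0])       # abnormal karyotypes in each group

slope, intercept = np.polyfit(miscarriages, abnormal, 1)
pred = slope * miscarriages + intercept
r2 = 1 - np.sum((abnormal - pred) ** 2) / np.sum((abnormal - abnormal.mean()) ** 2)

print(f"slope = {slope:+.2f} abnormalities per additional miscarriage, R^2 = {r2:.2f}")
```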
Discussion
Recurrent miscarriage continues to be a challenging reproductive problem for the patient and clinician. More than 50% of spontaneous miscarriages are caused by chromosomal abnormalities in the embryo or fetus [14]. The genetic etiology of multiple spontaneous pregnancy losses includes an unbalanced chromosome rearrangement, which may result from one parent being a carrier of a balanced chromosome rearrangement [15].
Several studies have been carried out to determine the prevalence of chromosomal aberrations among couples with recurrent miscarriage. This prevalence ranges widely from 2.7% [16] to 13.9% [15]. In our study, we found that the incidence of chromosomal abnormalities among couples with two or more miscarriages was 8.5%. It was close to that reported by Frikha et al. [17], higher than those reported by Flynn et al. [12], Dutta et al. [2] and Elghezal et al. [3], and lower than that described by Mozdarani et al. [15]. These differences may be related to sample sizes and to differing inclusion criteria (Table 2).
In our study, the number of chromosomal abnormalities decreased significantly with the number of miscarriages, as found by Frikha et al. [14], contrary to reports in the literature of a significant increase [18,19]. However, Carp et al. showed that the prevalence of chromosomal aberrations was independent of the number of previous abortions [13]. Overall, chromosomal aberrations are the cause of 50% of first-trimester spontaneous abortions [20]. In our study, all patients with chromosomal abnormalities presented spontaneous abortions in the first trimester.
As also reported in other studies, structural chromosomal aberrations were the most common abnormalities detected here (6/14). The literature reports that only 0.7% of the normal population carry structural aberrations; the rate is 2.2% in women who have had one abortion, increases to 4.8% after two abortions and reaches 5.2% after three [21]. The most frequent structural chromosome abnormalities in recurrent miscarriage are translocations: reciprocal translocations (62%), Robertsonian translocations (16%), inversions (16%), and deletions and duplications (3%) [22]. The prevalence of balanced translocations among couples with recurrent abortion ranges from 0% to 31% across studies; the reason for this wide variation is not clear [18,23].
The structural chromosome abnormalities that we encountered comprised balanced reciprocal translocations (3/14), a Robertsonian translocation (1/14) and inversions (2/14). The distribution of structural chromosomal rearrangements in our study is similar to that reported worldwide.
Among human chromosomes, chromosome 9 shows the highest frequency of structural heteromorphism, a natural variation that occurs in 1-2% of individuals in the general population and is transmitted through families as a Mendelian trait [22]. In our series, the rate of heteromorphism was 2.4% and included 9qh+ and s+. Whether heteromorphisms can cause disease remains controversial. Uehara et al. [24] reported that inv(9) … [26]. Further analysis in a "control" Tunisian population must be carried out before conclusions can be drawn about the impact of these heteromorphisms.
Overall, our study agrees with several previous studies indicating an increase in the number of balanced chromosomal translocations in couples with two or more abortions compared with the general population. Numerical chromosomal aberrations are less frequently encountered among couples with RM; those aberrations are usually sex chromosome aneuploidies and occur at low frequency (0.15% of cases) [27]. In our study, we found 4/14 cases with X chromosome mosaicism, with the proportion of monosomy X cells varying between 6% and 22%.
Conclusion
Our study confirms that chromosomal abnormality is one of the important factors contributing to RM. Chromosomal analysis is a necessary part of the etiological work-up in couples with recurrent miscarriage. The identification of chromosomal abnormalities facilitates counseling and appropriate management.
What is known about this topic
A chromosomal abnormality in one partner is found in 3% to 6% of RM couples, which is ten times higher than in the background population; the reported incidence of chromosomal abnormalities among couples with RM ranges widely from 2.7% to 13.9%; the number of chromosomal abnormalities has been reported to increase significantly with the number of miscarriages; structural chromosomal aberrations are the most common abnormalities.
What this study adds
In our study, the incidence of chromosomal abnormalities among couples with two or more miscarriages was 8.5%; the number of chromosomal abnormalities decreased significantly with the number of miscarriages; structural aberrations were the most frequent abnormalities, but 4 patients presented numerical anomalies and four other cases presented minor heteromorphisms, namely qh+ (increased secondary constriction) on chromosome 9 and s+ (satellite increase) on chromosomes 13, 14 and 22.
Competing interests
The authors declare no competing interests.
Authors' contributions
Wiem Ayed oversaw sample collection and recruitment, supervised all clinical aspects of the work, and interpreted the results and data analysis. Islem Messaoudi, Zouhour Belghith and Wajih Hammami contributed to data analysis. Imen Chemkhi, Nabila Abidli, Helmy Guermani and Rim Obay oversaw sample collection and undertook cytogenetic analysis. Ahlem Amouri supervised all cytogenetic analyses and interpreted the results, had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. All authors contributed to the report. All authors read and approved the final manuscript. Table 1: List of chromosomal abnormalities identified in 163 couples with recurrent miscarriage. Table 2: Frequency of chromosomal abnormalities in our study and other populations. Figure 1: The relation between chromosomal abnormalities and the number of miscarriages | 2018-04-03T04:40:12.912Z | 2017-09-29T00:00:00.000 | {
"year": 2017,
"sha1": "5dce1ccd0ccbba4ac4e61c9be31f7d87a731f7eb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.11604/pamj.2017.28.99.11879",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5dce1ccd0ccbba4ac4e61c9be31f7d87a731f7eb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255800391 | pes2o/s2orc | v3-fos-license | Radiation-induced senescence: therapeutic opportunities
The limitation of cancer radiotherapy does not derive from an inability to ablate tumor, but rather from the difficulty of doing so without excessively damaging critical tissues and organs and adversely affecting the patient's quality of life. Although cellular senescence is a normal consequence of aging, there is increasing evidence that radiation-induced senescence in both tumor and adjacent normal tissues contributes to tumor recurrence, metastasis and resistance to therapy, while chronic senescent cells in normal tissues and organs are a source of many late damaging effects. In this review, we discuss how to identify cellular senescence using various biomarkers, and the role of the so-called senescence-associated secretory phenotype in the pathogenesis of radiation-induced late effects. We also discuss therapeutic options to eliminate cellular senescence using senolytics and/or senostatics. Finally, we discuss cellular reprogramming, another promising avenue to improve the therapeutic gain of radiotherapy.
Introduction
Cellular senescence, a normal consequence of aging, is characterized by irreversible cell cycle arrest in response to various stress stimuli, resistance to apoptosis, and the senescence-associated secretory phenotype (SASP). Cellular senescence is a cell fate decision and a normal physiological event, which plays essential roles in development, prevention of cancer, and the wound-healing process. However, when cells are subjected to sustained sub-lethal injury, including radiation therapy or chemotherapy, continued oxidative stress and chronic inflammation prompt entry into cellular senescence. The chronic state of radiation-induced senescence, together with the secretion of pro-inflammatory factors known as the SASP (see Fig. 1), contributes to the major pathology of radiation-induced normal tissue and organ injury. In this article, we systematically review research findings and highlight the contributions of senescent cells to the pathophysiology of radiation-induced normal tissue injury, as well as therapeutic options to eliminate radiation-induced senescence.
Radiation-induced normal tissue injury is sustained by chronic production of reactive oxygen species (ROS) and pro-inflammatory cytokines and chemokines, resulting in a deterioration of tissue and organ function [1][2][3]. Damaging ROS may arise from several sources, including infiltrating activated leukocytes and macrophages. Further, other cells, such as fibroblasts, can be stimulated by pro-inflammatory cytokines to produce ROS. Tissue hypoxia resulting from vascular damage is another continual source of ROS generation [4]. The generation of these reactive molecules is part of the innate immune system and helps to rapidly clean the wound to reduce injury, but excessive production of ROS can lead to severe tissue damage, including fibrosis and even neoplastic transformation. Strategies aimed at blocking effector molecules or otherwise reducing oxidative stress are attractive for preventing or mitigating radiation toxicity. For the last three decades or so, we and others have shown mitigating effects for a variety of agents, including superoxide dismutase mimetics, statins, stem cell mobilizers and angiotensin-converting enzyme inhibitors [1,[5][6][7][8][9]. In addition, pan-suppression of macrophage infiltration and cytokine/chemokine expression using a small molecule had a most impressive mitigating effect in normal tissues, including skin and brain [5,6].
An unmet critical question of normal tissue radiobiology is "What is the source of chronic ROS and inflammation?" The authors contend that one of the major sources of chronic inflammation is radiation-induced senescent cells.
Biomarkers of radiation-induced senescence
Attempts have been made to classify the molecular pathways involved in cellular senescence; using a modification of the scheme proposed by Kumari and Jat [10], we propose four groups: (1) the DNA damage response (DDR) pathway, (2) mitochondrial dysfunction, (3) oncogene activation and (4) other stresses. These are illustrated in Fig. 2. These molecular pathways have been implicated in the aging of normal tissues (as described below) as well as in cancer promotion and aggressiveness (also described below). Since radiation exposure is often used to model accelerated aging, it follows logically that the same four molecular pathways play a role in normal tissue injury following irradiation. The four molecular pathways are summarized in the following paragraphs.
Biomarkers-DNA damage response pathway
DNA damage that results from proliferative exhaustion secondary to shortened telomeres, or from genotoxic stress either dependent on or independent of reactive oxygen species (ROS), is orchestrated by the Ataxia Telangiectasia Mutated (ATM) and ATM- and Rad3-related (ATR) kinases. ATM and ATR belong to the class-IV phosphoinositide 3-kinase (PI3K)-related kinase (PIKK) family and act as the sentries of genome stability; upon sensing DNA damage, they induce specific (i.e. G1/S, G2/M and S-phase) cell cycle checkpoints through p53, increasing p21WAF1/CIP1 expression. p21WAF1/CIP1, also known as cyclin-dependent kinase inhibitor 1 or CDK-interacting protein 1, is a cyclin-dependent kinase inhibitor (CKI) capable of inhibiting all cyclin/CDK complexes [11], though it is primarily associated with inhibition of CDK2 [12,13]. Of primary importance to cellular senescence, p21WAF1/CIP1 binds to and inhibits the activity of CDK2-Cyclin E or CDK4/6-Cyclin D, leading to cell cycle arrest at G1/S.
Fig. 1 Senescence genesis: organelle-specific molecular pathways and consequences. Environmental stresses including ionizing radiation, cytotoxic agents and other stressors cause cells to express the senescent phenotype. Senescent cells are characterized by (1) increased lysosomal activity and decreased autophagy, (2) expression of histone γ-H2AX (a marker of DNA strand breaks and telomere shortening), increased p16 and p21 (indicative of cell cycle arrest) and the DNA damage response (DDR), an evolutionarily conserved signaling cascade, and (3) increased production of reactive oxygen species. These collectively promote a pro-inflammatory senescence-associated secretory phenotype (SASP). The consequences of these processes are increased chronic inflammation and fibrosis, promotion of tissue remodeling and alteration of both innate and adaptive immunity.
Biomarkers-mitochondrial dysfunction
Mitochondrial dysfunction impacts similar cell cycle checkpoints through p53 expression, increasing p21WAF1/CIP1 and blocking cell cycle progression. Stresses such as low glucose, hypoxia, ischemia, heat shock and a low NAD+/NADH ratio deplete cellular ATP and increase AMP-activated protein kinase (AMPK) activity. AMPK is a master regulator of cellular energy homeostasis; its persistent activation leads to accelerated p53-dependent cellular senescence.
Biomarkers-oncogene activation
Oncogene-induced cellular senescence is a complex molecular program characterized by suppression of cell proliferation, triggered in response to the aberrant activation of oncogenic signaling or the inactivation of a tumor-suppressor gene [14,15]. For example, Ras, traditionally believed to promote unrestrained proliferation, has been implicated in oncogene-induced cellular senescence [16]. RAS is a GTPase that is frequently mutated in cancer and affects a variety of cancer-driving processes [17]. RAS proteins, essential components of signalling pathways that emanate from cell surface receptors, lead to accelerated p53-dependent cellular senescence through both the Raf/p38MAPK pathway and the PI3K/AKT/mTOR pathway.
The suppression of a cell death response by oncogenic RAS is a consequence of a perturbation of the homeostatic balance between pro-apoptotic and anti-apoptotic signals. To keep up with the high energy needs of growing cells, the survival of RAS-transformed cells is further aided by metabolic reprogramming towards glycolysis, mediated by MAPK- and PI3K-dependent regulation of hypoxia-inducible factor 1α (HIF-1α). Oncogenic RAS modulates the tumour microenvironment by promoting pro-angiogenic mechanisms and by altering host-mediated immune responses, including HIF-mediated immune suppression. It is interesting to note that Song and colleagues propose that combining HIF-1α inhibitors with small molecules such as metformin, and with immune checkpoint blocking antibodies, may boost anti-tumor immunity [18,19] and enhance the anti-cancer effectiveness of high-dose radiation therapy [20].
Biomarkers-available evidence and lack of consensus
It is a consensus among many senescence biologists that a universal marker of cellular senescence may never be found [22]. This is partly because of the heterogeneity and diversity of tissues and their divergent responses to the plethora of genotoxic stimuli. The most common approach is to identify a panel of different markers based on cell cycle arrest (e.g. p16, p21), an increased lysosomal compartment [e.g. senescence-associated β-galactosidase (SA-β-gal)], structural changes associated with the DNA damage response (DDR; e.g. γH2AX) and additional traits specific for the SASP (e.g. increases in ROS, pro-inflammatory cytokines/chemokines, tissue proteases such as MMPs, etc.). Table 1 provides some common characteristics of cellular senescence that have the potential to be used as biomarkers. Cell cycle exit is controlled by activation of the p53/p21 and/or p16/Rb tumor suppressor pathways. Unlike quiescent cells, senescent cells are non-responsive to mitogenic or growth factor stimuli. Consequently, increased expression of CDKN1A or CDKN2A RNA, or of the proteins they encode (p21 and p16Ink4a, respectively), is characteristic of senescent cells. However, these markers are not completely definitive because they may be induced during reversible cell cycle arrest or differentiation in specific cell types [23]. β-galactosidase activity, which is found in many normal cells under physiological conditions, is significantly amplified in senescent cells as a result of increased lysosomal content [24]. Since SA-β-gal activity is detected in most senescent settings, both in vitro and in vivo, it is considered a de facto hallmark of senescence. However, SA-β-gal operates with a pH optimum of 6, in contrast to other lysosomal β-galactosidases (pH optimum 4-4.5), which necessitates careful controls that are not always reported [23]. Furthermore, some cells, notably hippocampal CA2 pyramidal and cerebellar Purkinje neurons, express high endogenous levels of SA-β-gal even at young ages [25], perhaps due to metabolic demands [26]. It is of note that components of the SASP have utility as confirmatory biomarkers of senescence, but are not standalone biomarkers, since most of them are not specific to senescence. Although a core SASP profile may exist, it has been recognized since the discovery of the SASP that its protein components can vary depending on cell type and inducing stimulus, as well as being temporally dynamic [27, 28; see below]. Nevertheless, despite these difficulties, several studies document de novo expression of senescence-associated markers after therapeutic irradiation. Wang et al. [29] reported that total body irradiation selectively induced murine hematopoietic stem cell (HSC) senescence, assessed with two biomarkers, p16Ink4a and SA-β-gal. Of interest, the induction of HSC senescence was associated with a prolonged elevation of p21, p19ARF and p16Ink4a mRNA expression. In contrast, there were no changes in the biomarkers of irradiated hematopoietic progenitor cells [29]. Likewise, ionizing radiation induced endothelial senescence, as shown by the same biomarkers of senescence: SA-β-gal, p16Ink4a and p21 [30]. Radiation-induced pulmonary fibrosis (RIF) is one of the limiting factors in the treatment of advanced lung cancer with radiation therapy. Whether cellular senescence is responsible for RIF remains to be answered.
Studies to date indicate that several cell types bearing biomarkers of senescence have been identified, including SA-β-gal-positive alveolar epithelial cells, putative alveolar stem cells, and mesenchymal stem cells [31][32][33].
Several investigators have identified markers of cellular senescence 2-12 months after whole-brain irradiation. Wong and co-workers [34] observed increased expression of the cell cycle-related regulators p16Ink4a and p19ARF in mouse hippocampus after 5 Gy, while Suman et al. [35] observed increased expression of p16Ink4a, p19ARF and p53, as well as indicators of oxidative damage, in cerebral cortex after 1.6-2 Gy. Elevated cortical levels of RNA for Cdkn1a (p21), Cdkn1b, Cdkn2a transcript 1 (p19ARF) and Cdkn2a transcript 2 (p16Ink4a) were also reported [34,36]. Irradiation induced senescence in mouse neural stem cells, as indicated by increased expression of p16Ink4a, γH2AX, markers of reactive oxygen species and the SASP factor Il-6 [23,34,37,38]. Others showed a predominance of cellular senescence in astrocytes after radiotherapy for malignant gliomas, with increased expression of p16Ink4a and p21, as well as secretion of HGF and Il-6 as SASP factors [39]. Increased p53, but not telomere length, is an important mediator of astrocyte senescence [26]. Interestingly, the elimination of radiation-induced senescent astrocytes using a Bcl-2 inhibitor attenuated glioblastoma recurrence [36,40,41], while injecting irradiated, senescent human glioblastoma multiforme cells into immunocompromised mice resulted in faster tumor growth compared with non-irradiated, non-senescent cells [42].
Similar biomarkers of radiation-induced senescence in the mouse skin have been identified [43]. Many researchers have identified radiation-induced senescence in cultured cells from many organs in vitro [44][45][46][47], and even from transformed cell lines [24].
Senescence-associated secretory phenotype (SASP)
The SASP is a phenotype of senescent cells wherein those cells secrete a complex mixture containing hundreds of proteins, including pro-inflammatory cytokines/chemokines, immune modulators, tissue-damaging proteases, factors that can adversely affect stem and progenitor cell function, homeostatic factors, ceramides, bradykinins and growth factors [48][49][50][51]. Senescent cells also release exosomes and ectosomes carrying enzymes, microRNAs, DNA fragments, and the anti-apoptotic protein Bcl-xL. Although the early phase of the SASP has biologically beneficial effects in wound healing and tissue remodeling, the SASP is the primary cause of the detrimental chronic effects of senescent cells. Senescence not only affects events inside the cell but can affect the surroundings through paracrine loops and/or entry into the circulation [28]. The SASP is heterogeneous, although the full extent of this heterogeneity is only starting to be explored. Transcriptomic analyses of senescent cells [52][53][54][55][56][57] assume that gene expression changes will be predictive of SASP constituents. SASP proteomic atlases are also starting to be generated [46]. Cell type appears to be the most significant factor affecting SASP constituent heterogeneity [53,55]. However, the inducing stimulus is also important and has led some investigators to group SASP factors into functional categories with unique acronyms: specific SASP factors regulated by NF-κB signaling (NASP), p53-associated factors (PASP) and Stat3-regulated factors have been identified [58,59]. For example, viral-vector-driven over-expression of p16 [60] or pharmacological treatment with CDK4/6 inhibitors [59] fails to induce NF-κB-regulated SASP factors such as IL-6, but rather induces RNAs encoding factors associated with p53, including Igfbp3, Lif and Tollip. Finally, SASP constituents vary over time [51]. Thus, definitive compendia of SASP components will require additional investigation. Senescent cells are highly metabolically active, producing large amounts of the above-mentioned SASP factors, which is why senescent cells constituting only 2-3% of the cells in a tissue can be a major cause of aging-associated diseases [61,62]. Given that humans contain an estimated 37 × 10^12 cells, and that a professional secretory organ such as the pituitary contains about 1 × 10^6 cells, even a small fraction of senescent cells vastly outnumbers professional secretory cells [63] and can produce widespread systemic effects, including within the immune system. SASP factors such as IL-6 and TNFα enhance T-cell apoptosis, thereby impairing the capacity of the adaptive immune system [64]. Chronic inflammation due to the SASP can also suppress immune system function. Immune system responses to senescent cells and senolytics have been reviewed recently [65].
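To make the scale of this argument explicit, a back-of-envelope estimate using the figures quoted above (taking the lower end, 2%, of the quoted senescent fraction):

```latex
N_{\mathrm{senescent}} \;\approx\; 0.02 \times 37 \times 10^{12}
\;\approx\; 7 \times 10^{11} \;\gg\; 1 \times 10^{6}
\quad (\text{pituitary cells}),
```

i.e. even the smallest quoted senescent fraction outnumbers this professional secretory population by almost six orders of magnitude.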
The SASP is regulated at multiple levels, including transcription, translation, mRNA stability and secretion. One of the important regulatory pathways is the mammalian target of rapamycin (mTOR). Interleukin-1 alpha (IL-1α) is found on the surface of senescent cells, where it contributes to production of the SASP; mTOR inhibition reduces IL-1α signaling and thereby the transcripts of numerous SASP components [66,67]. The use of mTOR inhibitors has shown senostatic effects in various animal studies [68, 69; see below].
The role of radiation-induced senescence in tumor tissue
While most research on cellular senescence has been performed on non-cancerous cells, cancer cells can equally be driven into senescence by a variety of stress and damage signals, including radiation and cytotoxic chemotherapy. Among the first responders in the DNA damage response are non-homologous end-joining and homologous recombination, the two main pathways for repairing double-strand breaks, which are potent stimuli for inducing cellular senescence. Senescent cells exhibit apoptosis resistance, sustained metabolic activity and secretion of pro-inflammatory and proliferative molecules (the SASP). The effect of the SASP is highly dependent on context and cell type, and it varies across the different stages of cancer progression [70,71]. Factors influencing the role of cellular senescence in tumor tissue vary widely, owing in part to tumor tissue heterogeneity, oncogenic status, differential immune recognition of acute versus chronic senescence, and the radiation dose regimen, to name a few [72][73][74]. For example, acute induction of cellular senescence is considered important for cancer prevention because it stimulates the immune system to rapidly eliminate genetically unstable cells. In chronic cellular senescence, by contrast, persistent stress signals (ROS, chronic inflammation) lead to an accumulation of dysfunctional senescent cells that immune cells fail to remove; this creates a tumor-promoting environment through secretion of SASP factors including IL-1α/β, IL-6/8, MMPs, VEGF, TGF-β and HGF. Such a tumor microenvironment stimulates tumor cell proliferation, angiogenesis and epithelial-to-mesenchymal transition, and these factors together increase tumor radioresistance. Chronic cellular senescence also contributes to radiation-induced late effects in normal tissues and organs, such as lung and skin fibrosis and cognitive dysfunction/necrosis, to name a few. Overall, the SASP of senescent cancer cells is considered primarily detrimental, promoting therapy resistance, immunosuppression and metastasis [70,75].
It is well established that the efficacy of tumor radiotherapy depends on the total radiation dose, the dose per fraction and the duration of the fractionation regimen. The usual fraction size in clinical radiotherapy ranges from 1.8 to 2.5 Gy. At such doses most tumor and normal cells sustain sub-lethal injury that can result in cellular senescence, while a fraction of tumor cells undergoes lethal cell death through either apoptosis or mitotic catastrophe, as shown in Fig. 1. When the dose per fraction increases above 10 Gy, most cells sustain lethal, irreparable damage, while a fraction of tumor cells undergoes cellular senescence [76]. It is reasonably well established that radiosurgery/stereotactic body radiotherapy (SBRT) has consistently shown superior tumor control relative to conventional fractionated radiotherapy [77,78]. Although SBRT was initially introduced to exploit the superior geometrical distribution of radiation dose to a small target tumor volume relative to the surrounding normal tissue, there is mounting evidence that additional radiobiological factors contribute to the increased tumor control rate, perhaps including the vascular and immune effects of SBRT [79,80]. We posit that radiation-induced cellular senescence may also be an important contributing factor. Since the quantity of radiation-induced cellular senescence in normal tissue is disproportionately high in conventional RT (usually 20-30%) compared with SBRT (less than 5%), both the tumor recurrence rate and normal tissue damage would be expected to be higher in conventional fractionated RT than in SBRT, in part owing to the detrimental effects of the SASP of senescent cells discussed in the foregoing section. Indeed, a recent paper shows that eliminating radiation-induced senescent astrocytes reduces the recurrence of radioresistant malignant glioma in the brain [40].
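As a hedged illustration of why fraction size matters here, the standard linear-quadratic biologically effective dose — a textbook radiobiology quantity, not a formula used explicitly in the source — can be compared for the two regimens, assuming a typical tumor value of α/β = 10 Gy:

```latex
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right):\qquad
30 \times 2\,\mathrm{Gy}\;\Rightarrow\; 60\left(1 + \tfrac{2}{10}\right) = 72\,\mathrm{Gy},\qquad
3 \times 18\,\mathrm{Gy}\;\Rightarrow\; 54\left(1 + \tfrac{18}{10}\right) \approx 151\,\mathrm{Gy}.
```

On this measure an SBRT-style schedule delivers a far larger effective tumor dose per course, consistent with the superior control rates cited above.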
Senolytics and senostatics
Senolytics are a class of drugs that selectively eliminate senescent cells. Multiple pharmacological strategies, including small molecules, peptides, and antibodies, are under investigation to remove senescent cells [81][82][83][84][85][86]. Senescent cells are generally resistant to apoptosis. Some senolytic agents are cell and tissue specific; others are not. To date, five or six different signaling pathways have been identified and targeted drugs are being developed; these include Bcl-2, the PI3K/AKT/mTOR and HIF-alpha pathways, tyrosine kinase inhibitors and HSP-90 inhibitors, to name a few [83]. Examples of senolytics include dasatinib, quercetin, fisetin, and navitoclax [82]. However, the current generation of senolytics targeting these proteins has some limitations in terms of safety, specificity and breadth of activity. It is interesting to note that most senolytic drugs were initially developed as anti-cancer agents, the so-called targeted cancer drugs, since some of the signaling pathways in tumor and senescent cells overlap. Our new preliminary data illustrate this point, showing the potential of agents to act as senolytics as well as anti-cancer drugs. Alvespimycin (17-DMAG), an HSP-90 inhibitor, reduced normal tissue damage after radiation exposure without compromising radiotherapy effectiveness [87]. The mitigating effect of 17-DMAG alone on acute skin damage and late effects in response to a single 30 Gy exposure is shown in Fig. 3. Using another class of senolytics, Kirkland and his team have shown functional and structural improvements in cardiovascular function and in radiation-induced muscle weakness using the combined senolytics dasatinib and quercetin [82]. In a small Phase I clinical study without a placebo control, the dasatinib and quercetin combination appeared to be well tolerated and to alleviate frailty in elderly men and women with a serious lung disease.
Fig. 3 (caption, partial) Panels A-D show a statistically significant separation between groups. The semi-quantitative skin injury scale is 1 = normal, 2 = erythema, 3 = dry desquamation, 4 = moist desquamation and 5 = necrosis; typically, scores of 3 and under resolve with time whereas scores greater than 3 do not. Each curve is from 5 mice, and error bars represent the standard deviation. Panels E and F show the effect of a senolytic agent and a senostatic agent to increase the therapeutic gain: each strategy elicits an anti-cancer effect, and the combined administration mitigates normal tissue radiation injury. Panel E illustrates an increased A-549 tumor growth delay following administration of either a senolytic agent, 17-DMAG, or a senostatic agent, metformin; at day 35, tumor volumes following 15 Gy + metformin were statistically different from those after 15 Gy alone (other groups did not reach significance and only trends were observed). Panel F shows that combining a senolytic and a senostatic agent mitigates radiation-induced skin injury in C57BL/6 mice (data from a separate experiment from that shown in A-D); note that combining the senostatic (metformin) with the senolytic (17-DMAG) did not abrogate the mitigation of radiation injury. At day 50, the average damage scores of mice receiving either 30 Gy + 17-DMAG or 30 Gy + 17-DMAG + metformin were statistically different from those of mice receiving 30 Gy alone (although adding metformin did not improve the average damage score). Each data point represents at least 10 mice for the tumor growth delay study and at least 5 mice for the skin damage study; error bars represent the standard deviation.
Other early data on effectiveness in humans have been mixed, although 10 additional open-label trials are ongoing, including one in HSC transplant survivors (clinicaltrials.gov). Using another class of senolytics, navitoclax, a Bcl-2 family inhibitor, improved radiation-induced pulmonary fibrosis [88] and radiation-induced hematotoxicity and age-related HSC dysfunction [89], and delayed malignant glioma recurrence by eliminating radiation-induced senescent astrocytes [40]. The potential of navitoclax to mitigate normal tissue radiation damage while sensitizing tumors to radiation cytotoxicity is further supported by its ability to overcome hypoxia-driven radioresistance [90]. Although navitoclax has been evaluated clinically in chronic lymphocytic leukemia, its main dose-limiting toxicity has been thrombocytopenia. As with many other current cancer therapeutics, the most likely scenario for using senolytics would be a combinatorial approach.
In contrast to senolytics, senostatics do not kill senescent cells but inhibit paracrine signaling and thus limit the spread of senescence via the so-called bystander effect. Antioxidants, including multiple flavonoids, polyphenols and other phytochemicals, may have a senostatic effect. Inhibitors of the mTOR pathway and of mitochondrial function (complex I) have significant senostatic potential [91,92]. Metformin and rapamycin are examples of senostatic agents [84]. Unlike senolytics, which target a specific signaling pathway, senostatics affect not only senescent cells but also functions unrelated to senescence. Nevertheless, short-term treatment of mice with rapamycin, with metformin (an anti-diabetic drug), or with dietary restriction decreased the frequencies of cells positive for multiple senescence markers [93][94][95]. Rapamycin appeared to mimic the effects of calorie restriction and induced autophagy (a process whose decline is associated with a number of age-related diseases). A clinical trial of anti-aging in humans is being planned using metformin at 1500 mg per day, according to the American Federation for Aging Research. Many questions remain to be addressed before launching large-scale human trials using senolytics, senostatics or both, including the dosage, timing and duration of treatment (i.e. intermittent vs continuous); furthermore, endpoints for evaluation, such as biomarkers of senescence and functional measures of therapeutic efficacy, need to be defined.
An area of future study is to test whether combining senolytics and senostatics can increase tumor control while simultaneously reducing radiation-induced normal tissue injury. It is of note that metformin, acting as an inhibitor of NF-κB, improved cancer cytotoxicity in vitro and in vivo by interfering with senescence-associated cytokine production [96]. Figure 3E illustrates the potential benefit of combining 17-DMAG with metformin. Metformin alone in mice mitigated radiation injury to the same extent as did the senolytic 17-DMAG in the same animal model of skin and muscle injury (data not shown). Interestingly, combining the senolytic and senostatic in this model did not further reduce radiation damage; one interpretation is that the target of senolytic and senostatic mitigation of tissue injury is the same (Fig. 3F).
Cellular reprogramming
Another therapeutic option to eliminate or reverse cellular senescence comes from cellular reprogramming. Expressing the so-called Yamanaka factors, OCT4, SOX2, KLF4 and c-MYC (OSKM), converts somatic cells into induced pluripotent stem cells (iPSCs). Ocampo et al. have shown the potential of partial reprogramming for tackling aging [97][98][99]. Unlike previous in-vivo uses of the Yamanaka factors, which could initiate cancer development or teratoma formation, Ocampo and co-workers successfully demonstrated that tumor formation can be avoided by short-term induction of OSKM. Further, cyclic induction of OSKM in vivo ameliorated hallmarks of aging and improved the regenerative capacity of pancreas and muscle following injury in physiologically aged mice. More recently, Sarkar, Rando and colleagues described a feasible way to deliver Yamanaka factors to cells taken from patients with osteoarthritis, by transiently exposing cultured cells to small doses of the factors [100]. The results showed not only restoration of lost functionality in diseased cells and aged stem cells but also preservation of cellular identity. It is also noteworthy that they used a non-integrative, mRNA-based platform for transient cellular reprogramming. In-vivo transient expression of nuclear reprogramming factors holds great promise for the reversal of senescence and for tissue repair and regeneration. Reprogramming cells in vivo has been shown to be possible, with recent clinical successes employing CRISPR technology (e.g., in patients with genetic diseases such as sickle cell anemia) [101,102].
Conclusion
There is mounting evidence that radiation-induced senescence in both tumor and normal tissues contributes to tumor recurrence, metastasis and resistance to therapy, while senescent cells in normal tissues and organs are a source of many late damaging effects. The authors propose the hypothesis that a major source of chronic ROS and inflammation is radiation-induced senescent cells; this has not been confirmed and is an area of active research that may lead to new therapeutic options. Advances in understanding the cellular and molecular pathways of cellular senescence provide novel strategies to enhance the therapeutic ratio of radiation therapy. Pre-clinical data on radiation-induced senescence and late tissue damage obtained with senolytics and senostatics provide a promising avenue for radiotherapy research.
"year": 2023,
"sha1": "96e648f891d270c74f33ba5d47b73b3fa6e4bd2a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "96e648f891d270c74f33ba5d47b73b3fa6e4bd2a",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17464550 | pes2o/s2orc | v3-fos-license | Anomalous Fermion Number Non-Conservation on the Lattice
The anomaly for the fermion number current is calculated on the lattice in a simple prototype model with an even number of fermion doublets.
INTRODUCTION
Fermion number, which is the sum of baryon number and lepton number (B + L), is not conserved in the Standard Model [1]. This is due to the anomaly in the fermion current. Under "normal" conditions there is, however, a strong suppression factor exp(−4π/α_W) ≃ 10^{−150}, which makes (B + L) violation unobservable. At high temperature and/or high fermion densities (i.e. at high energies) the non-conservation is amplified. This may explain the small baryon asymmetry of the universe, which could arise via this mechanism at the cosmological electroweak phase transition. (For references to the extensive literature on this subject, see the reviews in ref. [2].) The lattice formulation of anomalous fermion number non-conservation is problematic [3], because it involves the chiral SU(2)_L gauge coupling and, as is well known, there is a difficulty with chiral gauge fields on the lattice (see, for instance, the review [4]). There is, however, an approximation of the electroweak sector of the Standard Model which can be studied with standard lattice techniques, namely the limit in which the SU(3)_colour ⊗ U(1)_hypercharge gauge couplings are neglected. The usefulness of this limit for lattice studies was particularly emphasized in earlier works by Lee and Shrock (see [5] and references therein). In their phase structure studies staggered fermions were used; here Wilson fermions are considered, which naturally lead to the mirror fermion action for chiral gauge theories [6]. There is now a growing body of experience with this action, without the SU(2)_L gauge field, from numerical simulation studies of the allowed region of renormalized quartic and Yukawa couplings [7,8,9]. The inclusion of the SU(2)_L gauge field in the simulation algorithms is straightforward; therefore one can start to speculate about the possibility of exploring some features of the violation of fermion number conservation. In order to understand the mechanism of fermion number non-conservation on the lattice, let us see how the relevant anomalous Ward-Takahashi identity arises in this formulation.
LATTICE ACTION
Let us consider a simple prototype model: the extension of the standard SU(2)_L Higgs model by an even number 2N_f of fermion doublets. In the Standard Model we have N_f = 6 (for simplicity, we consider Dirac neutrinos, but the massless neutral right-handed neutrinos decouple [10]). In what follows we take, for simplicity, N_f = 1, but the extension to N_f > 1 is trivial. The lattice action depends on the matrix scalar field ϕ_x = φ_{0x} + iφ_{sx}τ_s (with four real fields φ_{S=0,...,3}) and the fermion doublet fields ψ_{(1,2)x}. The standard Higgs-model action is the usual combination of scalar hopping, quartic self-coupling and plaquette terms (a sketch of its conventional form is given below). The fermionic part contains the chiral gauge fields (with U_{xµ} ∈ SU(2) and P_{L,R} = (1 ∓ γ_5)/2); here ε = iτ_2 acts in isospin space, and C is the fermion charge conjugation matrix. The Yukawa couplings G_{1,2} can, in general, be arbitrary diagonal matrices in isospin space but, for simplicity, we shall here only consider the case of degenerate doublets (G_{1,2} proportional to the unit matrix).
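The scalar-sector display equations are not reproduced in the text above; as a hedged sketch, the standard lattice SU(2) Higgs action in conventions common to this literature reads (the paper's own normalization may differ):

```latex
S_\varphi \;=\; \beta \sum_{pl}\Big(1 - \tfrac{1}{2}\,\mathrm{Tr}\,U_{pl}\Big)
\;+\; \sum_x \Big\{ \tfrac{1}{2}\,\mathrm{Tr}\big(\varphi_x^\dagger \varphi_x\big)
\;+\; \lambda \Big[\tfrac{1}{2}\,\mathrm{Tr}\big(\varphi_x^\dagger \varphi_x\big) - 1\Big]^2
\;-\; \kappa \sum_{\mu=1}^{4} \mathrm{Tr}\big(\varphi_{x+\hat\mu}^{\dagger}\, U_{x\mu}\, \varphi_x\big) \Big\}.
```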
Instead of the off-diagonal Majorana mass μ_0 and the Majorana-like Wilson term (proportional to r), it is technically more convenient to consider a Dirac-like form with ψ ≡ ψ_1 and the mirror fermion field χ. In terms of ψ and χ one obtains the mirror fermion action for chiral gauge fields [6]. This is the appropriate form of the fermion action in the phase with broken symmetry, as the investigations of the corresponding chiral Yukawa models show [11,12,7,8,9].
In the symmetric (i.e. confinement) phase, however, there is a natural alternative choice in terms of the reshuffled combinations ψ_A and ψ_B [11]. On this basis the vector-like nature of the model becomes explicit (γ_5's appear only in the Yukawa couplings). The SU(2) gauge field couples only to ψ_A, and the neutral doublet ψ_B has only its Yukawa coupling. Note the different rôles played by μ_0 in the three lattice actions: in (5) it is an off-diagonal Majorana mass, in (7) the fermion-mirror-fermion mixing mass, whereas on the basis in (8) it is a common Dirac mass for ψ_A and ψ_B. The physical interpretation of the model is, of course, given in terms of the action in (5).
Previous studies of the phase structure of the same continuum "target theory" in the staggered fermion formulation were usually done in a basis corresponding to (8), with the known differences between staggered and Wilson fermions (see, for instance, [13,14,5,15]). In many cases the Yukawa couplings were omitted, and the ψ_B field was not considered at all. Representing the fermion number anomaly both in terms of the fields in (5) and (8) is useful, because it gives the connection to the axial anomaly. This connection has recently been exploited also in ref. [16].
THE ANOMALY
On smooth background scalar and gauge fields {ϕ_x, U_{xμ}} the effective action is defined by the fermionic path integral over Ψ ≡ {ψ, χ}. An infinitesimal fermion number transformation rotates ψ and χ by opposite phases; this corresponds to the fact that the fermion number is defined to be +1 for the fields ψ_{1,2} (and hence it is −1 for χ).
A gauge-invariant fermion number current can be defined by introducing the new integration variables (ψ′, ψ̄′, χ′, χ̄′) in the path integral with action (7). The resulting anomalous Ward-Takahashi identity (12) has to be evaluated in the continuum limit, when the momenta of the external fields in lattice units are of the order a (a → 0). For small lattice spacing a the left-hand side of (12) is of order a^4 (note that for the moment we keep the bare parameters fixed; for instance, μ_0 can be of order 1). Therefore diagrammatically the contributing graphs can have at most four external field legs. Explicit evaluation shows that in the present case only those with two or three external fields (i.e. the triangle and quadrangle graphs) contribute. Introducing the SU(2) field strength as usual, the result (14) follows (this time for 2N_f fermion doublets), where the lattice integral I is given by (15). The integral I is the same as the one occurring in the chiral anomaly, and one can prove (17) (see e.g. [17,18]). Note that in the present regularization scheme no other terms on the right-hand side of (14) occur. For instance, the scalar field having Yukawa couplings to the fermions does not contribute at all (although, of course, it appears on external legs of the graphs). This is different from the non-Abelian U(N)⊗U(N) anomaly studied in ref. [19] in other regularization schemes (with different lattice actions), where the Bardeen counterterms [20] are in general non-zero. Equations (14) and (17) show that the correct continuum anomaly is reproduced at vanishing bare (Majorana) fermion mass μ_0 = 0. It is, however, interesting to investigate the μ_0 dependence of the lattice integral in (15). The numerical evaluation of the corresponding lattice sum I_L on L^4 lattices up to L = 200 shows that I = lim_{L→∞} I_L is very small, probably I(μ_0, r) = 0 for every positive μ_0 [21]. This behaviour implies that the anomaly in (14) disappears at every positive μ_0, and there is a singularity at μ_0 = 0, where according to (17) the value of I is non-zero. The derivatives of I with respect to μ_0 tend to infinity for L → ∞ on the given external bosonic field configuration; here the infinity can be produced by the summation over y because of the long-range correlation due to fermionic zero modes. From the practical point of view the behaviour of I(r, μ_0) implies that in numerical simulations one has to be careful in the extrapolation to μ_0 = 0. The lattice volume should be small enough.
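The finite-volume procedure described above (evaluate the momentum sum on L^4 lattices and extrapolate to L → ∞) can be sketched as follows. The integrand f_demo below is only a generic Wilson-type placeholder, not the actual integrand of Eq. (15), which involves the full Wilson-Majorana propagators:

```python
import numpy as np

def lattice_sum(L, mu0, r, f):
    """Evaluate I_L = (1/L^4) * sum over momenta k_mu = 2*pi*n_mu/L
    of an integrand f(K; mu0, r) on an L^4 lattice."""
    k = 2.0 * np.pi * np.arange(L) / L
    # Build the 4D momentum grid and accumulate the sum.
    K = np.meshgrid(k, k, k, k, indexing="ij")
    return f(K, mu0, r).sum() / L**4

# Placeholder integrand (NOT the one from the paper): a generic
# Wilson-type expression, used only to illustrate the L -> infinity check.
def f_demo(K, mu0, r):
    s2 = sum(np.sin(ki) ** 2 for ki in K)
    m = mu0 + r * sum(1.0 - np.cos(ki) for ki in K)
    return m / (s2 + m**2) ** 2

for L in (4, 8, 12, 16):
    print(L, lattice_sum(L, mu0=0.5, r=1.0, f=f_demo))
```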
The functional dependence of the lattice integral I(r, μ_0) on μ_0 also illustrates how the anomaly is emerging from the explicit symmetry breaking present in the lattice action (2). In the case when the cut-off can be completely removed, this does not matter. Nevertheless, in theories with scalar fields there is the well-known "triviality problem", which implies that for finite renormalized couplings the lattice spacing cannot be taken to zero. This means that the unpleasant feature of the singular dependence of the anomaly on the bare fermion mass in principle remains. However, in order to understand the situation better, one has to consider the full theory with quantized bosonic fields, where a mixing of the renormalized composite operators has to be dealt with [22,23]. | 2014-10-01T00:00:00.000Z | 1992-11-11T00:00:00.000 | {
"year": 1992,
"sha1": "48f60dd7dd77718ebbce9aa4fa94885b5bf4b292",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-lat/9211028",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e41885aaedb3c23a9564d163d17c3daf069b7d71",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
108054727 | pes2o/s2orc | v3-fos-license | Insights into the antineoplastic mechanism of Chelidonium majus via systems pharmacology approach
Background: The antineoplastic activity of Chelidonium majus has been reported, but its mechanism of action (MoA) remains unclear. The emerging theory of systems pharmacology may be a useful approach to analyze the complicated MoA of this multi-ingredient traditional Chinese medicine (TCM). Methods: We collected the ingredients and related compound-target interactions of C. majus from several databases. The bSDTNBI (balanced substructure-drug-target network-based inference) method was applied to predict each ingredient's targets. Pathway enrichment analysis was subsequently conducted to illustrate the potential MoA, and prognostic genes were identified to predict the types of cancers for whose treatment C. majus might be beneficial. Bioassays and a literature survey were used to validate the in silico results. Results: Systems pharmacology analysis demonstrated that C. majus exerted experimental or putative interactions with 18 cancer-associated pathways, and might specifically act on 13 types of cancers. Chelidonine, sanguinarine, chelerythrine, berberine, and coptisine, which are the predominant components of C. majus, may suppress the cancer genes by regulating the cell cycle, inducing cell apoptosis and inhibiting proliferation. Conclusions: The antineoplastic MoA of C. majus was investigated by a systems pharmacology approach. C. majus exhibited a promising pharmacological effect against cancer, and may consequently be useful material for further drug development. The alkaloids are the key components in C. majus that exhibit anticancer activity.
INTRODUCTION
Traditional Chinese medicine (TCM), mainly consisting of medicinal herbs, has been widely utilized both pharmaceutically and clinically in China and other countries. At present, TCM still holds an important place in Eastern countries. Although TCM is clinically effective for medical treatment [1], its mechanism of action (MoA) is rather ambiguous. One hypothesis is that TCM acts in a sophisticated way, with multiple components affecting multiple targets simultaneously, so studying one specific target in isolation is usually insufficient [2]. Therefore, the therapeutic mechanism of TCM needs to be investigated in a more holistic way.
Chelidonium majus is a medicinal herb belonging to the Papaveraceae family that is distributed across Europe, Asia and North America. While C. majus is utilized as an analgesic in TCM formulas, polypharmacological effects of C. majus have also been discovered. It was reported to exert anti-infectious, anti-inflammatory and antineoplastic effects [3,4]. The anticancer effect of C. majus has been observed in plenty of studies, yet the molecular MoA of C. majus remains unclear.
With the development of systems biology and network pharmacology, systems pharmacology [5,6] has emerged as a novel methodology for understanding the complex mechanism of multi-component TCM. Systems pharmacology treats the human body as a closely integrated and dynamically changing system, which coincides with the TCM theory. Therefore, systems pharmacology appears to be a powerful approach to understanding the MoA of multi-component TCM.
Recently, we developed a series of network-based methods to predict drug-target interactions, namely network-based inference (NBI) [7], substructure-drug-target network-based inference (SDTNBI) [8] and balanced SDTNBI (bSDTNBI) [9]. These methods do not rely on the three-dimensional structures of targets, and hence show great advantages in target prediction. The SDTNBI and bSDTNBI methods can be used to predict potential targets for both old drugs and new chemical entities, with substructures bridging the gap between new compounds and old drugs. Our previous studies have demonstrated that both SDTNBI and bSDTNBI are powerful tools to uncover the pharmacological and toxicological mechanisms of TCM with complex compositions [10,11], and bSDTNBI outperformed SDTNBI.
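To illustrate the core idea behind these network-based methods, the plain NBI recommendation step can be written as a two-round resource diffusion on the bipartite compound-target network. This is a minimal sketch of unweighted NBI only; the balanced bSDTNBI variant additionally incorporates substructure nodes and the tuning parameters α, β and γ:

```python
import numpy as np

def nbi_scores(A):
    """Plain network-based inference on a bipartite network.
    A: (n_compounds, n_targets) binary adjacency matrix of known
    compound-target interactions. Returns a score matrix of the same
    shape; higher scores suggest more likely (putative) interactions."""
    k_c = A.sum(axis=1, keepdims=True)  # compound degrees
    k_t = A.sum(axis=0, keepdims=True)  # target degrees
    k_c[k_c == 0] = 1  # avoid division by zero for isolated nodes
    k_t[k_t == 0] = 1
    # Round 1: each target spreads its resource equally to its compounds;
    # Round 2: each compound redistributes equally among its targets.
    # W[j, m] = sum_i A[i, j] * A[i, m] / (k_c[i] * k_t[m])
    W = (A / k_c).T @ (A / k_t)
    return A @ W.T  # diffused score for every compound-target pair

# Toy example: 3 compounds x 4 targets.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
print(nbi_scores(A).round(3))
```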
In this study, we investigated the anticancer MoA of C. majus with a systems pharmacology approach, together with bioassays and a literature survey. We firstly collected the chemical components and known targets of C. majus from several databases. Then, putative targets were predicted for each ingredient via the bSDTNBI method. Afterwards, pathway enrichment analysis was performed on the basis of experimental and putative compound-target interactions (CTIs). Network analysis was subsequently conducted to unravel the potential mechanism of C. majus, and genetic prognostic factors were analyzed to determine the cancer types on which C. majus might be effective. The key components from C. majus that exert potent anticancer activity were also identified.
RESULTS
The whole workflow of this study was illustrated in Figure 1, including three major steps: data collection and analysis, network construction and analysis, experimental validation.
Collection and analysis of C. majus' ingredients
In total, 44 chemical components of C. majus were obtained from three TCM databases, including TCM Database@Taiwan [12], TCMSP [13] and TCMID [14]. Duplicated molecules were removed. The logP value and molecular weight of each compound were then calculated to build the chemical space (Figure 2). The compounds from C. majus exhibit high superposition with the chemical space of the compounds from the bSDTNBI model we built before [11], except four compounds with logP values higher than 7, which are lupeol acetate, chelidimerine, ergosterol and spinasterol. Most of the ingredients have logP values greater than zero, which implies poor water solubility. The results showed that all of C. majus' ingredients fitted well in the bSDTNBI model, which consists of 1495 herbal ingredients and 2385 drugs.
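A minimal sketch of how such a chemical-space plot can be built, using RDKit rather than the Schrödinger package used by the authors; the SMILES below are well-known placeholder molecules, not actual C. majus ingredients:

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors

# Placeholder structures used only to demonstrate the computation.
smiles_list = {
    "caffeine": "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
}

for name, smi in smiles_list.items():
    mol = Chem.MolFromSmiles(smi)
    if mol is None:
        continue  # skip unparsable structures
    mw = Descriptors.MolWt(mol)   # molecular weight (x-axis of the plot)
    logp = Crippen.MolLogP(mol)   # Crippen logP estimate (y-axis)
    print(f"{name}: MW={mw:.1f}, logP={logp:.2f}")
```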
Pathway enrichment analysis
The 173 known and putative targets of the 44 C. majus ingredients were then uploaded to the Database for Annotation, Visualization and Integrated Discovery (DAVID) v6.8 online server [19,20] for pathway enrichment analysis. The enriched pathways are associated with chemical metabolism, the nervous system, cancer, infection, and inflammation, which is consistent with the fact that C. majus has been clinically used as an analgesic in TCM [21]. Meanwhile, C. majus exerts a significant association with the cancer-related pathways, which indicates that C. majus might exert promising antineoplastic activity.
Besides, C. majus' ingredients interact with 37 targets enriched in 18 crucial cancer-associated pathways; 22 of them were known targets and 13 were putative targets. The 18 cancer-associated pathways included the cell cycle pathway, apoptosis pathway, PI3K-Akt signaling pathway, NF-κB signaling pathway, etc. The pathways were highly correlated with cell division, cell proliferation, apoptosis, metastasis and angiogenesis [22][23][24][25][26]. These pathways were perceived as the potential mechanisms of C. majus' anticancer activity. A network was constructed to illustrate the MoA of C. majus' anticancer activity (Figure 3). From Figure 3, we can see that several components, such as sanguinarine, luteolin, berberine, and chelerythrine, are connected to a large number of targets. There are 38 known CTIs and 118 putative CTIs, and all 44 ingredients interact with the 37 cancer-associated targets.
Cancer biomarker analysis
Although the ingredients from C. majus can modulate several signal pathways that are highly associated with cancers, C. majus might exert therapeutic activities against certain types of cancers in particular. We therefore investigated the known and putative targets whose expression levels are significantly associated with the prognosis of cancer patients, referred to as genetic prognostic factors. The genetic prognostic factors were obtained from the Human Protein Atlas [27]. The known and putative targets were mapped onto the genetic prognostic factors. Among the 173 known and putative targets of C. majus, 22 targets were identified as genetic prognostic factors of 13 types of cancers, including renal cancer, liver cancer, endometrial cancer, pancreatic cancer, head and neck cancer, urothelial cancer, ovarian cancer, colorectal cancer, thyroid cancer, prostate cancer, melanoma, lung cancer, and breast cancer (Table 1). Theoretically, C. majus might exert positive therapeutic activities against these 13 specific types of cancers.
Identification of key anticancer components in C. majus
C. majus mainly consists of several alkaloids including chelidonine, chelerythrine, sanguinarine, coptisine, berberine and their derivatives. Among them, chelidonine, sanguinarine, chelerythrine, berberine and coptisine are the dominant components [28]. Through our study, we found that alkaloids were the dominant antineoplastic components in C. majus. We have observed relevant mechanisms that explain the promising antineoplastic activity that the alkaloids have shown. A tripartite network was constructed to illustrate the function of the five dominant components in the anticancer activity of C. majus (Figure 4). The 18 cancer-associated pathways can be classified into 4 clusters based on their function in the hallmarks of cancer [29,30]. It can be observed from Figure 4 that the five dominant ingredients modulate 19 cancer-associated targets, and subsequently regulate the 18 cancer-associated pathways. These pathways are crucial in cell proliferation, apoptosis, cancer angiogenesis and cancer metastasis [22][23][24][25][26][31][32][33][34][35][36][37][38][39][40][41][42]. From Figure 4, we can also see that cell proliferation was regulated by all five alkaloids, while berberine and sanguinarine can modulate the apoptosis process. Chelerythrine, coptisine and sanguinarine can inhibit cancer metastasis through four pathways, and chelidonine, sanguinarine and chelerythrine can exert antiangiogenesis activity. Considering the abundant content of the five alkaloids in C. majus, chelidonine, sanguinarine, chelerythrine, berberine and coptisine are therefore identified as the key anticancer components of C. majus.
The structures of these alkaloids were presented in Table 2.
Bioassays and literature survey to verify the prediction
Our systems pharmacology study revealed that C. majus is a potent anticancer medicinal herb that inhibits proliferation, metastasis and angiogenesis, as well as induces apoptosis in cancer cells. Plenty of studies have validated our results. Our computational results have shown that C. majus might be effective against several types of cancers (Table 1). Chelerythrine, a main component of C. majus, was reported to induce apoptosis in renal cancer cell lines [43]. Chelidonine has also been reported to exert antiproliferative activity against breast cancer cells [44]. Besides, the potential anti-colorectal cancer activity of sanguinarine, another key component of C. majus, has also been reported; this anticancer activity was mediated by inducing apoptosis in HT-29 human colon cancer cells [45]. We also performed bioassays to validate part of our predictions. Based on our prediction, chelidonine is the key component in C. majus that exerts anticancer activity, but its biological data are limited. Thus, we conducted a bioassay to assess chelidonine's antiproliferative activity on the B16F10 melanoma cell line. With the elevation of chelidonine's concentration, the percentage of B16F10 cells in G2/M phase increased (Figure 5), while the G1 phase and S phase were insignificantly influenced. The results indicated that chelidonine could induce G2/M phase cell cycle arrest in the B16F10 melanoma cell line, which was consistent with our computational analysis that chelidonine and its derivatives may affect the cell cycle pathway.
DISCUSSION
Key anticancer components of C. majus
C. majus consists of multiple ingredients, but there are several components that might be essential to the anticancer activity. We hereby discuss the MoA of C. majus based on the CTIs and gene ontology. The detailed interaction network is presented in Figure 4.
Sanguinarine. Sanguinarine is an antimicrobial and antioxidant agent first discovered in the root of Sanguinaria canadensis L. Recent studies stated that sanguinarine exerts both antiproliferative and apoptotic effects in cancer cell lines [46,47]. Our study indicated that sanguinarine interacts with 18 crucial pathways, which are highly correlated with cell proliferation, metastasis and apoptosis. Although sanguinarine was experimentally acknowledged to interact with classic cancer-related targets, such as the tumor protein 53 (TP53) [32], peroxisome proliferator-activated receptor (PPAR) [38] and p38α (MAPK14) [40], other putative targets were also predicted to be associated with cancers. The overexpression of urokinase, encoded by the gene PLAU, is highly correlated with cancer metastasis [33,34], and inhibition of urokinase can successfully suppress metastasis [31]. In our study, hydroxysanguinarine and dihydrosanguinarine, derivatives of sanguinarine that coexist in C. majus, were predicted to act on urokinase. Thus, C. majus might also prevent tumor invasion and metastasis.
Chelerythrine. Chelerythrine is another substantial alkaloid existing in C. majus that exhibits selective inhibition of protein kinase C [48]. Chelerythrine was reported to exert a cytotoxic effect against human prostate cancer cells [49] and induce cell cycle arrest in the human leukemia HL-60 cell line [50]. In our study, chelerythrine was predicted to interact with urokinase and heat shock protein (HSP) 90β [39], both of which are overexpressed in cancer cells. Therefore, chelerythrine might suppress cancers by interacting with urokinase and HSP 90β. Our study also indicated an interaction between chelerythrine and the c-Jun protein, which is encoded by the proto-oncogene Jun [35]. Chelerythrine might inhibit the c-Jun protein to suppress tumors.
Chelidonine and derivatives. Chelidonine and its derivatives, including homochelidonine, isochelidonine, methoxylchelidonine and 6-oxochelidone, were reported to manifest antiproliferative activities against tumor cells at the cellular level, but the MoA was still uncertain. Moreover, the experimental data on chelidonine at the molecular level were very limited. In our study, we predicted the top 20 targets for chelidonine and its derivatives. Ten CTI pairs, which we believe might explain the potential anticancer mechanism of chelidonine, were identified. It was predicted by the bSDTNBI method that chelidonine and three derivatives interact with androgen and estrogen receptors, which are highly associated with hormone-dependent cancers. Apart from the nuclear receptors, methoxychelidonine was predicted to interact with cyclin-dependent kinase 2 (CDK2), which is highly related to cell division [36]. Chelidonine and its derivatives might induce cell cycle arrest in cancer cells.
Berberine and coptisine. Berberine and coptisine are also predominantly present in C. majus. The two alkaloids have been extensively studied for their anticancer activities. Berberine can suppress cell invasion, induce apoptosis and arrest the cell cycle in many cancer cell types [51]. Similar pharmacological activities were also observed for coptisine against cancer cells [52,53].
Potential antineoplastic mechanisms of C. majus
From our study, we can see that C. majus might exert positive therapeutic activities against 13 types of cancers through several cancer-related pathways. We discuss three major pathways here.
Cell cycle arrest. The cell cycle pathway is the biological process of DNA replication and cell division, which takes place in almost all types of cells. Five targets were enriched in the cell cycle pathway, namely SMAD3, CHEK1, CDK2, GSK3B, and TP53. More than one-third of the compounds (16 out of 44) from C. majus interact with these targets, ten of which are alkaloids. It is interesting that the five targets distribute in the G1 phase and S phase of the cell cycle, so cell division might be arrested in the G2/M phase.
Apoptosis induction. Inducing apoptosis in certain cancer cells has been a hotspot in oncology research and drug development. Our study found that seven compounds interact with four targets in the apoptosis process, including TNF, CASP9, TP53 and NFKB1. C. majus might suppress tumor cells by inducing apoptosis. Eight compounds from C. majus explicitly or putatively interacted with the targets above, inducing programmed cell death in cancer cells. Several studies have verified C. majus' antiproliferative activity on diverse cancer cells via apoptosis induction. In one study, a methanol extract of C. majus exhibited cytotoxicity towards human promyelocytic leukemia HL-60 cell lines in a dose-dependent pattern [54]. Another study showed that C. majus extract exerts both cell cycle arrest and apoptosis activity against six cancer cell lines [55].
Signal transduction. Cell cycle arrest and apoptosis induction were perceived to be the key mechanisms by which C. majus exerts antineoplastic activity against cancer cells. Besides, C. majus might also suppress cancer through other pathways. It has been reported that the PI3K-Akt signaling pathway is activated in the early stage of tumorigenesis [41], and this cascade is considered to be a promising target for cancer treatment [37]. C. majus may act on the PI3K-Akt signaling pathway and affect the downstream signaling pathways, including the NF-κB signaling pathway and VEGF signaling pathway, and thereby suppress angiogenesis and cell proliferation. Moreover, C. majus remarkably influenced the MAPK signaling pathway, including the extracellular-signal regulated kinase (ERK) subfamily and p38 subfamily, subsequently leading to cell cycle arrest and apoptosis. In addition, the focal adhesion pathway, regulation of actin cytoskeleton pathway and epithelial cell signaling pathway are highly correlated with cancer metastasis. C. majus might inhibit metastasis through interacting with these pathways.
Comparison with other similar work
Recently, another systems pharmacology study on the antitumor mechanism of C. majus was reported [56]. Compared with that report, we noticed that they collected 442 known targets of C. majus, more than the 111 known targets that we retrieved in this work. However, we found that the 442 targets came from multiple species; most of them are unrelated to human cancers. Besides, the SysDT method [57] they applied for target prediction requires negative samples to build a predictive model, while negative samples are rarely reported. Our bSDTNBI method, on the other hand, needs no negative samples to build a promising network inference model. In addition, chelidonine, which is the most abundant ingredient in C. majus, was perceived as an inessential component in their report. However, our study elucidated that chelidonine is a promising anticancer ingredient in C. majus by both a computational method and a bioassay.
CONCLUSIONS
In this study, the bSDTNBI method, integrated with pathway enrichment, network analysis and cancer biomarker analysis, was successfully used to decipher the antineoplastic mechanism of C. majus at the molecular level. The systems pharmacology approach proved to be a useful tool in understanding the pharmacological activity of multi-component TCM.
Data collection and preparation
The chemical components of C. majus were collected from three TCM databases, namely TCM Database@Taiwan, TCMSP and TCMID, as well as research literature reporting C. majus' composition. The compounds were converted into canonical SMILES via our in-house script integrated with Python, the OpenBabel toolkit [58] and the Schrödinger 2015 package [59].
The known CTI pairs were collected from four databases, including ChEMBL, BindingDB, the IUPHAR/BPS Guide to PHARMACOLOGY and the PDSP Ki Database. The interaction pairs were extracted only when the data met the following four criteria: (i) the IC50, EC50, Ki, Kd or potency value was ≤10 μM; (ii) the target protein was labelled as "reviewed" in the UniProt Database [60]; (iii) the target protein originated from Homo sapiens; (iv) duplicate CTIs were removed.
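A sketch of this bioactivity filtering step in pandas; the file and column names here are hypothetical placeholders, since each source database uses its own schema:

```python
import pandas as pd

# Hypothetical columns: 'activity_type', 'value_nM', 'uniprot_status',
# 'organism', 'compound_id', 'target_id'.
cti = pd.read_csv("raw_cti_records.csv")

keep_types = {"IC50", "EC50", "Ki", "Kd", "Potency"}
filtered = cti[
    cti["activity_type"].isin(keep_types)      # criterion (i), part 1
    & (cti["value_nM"] <= 10_000)              # <= 10 uM, expressed in nM
    & (cti["uniprot_status"] == "reviewed")    # criterion (ii)
    & (cti["organism"] == "Homo sapiens")      # criterion (iii)
]
# Criterion (iv): drop duplicate compound-target pairs.
filtered = filtered.drop_duplicates(subset=["compound_id", "target_id"])
```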
Chemical space analysis
We conducted a chemical space comparison between chemical ingredients of C. majus and compounds from the bSDTNBI model, to analyze the drug-like properties of those ingredients. The molecular weight and logP values were calculated by Schrödinger 2015 package.
Prediction of putative targets
Our bSDTNBI method was applied for target prediction. Based on our previous study [9], three parameters, α, β and γ, were set to 0.41, 0.06 and -0.51, respectively. Our previous study also showed that the inference method manifested better performance when the Klekota-Roth (KR) fingerprint [61] was selected to generate substructures for each compound.
The canonical SMILES were first converted to the KR fingerprint via PaDEL-Descriptor [62]. The top 20 putative targets were predicted for each compound by the bSDTNBI method. The putative targets were then converted to their official gene symbols for further pathway enrichment analysis.
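Once a score matrix over candidate targets is available, selecting the top 20 putative targets per compound reduces to a ranking step. A sketch, where `scores` stands for the bSDTNBI output matrix and the known-interaction mask excludes already reported pairs:

```python
import numpy as np

def top_k_targets(scores, target_names, known_mask, k=20):
    """Return the k highest-scoring targets per compound,
    excluding interactions that are already known."""
    masked = np.where(known_mask, -np.inf, scores)  # hide known pairs
    order = np.argsort(-masked, axis=1)[:, :k]      # descending sort
    return [[target_names[j] for j in row] for row in order]
```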
Pathway enrichment analysis
After removing duplicates, the known and putative targets were merged for pathway enrichment analysis. The pathway enrichment analysis was conducted by DAVID v6.8 server (https://david.ncifcrf.gov/), an online functional annotation tool. The 173 targets were uploaded onto DAVID v6.8 online server. "Homo sapiens" was selected as background species for gene annotation. "KEGG_PATHWAY" was selected for functional annotation.
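The enrichment statistic behind such annotation tools is essentially a hypergeometric (one-sided Fisher) test. A sketch of the computation for a single pathway, with illustrative gene counts:

```python
from scipy.stats import hypergeom

N = 20000  # background genes in the annotation (illustrative)
K = 130    # genes annotated to the pathway of interest (illustrative)
n = 173    # size of the uploaded target list
x = 9      # observed overlap between the list and the pathway (illustrative)

# P(overlap >= x) under random sampling without replacement.
p_value = hypergeom.sf(x - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3e}")
```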
Compound-target network construction
We constructed a bipartite network (Figure 3) including 44 compounds and 173 targets to illustrate the CTIs of C. majus. We also constructed a tripartite network (Figure 4) to illustrate the relationship between the key anticancer components, targets and cancer-related pathways. The tripartite network was prepared via Cytoscape 3.6.0 [63].
Cell culture and treatments
The melanoma cell line B16F10 was purchased from the Cancer Cell Repository of Shanghai Cell Bank, Shanghai, China, and was maintained in DMEM, supplemented with 10% FBS, 100 units per milliliter penicillin and streptomycin. B16F10 cells were cultured at 37˚C in 5% CO2 and the medium was replaced as required. | 2019-04-11T13:07:48.537Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "ba4b8663d07dfd144a42c0cce468ccf85028a435",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40484-019-0165-x.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ec5abcf5e0d76e8eadaecd9f36b5450d5b8925d9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
49253258 | pes2o/s2orc | v3-fos-license | Fluticasone furoate/Vilanterol 92/22 μg once-a-day vs Beclomethasone dipropionate/Formoterol 100/6 μg b.i.d.: a 12-month comparison of outcomes in mild-to-moderate asthma
Background Bronchial asthma is an inflammatory disease of the airways. Beclomethasone dipropionate/Formoterol (BDP/F) and Fluticasone furoate/Vilanterol (FF/V) are two of the most effective LABA/ICS combinations for managing persistent bronchial asthma. The aim of the study was to compare the outcomes achieved in mild-to-moderate asthma patients assuming BDP/F 100/6 μg b.i.d. (Group A) or FF/V 92/22 μg once-daily (Group B) for 12 months. No head-to-head long-term comparison is available at present. Methods Data were automatically and anonymously obtained from the institutional database: FEV1 % predicted values; the exacerbation and hospitalization rates; days of hospitalization; GP and/or specialist visits; days of inactivity; and courses of systemic steroids and/or antibiotics were recorded at baseline and after 3, 6 and 12 months of both treatments. The overall adherence to treatments was also calculated. The propensity score method was used for matching and comparing the two cohorts of patients; Anova and Wilcoxon tests were used for checking the trends and time-to-time comparisons over the period; statistical significance was accepted for p < 0.05. Results The PS-matching process returned a cohort of 40 group A patients matched with 40 patients of group B, fully comparable for demographics, clinical characteristics, and comorbidities. The improvement in lung function was significant in both groups (p < 0.001), even if it was significantly higher and time-dependent in group B. The mean (±SE) exacerbation rate/patient changed from 0.63 (±0.13) at baseline to 0.53 (±0.12) after three; to 0.58 (±0.13) after six, and to 0.60 (±0.18) after twelve months in group A (p = ns), while from 1.05 (±0.16) at baseline, to 0.28 (±0.07) after three; to 0.33 (±0.08) after six, and to 0.18 (±0.08) after twelve months in group B (p < 0.001), respectively. The mean hospitalization rate/patient changed from 0.25 (±0.07) at baseline to 0.15 (±0.06) after three; to 0.08 (±0.04) after six, and to 0.13 (±0.05) after twelve months in group A (p = ns), while from 0.30 (±0.07) at baseline to 0.08 (±0.04) after three; to 0.10 (±0.05) after six, and to 0.03 (±0.03) after twelve months in group B (p < 0.001), respectively. Also the mean duration of hospitalization and days of inactivity were in favour of FF/V treatment over time (in both cases p < 0.001). GP visits were reduced by both treatments (p < 0.007 in group A and p < 0.001 in group B, respectively), while Specialist visits only dropped during FF/V (p < 0.001). Steroid and antibiotic courses were significantly reduced by both treatments, even if more systematically in group B (p < 0.001 vs p < 0.007, and p < 0.001 vs p < 0.044, respectively). Moreover, changes in all outcomes considered proved time-dependent during the FF/V treatment only, particularly over the second semester. Finally, the overall adherence to treatment was higher by 22 days during FF/V. Conclusions Both ICS/LABA combinations proved effective, even if characterized by different patterns of effectiveness in terms of both lung function and long-term clinical outcomes. Only the once-daily inhalation of combined FF/V 92/22 μg systematically optimized the exacerbation and hospitalization rates in mild-to-moderate asthma, together with all other outcomes over time. The effectiveness of FF/V 92/22 μg once-daily proved progressive and time-dependent over the twelve-month period of the study, and was associated with a higher adherence to treatment.
Background
Bronchial asthma is a chronic inflammatory disease of the airways which is characterized by airflow limitation, usually reversible spontaneously or following therapy, bronchial hyper-responsiveness and accelerated decline in lung function, and the occurrence of exacerbations [1].
The excessive presence and activation of inflammatory cells within the mucosal, muscular and vascular structures of the airways are the underlying mechanisms responsible for asthma, which cause the release of inflammation mediators and the remodeling of the airways. Clinical manifestations of asthma consist of recurrence of cough, dyspnea, wheezing (at rest and/or by physical exertion), and chest tightness [1]. These manifestations can change among individuals and/or in the same subject over time [2].
According to WHO estimates, 235 million people suffer from asthma. The Italian National Institute of Statistics (ISTAT) survey on health and use of health services estimated a prevalence of asthma of 4.2% (female 4.3%, male 4.2%) in Italy in 2012 [3], and the total burden of asthma was estimated at about 5 billion euro per year in Italy [4].
Severity of the disease is evaluated on the basis of frequency of symptoms, the value of forced expiratory volume in 1 s (FEV1), variability of peak expiratory flow (PEF), reversibility of airway obstruction, exacerbation rate, and quality of life. Four levels of asthma severity are recognised: mild intermittent, mild persistent, moderate persistent and severe persistent.
Asthma cannot be cured, but appropriate management may control the disorder and enable people to enjoy a good quality of life [2]. The main goal of asthma therapy is to achieve and maintain the control of the disease in real life.
The therapeutic strategy includes two main categories of drugs: the controller medications, which must be taken regularly to keep the disease under control, and the rescue medications, which relieve acute bronchoconstriction and related symptoms. Since asthma is an inflammatory disease, inhaled corticosteroids (ICS) are the most effective controller medications currently available and represent the first choice of treatment, to which long-acting beta2-agonist bronchodilators (LABA) can be added. The combination of these two categories of drugs is the recommended therapeutic strategy for persistent asthma [1].
Although several studies have investigated both the effectiveness and the safety of these two ICS/LABA combinations individually, no long-term comparison is yet available to our knowledge.
Aim
The aim of the present study was to estimate and compare the outcomes achievable by mild-to-moderate asthma patients assuming BDP/F 100/6 μg b.i.d. to those of patients assuming FF/V 92/22 μg once-a-day over a twelve-month treatment.
Methods
The study was an observational, retrospective analysis of asthmatic patients referred to the Lung Unit of the Specialist Medical Centre (CEMS), Verona, Italy, over the period February-September 2015.
Data were obtained automatically and anonymously from the institutional, UNI EN ISO 9001-2008 validated database, and classic Boolean algebraic formulas were used for selections [11]. Selection criteria were: mild-to-moderate asthma subjects of both genders; > 18 years of age; non-smoker; with normal cognitive function; in a stable respiratory condition (spirometrically assessed) in the last 2 weeks before the study start; assuming BDP/F 100/6 μg b.i.d. (Group A) or FF/V 92/22 μg once-a-day (Group B) for 12 (±2) months. At baseline, sex, age, the absolute and % predicted values of forced expiratory volume in 1 s (FEV1 in litres and FEV1 as % predicted), and comorbidities of the patients were recorded. All patients were followed for 12 (±2) months. FEV1 values; number of relapses and of related hospitalizations; duration of hospitalization (in days); number of general practitioner (GP) and/or specialist visits; days of inactivity; and number of courses of systemic steroids and antibiotics were recorded over the study period at baseline and after 3, 6, and 12 months of both treatments. Baseline values for outcomes corresponded to values assessed over the three months preceding the index date for selections. Furthermore, as both the inhaler devices used are provided with a precise dose counter, the patients' adherence to both treatments was also recorded monthly from the index date (via monthly telephone calls and registration of the remaining doses in the device), and expressed as % inhalations vs the expected number of inhalations at each time of the study.
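A sketch of how the adherence figure can be derived from the dose-counter readings (the function and the numbers below are illustrative, not the authors' actual script):

```python
def adherence_percent(doses_remaining_start, doses_remaining_now,
                      expected_doses):
    """Adherence as % of inhalations actually taken vs expected.
    For BDP/F b.i.d. expected = 2 doses/day; for FF/V once daily = 1."""
    taken = doses_remaining_start - doses_remaining_now
    return 100.0 * taken / expected_doses

# Illustrative example: 90 days of once-daily dosing, counter 120 -> 36.
print(adherence_percent(120, 36, expected_doses=90))  # ~93.3
```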
In order to compare the outcomes achieved in the two groups of patients, the propensity score (PS) matching method [12] was used in STATA [13]. The propensity score matching method summarizes the pretreatment characteristics of each subject into a single-index variable (the propensity score) that makes the matching feasible. In this study, a logit regression was used to estimate the propensity score on the baseline covariates age, sex, FEV1 (%) and presence of comorbidities. Moreover, the propensity score matching was performed without replacement, i.e. each of the 40 patients of Group B was matched with only one patient of Group A.
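A minimal sketch of the propensity-score estimation and 1:1 greedy nearest-neighbour matching without replacement, written in Python with statsmodels rather than the STATA routine used by the authors; the column names are hypothetical:

```python
import pandas as pd
import statsmodels.api as sm

def ps_match(df, treat_col, covariates):
    """Logit propensity score + 1:1 nearest-neighbour matching
    without replacement. Returns matched (treated, control) index pairs."""
    X = sm.add_constant(df[covariates])
    model = sm.Logit(df[treat_col], X).fit(disp=0)
    ps = pd.Series(model.predict(X), index=df.index)
    treated = df.index[df[treat_col] == 1]
    controls = set(df.index[df[treat_col] == 0])
    pairs = []
    for t in treated:
        if not controls:
            break
        best = min(controls, key=lambda c: abs(ps[c] - ps[t]))
        pairs.append((t, best))
        controls.remove(best)  # without replacement: each control used once
    return pairs

# Assumed covariates: age, sex (0/1), FEV1 % predicted, comorbidity (0/1).
# pairs = ps_match(data, "group_B", ["age", "sex", "fev1_pct", "comorbid"])
```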
Data reported at baseline and after three months of both treatments correspond to those already published in a previous study, which was limited to a twelve-week observational period on the same cohort of patients [14]. Data collected from the same patients' cohort after 6 and 12 months were added in the present study in order to complete a four-point trend over 12 months of both treatments.
The analysis of variance was used to check the four-point trend (i.e. baseline and 3, 6, and 12 months) recorded in each treatment group for all outcomes. Finally, the extent of the changes achieved in both treatment groups for each outcome considered was also compared at the same time points by the Wilcoxon test. Statistical significance was accepted for p < 0.05.
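The within-group trend test and the paired between-group comparisons can be sketched with SciPy. A simple one-way ANOVA across time points is shown for brevity; given the repeated measurements on the same patients, a repeated-measures formulation would be the stricter choice. All data below are randomly generated for illustration:

```python
import numpy as np
from scipy.stats import f_oneway, wilcoxon

# Rows: patients; columns: baseline, 3, 6, 12 months (illustrative data).
rng = np.random.default_rng(0)
group_b_fev1 = rng.normal([82, 89, 91, 99], 5, size=(40, 4))

# Four-point trend within one group (one-way ANOVA across time points).
stat_f, p_trend = f_oneway(*group_b_fev1.T)

# Paired time-to-time comparison between matched groups at one visit.
group_a_12m = rng.normal(92, 5, size=40)
stat_w, p_pair = wilcoxon(group_b_fev1[:, 3], group_a_12m)
print(p_trend, p_pair)
```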
The study was approved by the R&CG Ethical Committee during the session officially held on January 11th, 2016. The patients' consent to participate was not required because data were obtained automatically and anonymously.
Results
Clinical data of 77 patients treated with BDP/F 100/6 μg b.i.d. (Group A) and of 40 patients treated with FF/V 92/22 μg once-a-day (Group B) were obtained. Characteristics of the entire cohort and of the PS-matched cohort at baseline are summarized in Table 1. At baseline, male prevalence was 33.8% in group A and 37.5% in group B. Mean (±SE) age was 51.9 (±1.60) in group A, and 50.2 (±2.43) in group B. Mean (±SE) FEV1 in litres (L) was 2.4 (±0.09) in group A, and 2.5 (±0.12) in group B. Mean (±SE) FEV1 % pred. was 82.2% (±1.14) and 81.9% (±2.00) in group A and B, respectively. Patients with perennial allergy were 61.0% (47/77) in group A, and 62.5% (25/40) in group B, while those with seasonal allergy were 39.0% (30/77) in group A, and 37.5% (15/40) in group B, respectively. The percentage of patients with established comorbidities was 37.7% in group A, and 42.5% in group B. The following comorbid diseases were equally reported in both groups: arterial hypertension, kyphoscoliosis, obesity, severe depression, AIDS, diabetes mellitus, severe osteoporosis, and ischemic heart disease. In particular, arterial hypertension was the most prevalent comorbidity in both groups: 12.5% in group A, and 10.4% in group B, respectively.
The PS-matching process, designed as matching on the baseline covariates, gender, age, FEV 1 and comorbidities, returned a cohort of 40 group A patients of the entire cohort matched with 40 patients of group B. The demographics and clinical characteristics of the PS-matched cohort at the baseline are described in Table 1. The male prevalence in group A was the same as in group B (37.5%). Mean age (±SE) was 49.4 (±2.05) in group A and 50.2 (±2.43) in group B, respectively. Mean (±SE) FEV 1 % pred. was 82.4% (±1.63) in group A and 81.9% (±2.00) in group B. The presence of comorbidities was balanced (42.5%) in both groups (Table 1). Table 2 summarizes all changes calculated for each variable over the study period in each treatment group. Mean FEV 1 % pred. changed from 82.40% (±1.63) at baseline to 87.08% (±1.58) after 3 months; to 89.98% after 6 months, and to 91.88% after 12 months (Anova = p < 0.001) in group A, while in group B, mean (±SE) FEV 1 % pred. changed from 81.93% (±2.00) at baseline, to 89.50% after 3, to 90.9% after 6, and to 99.1% after 12 months of treatment (Anova = p > 0.001). Even if the overall trends of lung function proved significantly improved with both treatments, treatment B induced a FEV 1 % pred. improvement in the second semester of the study which was significantly higher than that obtained with treatment A (t test: p < 0.01) (Fig. 1).
Also in this case, even if the overall trends of the hospitalization rates proved significantly improved with both treatments, the reduction obtained in the second semester of treatment B was really substantial and significantly higher (t test: p < 0.01) (Fig. 3).
Fig. 3 Changes in mean n. of hospitalizations/patient over 12 months. Fig. 4 Changes in mean duration of hospitalization/patient over 12 months.
The mean number of Specialist visits per patient was 0.68 (0.13) at baseline; 0.68 (0.11) after 3; 0.65 (0.08) after 6, and 1.05 (0.13) after 12 months (Anova: p = ns), while the corresponding number in group B was 0.70 (0.16) at baseline; 0.28 (0.07) after 3; 0.35 (0.08) after 6, and 0.23 (0.08) after 12 months (Anova: p < 0.009), respectively. The difference was in favour of group B for all three follow-up times (paired t test: p < 0.002; p < 0.005, and p < 0.001, respectively). The difference between the treatments proved highly significant in favour of group B over the entire study period (Fig. 7).
Finally, the mean number of courses of antibiotics per patient was 0.85 (0.16) at baseline; 0.63 (1.10) after 3; 0.43 (0.12) after 6, and 0.40 (0.11) after 12 months (Anova: p = 0.047), while the corresponding number in group B was 1.03 (0.13) at baseline; 0.35 (0.08) after 3; 0.33 (0.08) after 6, and 0.15 (0.08) after 12 months (Anova: p < 0.001), respectively (Fig. 9). The adherence to prescribed treatments, calculated in terms of expected doses over the period (stemming from the index date), was 82.2% at three; 81.7% at six, and 80.8% at twelve months in group A, while 93.3% at three; 91.7% at six, and 90.6% at twelve months in group B, respectively. In other words, an average of 132 doses (approximately corresponding to 66 days of treatment) was skipped in group A, and 44 doses (corresponding to 44 days of treatment) in group B.
No relevant side effects were reported in either group of patients. Transient hoarseness was recorded in five patients of group B and in three patients of group A, while transient tachycardia was recorded in two patients of group A and in one patient of group B.
Discussion
A variable degree of airway obstruction, related to a variable extent of underlying airway inflammation, usually characterizes bronchial asthma. In persistent mild-to-moderate asthma, the results of different treatments can be affected by several factors, such as: the pharmacological peculiarities of the molecules prescribed; the daily regimen (namely the frequency of inhalations required for a twenty-four-hour efficacy); the usability of the inhaler devices adopted for drug delivery; the patient's adherence to treatment; the existence of comorbidities; the cost of treatment; and the indices considered in the study (such as lung function only, rather than clinical outcomes).
The present observational, retrospective, matched study, which aimed to compare the outcomes achievable in mild-to-moderate asthma patients assuming FF/V once-daily or BDP/F for 12 months, represents, to our knowledge, the very first head-to-head comparison between these two LABA/ICS combinations in asthma in which clinical outcomes are assessed over a long-term period.
Actually, in a previous pharmaco-economic study, a short-term cost analysis carried out over twelve weeks suggested the superiority of FF/V 92/22 μg once-daily via Ellipta when compared to BDP/F b.i.d. via Nexthaler [14]. This superiority in mild-to-moderate asthma was related to a significantly higher improvement in lung function together with a significant reduction of GP and Specialist visits, and of extra medication, thus indirectly confirming a better control of asthma in daily life. A 50% drop in hospitalization cost was also observed in the same study, even if this tendency did not reach statistical significance due to the dispersion of data occurring during the too limited period of investigation [14].
In terms of lung function, both treatments were confirmed effective in significantly improving FEV1 % predicted also in the present study. The net improvement achieved in group B proved once again significantly higher, but also progressive, according to a time-dependent trend, particularly over the second six months of treatment.
A novel piece of evidence came out of the present study: beyond lung function, all main clinical outcomes proved clearly in favour of FF/V once-daily when compared to BDP/F b.i.d. Actually, the long-term treatment likely contributed to enhance and magnify the extent of the clinical convenience of FF/V, previously only suggested for a short-term therapeutic strategy [14]. In particular, the dramatic reduction of exacerbation and hospitalization rates, patients' duration of inactivity, the frequency of referral to the GP and the Specialist, and the number of courses of oral steroids and antibiotics represents a crucial confirmation of the more substantial and more effective asthma control achievable with long-term FF/V once-daily in real life.
Even if the two compared ICS/LABA combinations are active and regarded as equally effective in persistent asthma [5][6][7][8][9][10], the present data nonetheless emphasize that they are characterized by different profiles concerning their long-term clinical efficacy in mild-to-moderate asthma. Actually, the systematic trend of a progressive, time-dependent improvement of all main clinical outcomes highlights how FF/V once-daily should be regarded as the more convenient strategy for longer-lasting treatments.
On the other hand, Formoterol and Vilanterol, as well as Beclomethasone dipropionate and Fluticasone furoate, are characterized by different pharmacokinetics and pharmacodynamics [15][16][17]. The corresponding fixed combinations obviously reflect these pharmacological patterns, which underpin different aspects of their efficacy and effectiveness also in clinical terms. In particular, the higher selectivity and persistency on steroid receptors in favour of Fluticasone furoate, together with the higher selectivity and persistency on β2-receptors in favour of Vilanterol, represent crucial aspects from this point of view [15][16][17]. Actually, differently from Formoterol and Beclomethasone dipropionate, which require twice-daily administration [18], these are the peculiarities which allow the long-lasting therapeutic action of the FF/V combination.
The once-daily assumption has been supposed to foster the patients' adherence during long-term therapeutic strategies [7,19,20]. The substantial difference between treatments in terms of number of skipped doses (which corresponds to skipped days of treatment) over the twelve-month observational period, as assessed in the present study, strongly supports this hypothesis. In other words, when compared to BDP/F b.i.d., FF/V once-daily allowed a longer adherence by 22 days in real life, which likely contributes per se to explain the better and time-dependent asthma control achievable with this treatment. Of note, this result should be regarded as independent of the inhaler devices used by patients (namely, Nexthaler and Ellipta), as both are characterized by quite similar handling and an equal number of steps needed for inhalation actuation [21,22].
Finally, hospitalization and exacerbation rates, as well as patients' absenteeism and medical referrals, represent the main components of annual asthma costs [20][21][22]. The dramatic and progressive drop in these four indices obtained over the twelve-month treatment with FF/V strongly supports and emphasizes the economic convenience of this strategy when compared to that of BDP/F for the long-term management of mild-to-moderate asthma.
The present study has some limits. One is represented by the relatively small number of subjects included, even if well matched by means of the propensity score. Moreover, the study consists of a mono-centric investigation, even though patients were from all Italian regions and anonymously selected. On the other hand, some points of strength consist of the automatic selection of subjects from a unique database, associated with the use of the propensity score matching method, which assures a strictly objective system for comparison between the two subject samples. Finally, the patients' adherence to treatments was not calculated according to the usual criteria adopted in clinical trials. Anyway, the registration of the number of doses remaining monthly in both devices (each provided with a precise dose counter) rendered the information collected in real life reasonably reliable and acceptable.
Conclusions
The present study showed that the once-daily inhalation of combined Fluticasone furoate/Vilanterol 92/22 μg for twelve months allows an enhanced and time-dependent efficacy in terms of lung function and of all main clinical outcomes when compared to BDP/F 100/6 μg b.i.d. in mild-to-moderate asthma. Stemming from the extent of the systematic improvement of clinical outcomes achieved with the FF/V 92/22 μg treatment, the corresponding long-term economic consequences can easily be estimated quantitatively.
Availability of data and materials
Data are fully available on written request to the authors.
Authors' contributions
DRW: study design and statistics; TP: check of all clinical data and outcomes; BL: anonymous selections and extraction from the database, Data Bank construction, and numerical models. All authors read and approved the final manuscript.
Ethics approval and consent to participate
The study was approved by the R&CG Ethical Committee during the session officially held on January 11th, 2016. The consent to participate is not applicable as data were obtained automatically and anonymously from the institutional, UNI EN ISO 9001-2008 validated database, and classic Boolean algebraic formulas were used for selections. | 2018-06-17T23:35:06.601Z | 2018-06-15T00:00:00.000 | {
"year": 2018,
"sha1": "5e13c9b9c37ee2f7b0cacf88f42406b536f559fb",
"oa_license": "CCBY",
"oa_url": "https://mrmjournal.biomedcentral.com/track/pdf/10.1186/s40248-018-0131-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "21f0ad8d5807e66a5c7b580ddfc76a299d4f4058",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221116634 | pes2o/s2orc | v3-fos-license | Polish Adaptation of the Self-Care of Diabetes Inventory (SCODI)
Purpose As the guidelines indicate, education and self-care in diabetic patients are essential elements in the treatment process. The efficient evaluation of the level of self-care will enable the patient's needs to be identified and education and care to be optimised. The Self-Care of Diabetes Inventory (SCODI) is a valid and reliable tool which can measure self-care behaviours among patients with diabetes. The purpose of this study was to assess the reliability of the Polish version of the SCODI. Methods The World Health Organization (WHO) translation protocol was used for the translation and cultural adaptation of the English version of the SCODI into Polish. The study included 276 Polish patients with type 2 diabetes (mean age 61.28±12.02 years). There were 145 men and 131 women in the study. The internal consistency of the SCODI was evaluated using Cronbach's Alpha. Results The original four-factor tool structure was confirmed. The mean overall levels of self-care in the four SCODI scales in the study group were self-care maintenance (67.66 pts; SD=18.55), self-care monitoring (61.81 pts; SD=24.94), self-care management (54.65 pts; SD=22.98) and self-care confidence (62.86 pts; SD=20.87). The item-total correlations were positive, so there was no need to change the scales of any of the questions. The overall consistencies for the individual scales were assessed using Cronbach's Alpha: self-care maintenance (0.759), self-care monitoring (0.741), self-care management (0.695) and self-care confidence (0.932). In the exploratory factor analysis, the factor loadings of the individual items ranged from 0.137 to 0.886 and, with two exceptions (questions number 23 and 32), were statistically significant (p<0.05). Conclusion The SCODI questionnaire has acceptable internal consistency and reliability in assessing self-care among diabetic patients in the Polish population. This reliable research tool can be used in planned studies of Polish patients with diabetes.
Introduction
Although diabetes is a non-communicable disease, it has been recognized by the United Nations as an epidemic due to its rapid spread. 1 In the 21st century, there has been a rapid increase in the incidence of diabetes. According to estimates from the World Health Organization (WHO), diabetes, cancer and respiratory and circulatory diseases are responsible for 82% of all non-communicable disease deaths worldwide. As reported by the International Diabetes Federation (IDF) Diabetes Atlas data, diabetes affects 463 million people, which represents 9.3% of the world's population between 20 and 79 years of age. This disease changes the functioning of patients and their families in everyday life. It should be noted that there will be an increase in the number of patients living with diabetes to 578 million in 2030 and up to 700 million in 2045. 2 The latest IDF estimates indicate that there are currently 52 million adults in Europe (20-79 years old) with diabetes mellitus; type 2 diabetes mellitus (T2DM) is the most common form of diabetes. The prevalence of diabetes is 7.9%. Almost half of the patients with diabetes are of working age (under 60), and over 17 million are not aware of their illness. Poland has two million adults with diabetes, which places it among the countries that have an average prevalence of diabetes (7.1%). More than half of the patients in Poland are elderly people between the ages of 60 and 79. 3 Modern diabetes therapy goes beyond the traditional understanding of chronic disease treatment. It includes early prevention, identification and monitoring of risk factors and education. A conscious patient who understands his or her role in the therapeutic process becomes an active participant in the fight against the disease. 4 The IDF Clinical Practice Recommendations for Managing Type 2 Diabetes in Primary Care guidelines of 2017 emphasise that diabetes education and self-care are the pillars of diabetes management. 5 Nowadays, patient-centred care with self-care is an international problem which needs multidisciplinary team collaboration. Self-care is an important issue for prevention and management of T2DM. 6 Self-care of a diabetic patient is defined as a continuous process of knowledge and skills based on the patient's awareness to be an active and knowledgeable participant in the treatment process. Self-care in diabetes assumes that the patient will practise behaviours that include an appropriate diet, avoidance of high fat intake, increased physical activity, glycaemic monitoring and regular foot evaluation. 7 A high level of preparedness for self-care and decision-making by the patient and/or his family will be beneficial in reducing the number of hospitalisations. 8 Good self-care in diabetes patients can improve their quality of life. 9 Diabetes education, which increases the competence of patients and their families in the fight against the disease, also aims to prepare them for cooperation in the process of treatment, care and self-care. Proper education increases the patient's mental resilience to stress, builds his independence, motivates him to take on the difficulties associated with therapy, eliminates fear of the future and prevents anxiety, loss and depression. 10 The outcome of effective diabetes education is that the patient takes responsibility for the treatment of his illness and makes appropriate therapeutic decisions. 11
Patient education can be conducted individually, as an integral part of the patient's contacts with members of the therapeutic team, or in groups. Small and, if possible, homogeneous groups are preferable, and the content and methods should be individually adjusted to the patient's needs and abilities. 12 Diabetes education also increases the patient's readiness to take pro-health actions 13 and is connected with improved compliance with medical recommendations concerning regular drug intake, proper diet and physical activity, implementation of foot self-care, and measurement of glycaemia, blood pressure, body mass and blood laboratory parameters. It also contributes to better collaboration with the physician. 14,15 Therefore, we may conclude that diabetes education is important, but it must be transferred into action, which means into self-care activities, to be fully beneficial for the patient. Self-care activities refer to certain behaviours, such as following a diet plan, avoiding high fat foods, increasing physical activity, self-monitoring of blood glucose, taking medications and solving problems as they occur. 16 Good health behaviour with regard to self-care influences adequate self-care practices and reduces cardiovascular risk, hospitalisations and disease-related complications, while also improving quality of life. 17,18 In recent years, it has been shown that there are many single and multidimensional tools for the assessment of self-care behaviour in people with T2DM. It is worth noting that the psychometric profiles of self-care tools still need to be evaluated for their usefulness and effectiveness of implementation in everyday clinical practice. 19,20 It is important to have a tool that measures self-care behaviours of diabetic patients because the assessment of self-care in this group of patients is essential. Therefore, we chose the Self-Care of Diabetes Inventory (SCODI), which was developed based on the Middle-Range Theory of Self-Care of Chronic Illness, 21 for adaptation into Polish. We decided to use this tool because, based on the development process, it is clinically up to date, was proven to be a valid and reliable tool to measure self-care in diabetic patients and can be useful for both clinicians and researchers. 22 A recent multicentre cross-sectional study concerning the test invariance of the SCODI questionnaire between Italy and the United States showed that this tool can be used in other countries because it appears to be psychometrically reliable. 23 We decided to carry out a systematic evaluation of the SCODI instrument to assess its psychometric properties. Therefore, the purpose of this study was to adapt the language of the SCODI questionnaire and assess its psychometric performance in the Polish population.
Settings and Participants
The study was conducted between March 2018 and March 2019 in the Wroclaw University Hospital in Poland. A sample of 276 patients with type 2 diabetes (mean age 61.28 years) was recruited from 373 eligible patients and enrolled in the study, as shown in the flow diagram (Figure 1). Based on data provided by the Polish National Health Fund and the Diabetes Coalition, it is believed that there are approximately 3.5 million people living with diabetes in Poland, which represents 9% of the total population. T2DM has been diagnosed in two million people with diabetes, which is 6% of the total population. Considering that 6% of the population in Poland suffers from T2DM (assuming a maximum error of 3% and a confidence level of 90%), the minimum sample size was estimated to be 163 people. Therefore, the sample size used in the study was considered sufficient.
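As a rough cross-check, a minimal Python sketch of the standard single-proportion sample-size formula is shown below; the exact rounding and correction conventions used by the authors are not stated, so the helper and its inputs are illustrative assumptions only.

```python
import math

def min_sample_size(p, margin, z):
    """Cochran's sample-size formula for estimating a single proportion."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Assumed inputs from the text: T2DM prevalence p = 0.06,
# maximum error 3%, 90% confidence level (z ~ 1.645).
print(min_sample_size(p=0.06, margin=0.03, z=1.645))
# ~170 with these conventions; the published minimum of 163 presumably
# reflects slightly different rounding or correction conventions.
```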
Eligibility Criteria
The criteria for inclusion in the study were: consent to participate in the study; a confirmed diagnosis of type 2 diabetes according to guideline criteria; and age >18 years. The exclusion criteria were as follows: time from the diagnosis of diabetes <1 year; documented cognitive impairment; and lack of consent to participate in the study.
Ethical Considerations
The study was approved by the Bioethics Committee of the Wroclaw Medical University, Poland (approval no. KB-621/2018). All patients provided informed consent, and were informed that they could withdraw from the study at any time. The study protocol was carried out in accordance with the tenets of the Declaration of Helsinki and Good Clinical Practice guidelines.
Translation Protocol
We followed the WHO 24 translation protocol, which has a number of steps that include a forward translation, a panel of experts, a back translation, pretesting and creation of the final version. In the present study, forward translation of the SCODI was performed independently by two bilingual persons. Then a panel of experts (one nursing educator in diabetology, one nurse from a diabetology ward and one medical doctor) reviewed the translation. This group work was moderated by the authors of this adaptation. The team discussed the discrepancies between the original version of the questionnaire and the forward translation, and they reached a consensus.
Pretesting was then performed in a focus group using interviews. Finally, the back translation was conducted by a bilingual person whose native language was English, and the expert panel, together with the translator, discussed the discrepancies between the original version and the back translation until consensus was reached.
Research Instrument
The SCODI was developed according to the Middle-Range Theory of Self-Care of Chronic Illness. 22 It has proven itself to be valid by the use of external indicators, such as glycated haemoglobin and the presence of diabetes complications. 22 It was also tested for the invariance of the measurement model cross-nationally between Italy and the USA. 23 The SCODI is composed of four scales measuring self-care maintenance, self-care monitoring, self-care management and self-care confidence. 22 Each scale has a 5-point Likert structure and is scored 0-100, where higher scores represent better self-care. Each scale measures a specific part of the self-care process with good or high reliability. 22 In the original version, self-care maintenance comprises health promoting exercise behaviours, disease prevention behaviours, health promoting behaviours and illness related behaviours. Self-care monitoring comprises body listening and symptom recognition. Self-care management comprises autonomous self-care management behaviours and consultative self-care management behaviours. Self-care confidence comprises task-specific self-care confidence and persistence of self-care confidence. 22 A cut-off score of 70 for each scale (ie self-care maintenance, monitoring, management and confidence) has been used by previous studies to discriminate between adequate and inadequate self-care. 25
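For orientation, the Python sketch below illustrates the kind of 0-100 standardisation such Likert scales typically use. The authoritative scoring algorithm is the one defined by the SCODI developers; the helper below, and the linear transformation it applies, is a hypothetical illustration only.

```python
def scale_score(item_responses, item_min=1, item_max=5):
    """Rescale a sum of Likert items linearly to 0-100 (higher = better self-care).

    Hypothetical helper: shown only to illustrate a 0-100 mapping,
    not the official SCODI scoring rules.
    """
    k = len(item_responses)
    raw = sum(item_responses)
    return 100 * (raw - k * item_min) / (k * (item_max - item_min))

maintenance = scale_score([4, 5, 3, 4, 5, 4, 3, 5, 4, 4, 5, 3])  # 12 items -> ~77.1
adequate = maintenance >= 70  # cut-off used in previous studies
```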
Data Analysis
Internal consistency was assessed using Cronbach's alpha, item-total correlation and confirmatory factor analysis (CFA). In the latter, the fit criteria of Hu and Bentler were used to assess the model fit. Since the SCODI items are expressed on an ordinal scale and not on a continuous scale, the parameters were estimated using the Diagonally Weighted Least Squares (DWLS) method.
In the analysis, a significance level of 0.05 was assumed; therefore, all p values below 0.05 were interpreted as indicating statistically significant associations. The analysis was performed in R, version 3.6.0.
Characteristics of the Study Group
The characteristics of the participants are presented in Table 1. The study group included 145 (52.54%) men and 131 (47.46%) women for a total of 276 participants. The mean age was 61.28±12.02 years, and the mean disease duration was 10.95±8.47 years. The majority of the participants had a middle school (37.68%) or high school education (35.51%) and were in a relationship (64.13%). The evaluation of body mass index (BMI) found that the most common BMI was in the 25.0-29.99 (38.77%) range, and the percentages in the 18.5-24.99 (30.80%) and ≥ 30.0 (30.43%) ranges were comparable.
Cronbach's Alpha and Discriminatory Powers
Cronbach's alpha values for the individual scales were: self-care maintenance, 0.759; self-care monitoring, 0.741; self-care management, 0.695; and self-care confidence, 0.932. Table 3 includes the discriminatory powers (item-total correlations), which are all positive, so there was no need to rescale any of the questions. The item-level alpha reliabilities of the SCODI self-care scales are also presented in Table 3.
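As a hedged illustration of the two reliability statistics reported above, a self-contained Python sketch (the function and variable names are assumptions, not the authors' code) could look as follows.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents x items matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

def item_total_correlations(items: pd.DataFrame) -> pd.Series:
    """Corrected item-total correlation: each item vs. the sum of the other items."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1))
         for col in items.columns}
    )
```

Applied to each of the four SCODI scales in turn, these two functions would yield the per-scale alphas and the discriminatory powers of the kind reported in Table 3.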
Confirmatory Factor Analysis (CFA)
The items of our questionnaire are expressed on an ordinal scale and not on a continuous scale, so the Diagonally Weighted Least Squares method was used (Table 4).
For this structure, satisfactory values of the fit indices standardized root mean square residual (SRMR), root mean square error of approximation (RMSEA), comparative fit index (CFI) and Tucker-Lewis index (TLI) were obtained. This allowed us to confirm the original four-factor structure of the tool. The results are presented in Table 5.
The loadings of the individual items ranged from 0.137 to 0.886 and, with two exceptions, were statistically significant (p<0.05). The detailed characteristics are presented in Table 6.
The statistical structure of each SCODI question is presented in Table 7. It shows the frequency of answers to each question.
Discussion
The purpose of this study was to adapt and test the psychometric properties of a Polish version of the SCODI questionnaire for patients with diabetes. This questionnaire was developed based on the Middle-Range Theory of Self-Care of Chronic Illness, and the instrument measures self-care maintenance, self-care monitoring, self-care management and self-care confidence. Its original version demonstrated content validity, reliability and construct validity. It was also shown to have generalisability of the measurement model. 22 It should be noted that a recently published study has shown good validity and reliability in measuring self-care using a Farsi version of the SCODI. 26

Nowadays, due to the rising prevalence of diabetes, the importance of self-care has become more relevant to good disease management. Moreover, the main principle of self-care in diabetes is patient-centred care. A good relationship between the patient and the therapeutic team must be maintained to achieve the goals of self-care management. Appropriate preparation for self-care behaviours, such as healthy eating, physical activity, blood glucose monitoring, adherence to medications, satisfactory problem-solving skills, healthy coping skills and reducing risky behaviours, can predict greater patient involvement in the therapy process and better outcomes. 27 However, in Poland, studies that would clearly demonstrate the impact of self-care on management in diabetes patients are still lacking. The studies that have been conducted concern the assessment of only selected variables which play a role in disease management, such as self-monitoring of blood glucose, blood pressure and foot self-care. 28,29 The Polish adaptation of the SCODI may broaden the research area in the evaluation of self-care in patients living with diabetes. This is significant because a lack of systematic self-care assessment may contribute to a passive attitude in patients and thus cause low effectiveness of disease management. The dimensions and items of the questionnaire are as follows:

Dimension (items)
Self-care maintenance (items 1-12)
Self-care monitoring (items 13-20)
Self-care management (items 21-29)
Self-care confidence (items 30-40)

The implementation of a multi-faceted level of self-care is particularly difficult in patients with diabetes and comorbidities. Another notable aspect is that diabetic patients are exposed to polypharmacy as a result of multimorbidity, age-related pharmacokinetic variability, cognitive impairment, use of over-the-counter medications or inability to control their diseases. 30 The complexity of the problem makes it essential to intensify efforts to identify elderly patients who may not follow the recommendations and require more attention. Innovative and intensive self-care should be implemented in the daily treatment practice process to improve diabetes patients' outcomes and quality of life. 31 Good management and implementation of self-care offers a number of benefits, such as improved well-being and decreased morbidity, mortality and health care expenditures. 32 Because every country has its own cultural and social behaviours, perceptions of self-care practices can differ. In many cultures, a holistic approach is practised, but there can also be cultural differences which may have an influence on specific self-care activities. 33 Nowadays, there is still a need to conduct research and recognise the burden of cultural differences in self-care. It should be remarked that both the Polish and Italian versions show the same factor loadings for self-care, which may indicate that these two populations have comparable views and approaches to self-care.
The SCODI proved to be a valid measure of self-care in our reference sample, which consisted of 276 patients with type 2 diabetes. When adapting the tool, it was checked whether the original scales matched the Polish language version. CFA was performed. The original SCODI structure has four factors, and for this structure, satisfactory values of the fit indices of SRMR, RMSEA, CFI and TLI were obtained. In the Polish study, we also confirmed that this structure was as satisfactory as it was in the Italian and American versions. The loadings of the individual items ranged from 0.137 to 0.886 and, with two exceptions, were statistically significant (p<0.05).
We interpreted the loadings as correlations of the items with the subscale to which they belong. Their significance means that almost all the items correlated significantly with the result of the subscale tested (CFA-implied item-total correlations), which means that the good results of the original scale were confirmed. Therefore, we may assume that the SCODI is characterised by good construct validity, reliability and acceptable internal consistency. The internal consistency of the adapted version of the scale was determined by means of Cronbach's alpha. Some studies have suggested that the internal consistency of items should be classified as follows: values ≥ 0.9 as excellent, ≥ 0.8 as good, ≥ 0.7 as acceptable, ≥ 0.6 as questionable, ≥ 0.5 as poor, and <0.5 as unacceptable. However, there is actually no lower limit to the coefficient.
Conclusions
Our study revealed that the instrument tested is valid and reproducible for the assessment of self-care in Polish patients with type 2 diabetes and could be useful to both clinicians and researchers. The SCODI is a simple research tool which can be used in standardised daily clinical practice to assess the self-care behaviour of patients with diabetes mellitus. The evaluation of self-care will allow care to be optimised and will support tailored educational interventions. The outcome obtained using this questionnaire may be helpful in identifying negative determinants while planning the self-care process. Moreover, using this instrument in everyday practice may improve patients' self-care and their quality of life.
Implications for Practice
The SCODI is a simple research tool that can be used in clinical practice or in research to evaluate the self-care capabilities of the diabetes population. The translation of this tool into 10 languages may be crucial for comparing how self-care maintenance, monitoring, management and confidence are measured in other cultures. The SCODI questionnaire can improve the effectiveness of educational activities undertaken by multidisciplinary teams in cross-cultural research. | 2020-08-06T09:09:02.855Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "f2dbf16b5dc7dbdd2c1d7da0a06d4de1d424de72",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=60225",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c81fe00e99fc307b895622b3f27c5f672ef3db46",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16394622 | pes2o/s2orc | v3-fos-license | Liver Cirrhosis/Severe Fibrosis Is a Risk Factor for Anastomotic Leakage after Colorectal Surgery
Purpose. Liver cirrhosis is associated with high perioperative morbidity/mortality. This retrospective study determines whether liver cirrhosis represents a risk factor for anastomotic leakage after colonic anastomosis or not. Methods. Based on a prospective database with all consecutive colorectal resections performed at the authors' institution from 07/2002 to 07/2012 (n = 2104), all colonic and rectal anastomoses were identified (n = 1875). A temporary loop ileostomy was constructed in 257 cases (13.7%), either due to a Mannheim Peritonitis Index > 29 or a rectal anastomosis below 6 cm from the anal verge. More than one-third of the patients (n = 691) had a postoperative contrast enema, either on the occasion of another study or prior to closure of the ileostomy. The presence of liver cirrhosis and the development of anastomotic leakage were assessed by chart review. Results. The overall anastomotic leakage rate was 2.7% (50/1875). In patients with cirrhosis/severe fibrosis, the anastomotic leakage rate was 12.5% (3/24), while it was only 2.5% (47/1851) in those without (p = 0.024). The difference remained statistically significant after correction for confounding factors by multivariate analysis. Conclusion. Patients with liver cirrhosis/severe fibrosis have an increased risk of leakage after colonic anastomosis.
Introduction
How to deal with patients with known or unexpected liver cirrhosis remains a major challenge in colorectal surgery as liver cirrhosis bears a high risk of postoperative complications [1][2][3][4][5]. Although the morbidity and mortality in these patients have been studied [6][7][8][9][10], surprisingly little is known regarding the relation between liver cirrhosis and anastomotic leakage as the most feared complication after colorectal surgery.
Although the healing of an intestinal anastomosis has often been studied, it remains poorly understood [11]. There are several disturbances that might influence anastomotic healing in patients with liver cirrhosis: first, portal hypertension with impaired regulation of splanchnic blood flow [12]; second, the protein metabolism disorder [13]; and, third, the immune dysfunction syndrome, especially in the presence of ascites [14]. Indeed, factors correlated with adverse surgical outcome in patients with liver cirrhosis are higher intraoperative blood loss, reflecting portal hypertension; hypoalbuminemia, reflecting the protein metabolism disorder; and the presence of ascites [15]. Furthermore, the severity of liver disease correlates with postoperative morbidity and mortality [16].
To our knowledge only animal models have shown a relationship between liver cirrhosis and anastomotic leakage so far [17]. This study determines, for the first time, whether liver cirrhosis is a risk factor for anastomotic leakage after colorectal surgery or not.
Methods
Based on an existing prospective colorectal database with all consecutive colorectal resections, all colonic and rectal anastomoses performed at the authors' institution from 07/2002 to 07/2012 were retrospectively identified (n = 1875). Regardless of the localization of the anastomosis, an end-to-end anastomosis was always performed. In the case of rectal anastomosis, a double stapling technique using a circular stapler with a diameter of 33 mm was used, and coloanal anastomosis was done with a single-layered single-stitch suture with polydioxanone USP 5-0, while in the case of colonic anastomosis a hand-sewn anastomosis was done using a continuous double-layered suture with polydioxanone USP 5-0. The policy on when to construct a temporary loop ileostomy (n = 257 cases, 13.7%) was as follows: a Mannheim Peritonitis Index [18] higher than 29 and/or a low rectal anastomosis up to 6 cm from the anal verge.
The diagnosis of liver cirrhosis/severe fibrosis was based on the finding of a typical nodular surface of the liver during surgery and/or on liver biopsy. Histological staging of fibrosis/cirrhosis was performed using the Ishak scoring system, which defines cirrhosis as stages 5 and 6 and advanced fibrosis as stage 4 [19].
The development of anastomotic leakage and the risk factors were retrospectively assessed based on chart review.
A planned contrast enema was done in more than one-third of the patients (n = 691), either on the occasion of another study (prospective) or prior to closure of a temporary loop ileostomy. In the remaining patients, a CT scan with contrast enema was only done in the presence of signs of anastomotic leakage, such as abdominal pain, fever, and elevation of inflammation markers (e.g., CRP elevation after postoperative day 3).
An anastomotic leak was defined as the extravasation of water-soluble contrast in the contrast enema or CT scan, a detected fluid collection (containing air bubbles and/or surrounded by a wall with contrast enhancement), fecal abdominal drainage, the intraoperative finding of anastomotic leakage, or the combination of two or more of these factors. An overview of the study methodology is presented in Figure 1.
Results

The patients' characteristics are shown in Table 1, and the results of the univariate analyses of the risk factors for an anastomotic leak are shown in Table 2. The significantly higher leak rate in the presence of liver cirrhosis/severe fibrosis is shown in Figure 2. Table 3 shows the results of the multivariate analyses, in which only male gender, lower albumin level, intake of immunosuppressive drugs, and the presence of severe fibrosis/cirrhosis remained statistically significant predictors of the development of an anastomotic leak after colonic surgery.
Of 71 patients with reversal of a colostomy, only two patients had liver cirrhosis or high-grade fibrosis. These two patients did not develop anastomotic leakage.
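As an aside, the headline univariate comparison (12.5% versus 2.5% leak rate) can be checked with a standard Fisher exact test. The Python sketch below uses the counts reported above and should approximately reproduce the quoted p value of 0.024; only the two-by-two table is taken from the text, the rest is an illustrative assumption.

```python
from scipy.stats import fisher_exact

# 2x2 table built from the reported counts:
# rows = cirrhosis/severe fibrosis (yes/no), columns = leak (yes/no)
table = [[3, 24 - 3],        # 3 of 24 patients with cirrhosis/fibrosis leaked
         [47, 1851 - 47]]    # 47 of 1851 patients without leaked
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, p = {p_value:.3f}")
```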
Discussion
Liver cirrhosis is a known major risk factor for postoperative complications in general and in colorectal surgery, with a reported morbidity of up to 50% and mortality of up to 25%. The severity of the disease correlates with postoperative morbidity and mortality [16]. Surprisingly little is known about the connection between liver cirrhosis and anastomotic leakage after colorectal surgery [17]. Nevertheless, surgeons dislike or even avoid performing colonic or rectal anastomoses in patients with liver cirrhosis.
The present study aimed to determine if liver cirrhosis/severe fibrosis represents a risk factor for leakage after colorectal surgery or not. Indeed the results support the assumption that cirrhosis/severe fibrosis is a significant risk factor for the development of an anastomotic leak after colorectal surgery.
The presented data are consistent with earlier reports showing that cirrhotic patients are in an immunocompromised state with a high risk of bacterial translocation and septic conditions [14,20]. As in other studies, hypoalbuminemia as a marker of liver dysfunction was shown to be associated with anastomotic leakage [21][22][23].
Which measures can be taken to lower the risk of anastomotic leakage in colorectal surgery in patients with liver cirrhosis?
First, pharmacotherapy of patients with liver cirrhosis should be optimized prior to surgery, particularly regarding portal hypertension [24]. For the latter, appropriate treatment with nonselective beta-blockers should be implemented. For those nonresponsive to medical therapy, portal hypertension can be lowered by inserting a transjugular intrahepatic portosystemic shunt [25]. However, this is limited to elective procedures and the impact on postoperative outcome is yet unclear [26].
During the operation, the sequelae of anastomotic leakage can be mitigated by construction of a temporary loop ileostomy. But the risk of complications after construction of a temporary loop ileostomy (peristomal leaking, infection, peristomal eventration, bleeding from peristomal varices, and complications related to stoma closure) seems to be elevated in patients with liver cirrhosis as well [6,27]. Thus, the role of a temporary loop ileostomy in colorectal surgery in the presence of liver cirrhosis and/or severe fibrosis remains open.
The diagnosis of liver cirrhosis/severe fibrosis was defined as a typical nodular surface of the liver described by the surgeon and/or a histological diagnosis of liver cirrhosis/severe fibrosis by liver biopsy. Due to the possible sampling error of liver biopsy in liver cirrhosis, about one-third of the diagnoses are missed by biopsy alone [28]. The intraoperative typical nodular surface is not an unambiguous marker of liver cirrhosis either, as it can be absent in about 1% of the patients with histologically confirmed liver cirrhosis [28] and it can be present in patients with severe fibrosis [29]. Doing both laparoscopy/laparotomy and liver biopsy improves the diagnostic yield to up to 98% [30]. Thus, we feel that the definition of liver cirrhosis/severe fibrosis in the present study is justified. Furthermore, it fits the real world, as during surgery surgeons sometimes have to deal with an unexpected nodular surface of the liver. However, for future, especially prospective, studies, noninvasive methods such as transient elastography or shear-wave elastography potentially allow for improved risk stratification.
Limitations.
The main drawback of this study is its retrospective design. However, we tried to minimize a potential bias by including all consecutive patients registered in a prospective database who had colorectal resection with or without anastomosis at the same institution in a predefined period of ten years. No selection bias due to not performing colonic or rectal anastomosis in patients with liver cirrhosis could be detected.
Although the study has some limitations, we feel that the results of this study are valid and that colorectal surgeons should be aware of the higher risk of anastomotic leakage in patients with liver cirrhosis or high-grade fibrosis.
Clearly, to confirm the results of the present study a prospective study should be performed.
Conclusion
Patients with liver cirrhosis or severe fibrosis have an increased leak rate after colonic anastomosis. | 2018-04-03T01:49:44.655Z | 2016-12-26T00:00:00.000 | {
"year": 2016,
"sha1": "289d0fb349b0ddf5ef2dce026a7f71c9d151b43b",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/grp/2016/1563037.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "19c73acd540172f17f73cb786e4df1348468e332",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13535453 | pes2o/s2orc | v3-fos-license | Transcript Profiling Distinguishes Complete Treatment Responders With Locally Advanced Cervical Cancer1234
Cervical cancer (CC) mortality is a major public health concern, since it is the second leading cause of cancer-related deaths among women. Patients diagnosed with locally advanced CC (LACC) have a considerable rate of recurrence and treatment failure. Conventional treatment for LACC is based on chemotherapy and radiotherapy; however, up to 40% of patients will not respond to conventional treatment. Hence, we searched for a prognostic gene signature able to discriminate patients who do not respond to the conventional treatment employed to treat LACC. Tumor biopsies were profiled with genome-wide high-density expression microarrays. Class prediction was performed in tumor tissues, and the resultant gene signature was validated by quantitative reverse transcription-polymerase chain reaction. A predictive 27-gene profile was identified through its association with pathologic response. The 27-gene profile was validated in an independent set of patients and was able to distinguish between patients diagnosed as no response versus complete response. Gene expression analysis revealed two distinct groups of tumors diagnosed as LACC. Our findings could provide a strategy to select patients who would benefit from neoadjuvant radiochemotherapy-based treatment.
Translational Oncology (2015) 8, 77-84.

Introduction

Cervical cancer (CC) is the second leading cause of cancer-related deaths among women worldwide, with an estimated 275,000 deaths in 2008; about 88% of them occur in developing countries. More than 80% of patients affected by CC have large tumors of advanced stage, mainly those classified as locally advanced cervical cancer (LACC), for whom the mortality/incidence ratio is about 50% [1,2]. As with other cancers, treatment depends mainly on progression stage and some clinical characteristics such as tumor size [3,4]. LACC is defined by tumors confined to the pelvic wall; therefore, those patients have no distant metastasis. The standard treatment for patients diagnosed with LACC with International Federation of Gynecology and Obstetrics (FIGO) stages from IB2 to IVA [5] consists of radiotherapy in combination with cisplatin-based chemotherapy (40 mg/m²) followed by brachytherapy [5,6]; regrettably, the number of patients deceased due to disease progression after 5 years is as high as 50% [1].
Concomitant treatment based on chemotherapy and radiotherapy (CRT) has provided clinical benefits for pelvic control of CC; however, it causes considerable toxicity in several patients, and some studies have shown that it does not significantly extend overall survival in at least 40% of patients [7,8]; in addition, up to 35% of patients experience disease progression after CRT [9]. This scenario highlights the need for early detection of innate resistance to conventional or standard therapy, which would allow physicians to provide tailored treatment alternatives as early as possible. The advent of high-throughput technologies enables us to define patients' tumors as a function of their gene expression profile and use this information to improve the identification of patients who would benefit from conventional treatment and those in need of adjuvant therapy. Such an approach has been developed for breast cancer [10], leukemia [11], colon cancer [12], and B cell lymphoma [13]. Nevertheless, this approach is currently applied in the clinic only to breast cancer in the form of MammaPrint (www.agendia.com) and to prostate and colon cancers through Oncotype DX [14] (www.oncotypedx.com).
Patients who do not respond to conventional treatment could require other chemotherapy-based treatment schemes; therefore, their timely detection is crucial. To contribute to this aim, we searched for a gene expression signature able to predict the clinical outcome for LACC patients who receive conventional treatment, as early as at the time of diagnosis.
Thus far, there are no reports showing the use of microarrays to identify gene signatures associated with clinical response to CRT in LACC; here, by means of transcriptome profiling and a machine-learning algorithm, we identified a group of genes that can be used as molecular markers to predict the clinical outcome in those patients. Our rationale is that primary tumors that have not received any conventional treatment (virgin to treatment) carry expression patterns capable of predicting the potential tumor progression; hence, accurate identification of genes involved in the innate resistance could be employed as a prognostic signature associated with CRT-derived clinical response. In this study, we analyzed the genome-wide expression profiles of a discovery group consisting of 89 LACC patients receiving conventional or standard treatment (CRT) by means of genome-wide high-density arrays covering 45,000 expressed sequences. A nearest-mean classifier was trained for probe selection in a leave-one-out cross-validation process. We obtained a 27-gene signature capable of predicting with high significance the clinical response as complete response (CR) versus no response (NR). Next, the gene expression values were confirmed by quantitative reverse transcription-polymerase chain reaction (qRT-PCR) in an independent validation group of 30 patients, confirming the gene expression signature.
Tumor Samples
The population under this study included 119 patients prospectively enrolled into the National Cancer Institute of Mexico (INCAN) tumor-banking protocol at the time of diagnosis (April 2010 through August 2012). All patients included accepted and signed informed consent; institutional ethics and scientific board committees approved the protocol. Immediately after punch biopsy, tumor samples were split into three pieces: one for pathologic confirmation of at least 80% tumor cells, which is mandatory for this type of molecular profiling, and the remaining two for RNA and DNA isolation. The RNA and DNA biopsies were frozen in liquid nitrogen until nucleic acid extraction. Eligibility criteria were 1) patients with a confirmed pathologic diagnosis of CC staged IB2 up to IIIB (LACC); 2) biopsies with a pathology report of more than 80% tumor cells, so that the genomic analysis mainly addresses tumor cells; 3) age greater than 20 and less than 60 years; 4) high-quality DNA and RNA; 5) no presence of comorbidities; 6) no previous oncological treatment; and 7) patients able to receive standard or conventional therapy based on concurrent CRT. Chemotherapy was based on weekly cis-diamminedichloroplatinum(II) at 40 mg/m² during five to six cycles. Radiotherapy consisted of external radiation and intracavitary brachytherapy, for a total dose of 64 to 66 Gy over 67 days [6]. Hence, all patients received the same conventional treatment. Clinical characteristics of patients are summarized in Table 1; as noted in that table, all patients received radiotherapy and cisplatin as coadjuvant (50 Gy external radiation, 35 Gy intracavitary brachytherapy, and six cycles of 40 mg/m² cis-diamminedichloroplatinum(II)).
Clinical Definitions
Staging was assessed according to the FIGO classification [15]. Clinical responses were evaluated by RECIST 1.1 criteria and computed axial tomography scans and were assigned as CR, defined as the disappearance of all signs of cancer in response to treatment, or NR, defined as partial response, progressive disease, or stable disease [16].
HPV Genotyping
DNA was obtained from cervical tumor biopsies by means of MagNAPure Compact Instrument following the manufacturer's recommendations (Roche Diagnostics GmbH, Roche Applied Science, Mannheim, Germany). HPV genotyping was assessed by two approaches, linear array HPV genotyping (Roche Diagnostics GmbH, Roche Molecular Biochemicals, Mannheim Germany) and nested multiplex PCR (MY/GP primers) with subsequent PCR-fragment direct sequencing [17].
RNA Purification and Microarray Hybridization
Eighty-nine samples obtained at the time of diagnosis were used to discover a gene expression signature associated with clinical response. We compared gene expression signatures from patients with CR against patients diagnosed as NR. The quality of RNA was assessed by means of the 18S:28S ratio. Hybridization targets were prepared from 250 ng of total RNA and amplified with the whole transcriptome amplification kit 2 (Sigma-Aldrich, St Louis, MO). Four micrograms of amplified, Cy3-labeled cDNA were hybridized onto high-density arrays containing 45,000 features according to the recommended protocol of NimbleGen Roche (Mannheim, Germany). After standard washes, arrays were scanned on the NimbleGen MS200 microarray scanner. Images were stored for further analyses.
Microarray Preprocessing and Statistical Analysis
Scanned images were gridded using the NimbleScan v2.6 software (NimbleGen Roche). Then, robust multi-array analysis background normalization and quantile normalization were performed for intra-array and inter-array normalization, respectively. Genes with signal intensities above a 95% random threshold were chosen [18]. Differential expression between clinical outcomes was assessed by moderated t tests, and significance statistics for each gene were obtained by the empirical Bayes method implemented in the limma package from Bioconductor [19]. Global differential expression was also examined by random sampling of class labels. We selected gene subsets on the basis of optimal classifier performance ranking, as in previous approaches [20,21]. A nearest-mean classifier was trained for feature selection in a leave-one-out cross-validation process, and the feature selection was further tested by another leave-one-out cross-validation procedure to select the profile with the strongest association with clinical response. Graphics were generated using the Genesis 2.1 software [22]. The raw and normalized microarray data of this study are publicly available at the Gene Expression Omnibus database (http://www.ncbi.nlm.nih.gov/geo/) under accession number GSE56303.
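To make the classification scheme concrete, a minimal Python sketch of a nearest-mean classifier evaluated by leave-one-out cross-validation is given below. It is a simplified stand-in with random placeholder data, and it omits the per-fold gene re-ranking the authors describe, so it is not the authors' actual pipeline.

```python
import numpy as np
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(89, 27))        # placeholder: samples x selected genes
y = rng.integers(0, 2, size=89)      # placeholder labels: 0 = CR, 1 = NR

clf = NearestCentroid()              # the nearest-mean classifier
y_hat = cross_val_predict(clf, X, y, cv=LeaveOneOut())
print("LOOCV accuracy:", (y_hat == y).mean())
```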
Validation of Gene Expression Profile by qRT-PCR
We employed the remaining 30 samples to validate the gene expression profile identified in the discovery tumor set. The 27 differentially expressed genes were subjected to qRT-PCR. Each primer set was designed by an experimentally verified computer algorithm and then tested in a quality control assay to guarantee that it yields a single band of the predicted size by agarose gel electrophoresis. The sequences of primers and PCR conditions are shown in Supplemental Table S1. RT reactions were performed according to the MMLV protocol from Promega (Madison, WI) following the vendor's recommendations. Real-time PCR was performed using FastStart SYBR Green Master in a LightCycler 480 Instrument II (Roche, Mannheim, Germany) according to the manufacturer's protocol. Duplicate RT samples were used in each assay; data were normalized to the β-actin housekeeping gene, and, in parallel, glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was used. The comparative Ct method (ΔΔCt) was used to quantify gene expression, and relative quantification was calculated as 2^(−ΔΔCt) for both housekeeping genes.
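The comparative Ct calculation itself is simple arithmetic; the following Python sketch, with made-up Ct values, illustrates the 2^(−ΔΔCt) computation used here.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^(-ddCt) relative quantification.

    ct_target / ct_ref         : Ct of gene of interest / housekeeping gene (sample)
    ct_target_cal / ct_ref_cal : the same Ct values in the calibrator sample
    """
    d_ct = ct_target - ct_ref              # normalize to the housekeeping gene
    d_ct_cal = ct_target_cal - ct_ref_cal
    dd_ct = d_ct - d_ct_cal
    return 2.0 ** (-dd_ct)

# Made-up Ct values: target gene vs beta-actin in a tumor and in a calibrator
print(relative_expression(24.1, 18.0, 26.3, 18.1))  # ~4.3-fold up-regulation
```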
Disease-Free Survival
Disease-free survival (DFS) of the resulting patient groups was evaluated using the Kaplan-Meier method, and the statistical significance of survival differences was determined with the log-rank test. Multivariate analysis for confounding factors was performed with the Fisher exact test.
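For the survival comparison, a hedged sketch using the Python lifelines library is shown below; the durations and event flags are synthetic placeholders, since the per-patient follow-up data are not given in the text.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Placeholder follow-up times (months) and event flags (1 = recurrence observed)
t_nr, e_nr = rng.exponential(16.0, size=21), np.ones(21, dtype=int)
t_cr, e_cr = rng.exponential(60.0, size=68), rng.integers(0, 2, size=68)

kmf = KaplanMeierFitter()
kmf.fit(t_nr, event_observed=e_nr, label="NR")   # repeat likewise for the CR group
result = logrank_test(t_nr, t_cr, event_observed_A=e_nr, event_observed_B=e_cr)
print("log-rank p =", result.p_value)
```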
Patient Characteristics
Relevant clinical information of 119 recruited patients in this study is shown in Table 1. The median age at diagnosis was 48 years (range 29 to 59 years). The majority of patients were diagnosed as IIB (63.8%) and IIIB stages (22.7%); 92.6% were squamous cell carcinomas, while 8.4% were adenocarcinoma histologic type. The main HPV types were 18 (18.5%) and 16 (37.7%); an important number of patients (27.7%) were infected with two or more HPV types. The median clinical follow-up was 24 months. Thirty-six (30.2%) patients had NR, while seventy-nine (66.4%) were diagnosed as complete responders (CR), and four patients (3.3%) withdrew from the protocol.
Gene Expression Profile from 89 Tumors
To identify genes differentially expressed in pretreatment biopsies of responders (CR) and non-responders (NR), we applied a supervised classification based on moderated t tests and significance statistics obtained by the empirical Bayes method. We obtained a list of 2133 genes with significant differential expression (P < .02). Figure 1 shows a two-dimensional hierarchical clustering using Pearson correlation distance and complete linkage clustering obtained from the Genesis 2.1 software [22]. The dendrograms shown in Figure 1 represent the similarity between clinical samples based on gene expression profiles; the length and subdivision of the branches show the similarity between CC tumors (left) and between the gene expression profiles (top). Table 3 shows that only FIGO stage had a slightly significant association with clinical response (P = .026).
Prediction Model for the Prognostic Profile in LACC
To identify genes with scores capable of discriminating between CR and NR clinical outcomes, we employed a supervised classification approach that has shown success in previous studies [20,21]. Genes ranked by significance of differential expression were used, and a nearest-mean classifier was trained in a leave-one-out cross-validation process to select genes with the best classifier performance. Using the predictive algorithm, a 27-gene signature was developed with maximum accuracy in predicting clinical response status (sensitivity of 74%, i.e., the capacity to identify patients with NR; specificity of 91.3%, i.e., the capacity to identify patients with CR; and an overall accuracy of 90%). Supplementary Figure S1 shows the classifier performance, while the 27-gene signature is described in Supplementary Table S2. The expression pattern of the genes present in this 27-gene signature panel is shown in Figure 2. The left panel shows the classifier itself; clinical outcome is represented with black circles for patients with NR and white circles for CR, and the score for each one is indicated on the x-axis. According to this score, patients were divided into two groups delimited by a red line: 17 of 21 (80%) patients with NR and 62 of 68 (91%) patients with CR were assigned the expected score (lying to the right and left of the red line, respectively), showing the high sensitivity and specificity of our predictor. Each patient's 27-gene profile is displayed in the heat map at the right, where contrasting patterns can be observed at the top and bottom, suggesting an expression profile gradient between patients with the best (top) and worst (bottom) prognoses. Interestingly, the apparent threshold indicated by a horizontal black line corresponds to the actual disease outcome of the patients represented by black or white circles in the classifier at the left. Moreover, Kaplan-Meier analysis showed a statistically significant difference in DFS between the NR and CR groups (Figure 3). The NR group had a mean DFS of 16 months, whereas the CR group had a median survival that had not yet been reached (log-rank P = 1 × 10⁻¹⁶).
Validation by qRT-PCR
To confirm the discrimination capability of the Cervical Cancer Conventional Treatment Response Profile (CC-CTRP) gene signature, the expression levels of all 27 genes were validated by qRT-PCR in an independent group of 30 patients with a LACC diagnosis. Total RNA was isolated from biopsies taken before treatment; diagnosis-wise, 16 of these patients were classified as CR and 14 as NR. For the sake of clarity, we analyzed the qRT-PCR results as ΔΔCt (log₂), which represents the fold change relative to the β-actin housekeeping gene (Figure 4A); in a parallel analysis, we used another housekeeping gene (GAPDH) to confirm the consistency of the results (Figure 4B). Although the GAPDH and β-actin housekeeping genes showed slight differences in expression levels (Supplementary Figure S2), the ability to discriminate between both clinical responses (CR vs NR) did not show significant differences (Supplementary Figure S3, A and B). Hence, the qRT-PCR results evaluated as ΔΔCt (log₂) grouped patients in accordance with the initial 27-gene classifier, regardless of the source of the data, qRT-PCR or microarrays.
Discussion
Despite the increase in early detection programs, CC still remains one of the principal neoplasms causing death of women throughout the world, since most patients are diagnosed only when they arrive at health centers and the disease has often reached locally advanced stages. Thus far, no generally accepted molecular marker for CC has been reported [23], and clinical parameters are the only strategy currently used in the prediction of disease outcome.
In this work, we aimed to find a molecular signature associated with chemo-radioresistance of LACC; tumor biopsies were carefully selected to fulfill inclusion criteria, which included that each biopsy had more than 80% of tumor cells. We obtained the differentially expressed gene profiles and their correlation with the clinical outcome of 89 tumors and used these data to build a predictor algorithm to identify chemoradiotherapy-resistant individuals. The data were later validated by assessing 30 additional samples in an independent group, for a total of 119 LACC analyzed patients.
We identified a 27-gene molecular signature with high prognostic value, which proved to be more effective in predicting disease outcome than tumor size, currently the main clinical parameter used as a predictive marker [3] (Table 3). Our predictor correctly sorted 17 of 22 patient diagnoses, which represents 80% effectiveness. The Kaplan-Meier graph in Figure 3 shows that disease outcome can be identified as early as 4 months by using our approach.
Several works have used microarray technologies in the search for CC molecular signatures associated with different conditions, such as radioresistance [24][25][26], early-stage lymph node metastasis [27], and resistance to angiocidin- and darapladib-based anti-tumoral and anti-inflammatory treatment [28]. However, despite assessing the same tumor type, these molecular signatures lack consistency, mainly because treatment selection is not based on NCI standards, which could affect the reported genes. In addition, this may result from the diversity of the experimental designs or intrinsic biases of the different microarray platforms [23]. Moreover, the populations that these authors have analyzed, while seemingly similar, are actually very divergent when observed from the perspective of the therapy. Thus, we find it reasonable to speculate that different treatments elicit variable but specific gene expression profiles. Furthermore, intra-tumor source heterogeneity is an important but seldom considered point. Bachtiary and co-workers suggest that increasing sample size can serve as a remedial measure, based on variance-component analysis of the genetic properties of replicate cancer biopsies [29]. Currently, there is no previous report of a molecular signature associated with conventional treatment response. To our knowledge, this work is the first transcriptome-based molecular signature associated with resistance to conventional chemoradiotherapy-based treatment and the largest LACC sample number assessed in such a study.
Conclusion
Patients diagnosed with LACC are submitted to conventional treatment without certainty of a CR, due to tumor chemoradiotherapy resistance. The CC-CTRP gene signature obtained in this work is a novel prognostic tool aimed at sorting patients with regard to their sensitivity to conventional treatment; consequently, it would be possible to give those with a bad prognosis the opportunity to undertake alternative or complementary treatment without prior exposure to conventional treatment, avoiding unnecessary weakening and thus increasing their survival possibilities. However, more studies will be necessary to strengthen the evidence on the utility of the current molecular signature.
"year": 2015,
"sha1": "fc5e4b6847ac24f082cb82d70dd72ef73f06893f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.tranon.2015.01.003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fc5e4b6847ac24f082cb82d70dd72ef73f06893f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
126137659 | pes2o/s2orc | v3-fos-license | A new generation of 99.999% enriched 28Si single crystals for the determination of Avogadro’s constant
A metrological challenge is currently underway to replace the present definition of the kilogram. One prerequisite for this is that the Avogadro constant, NA, which defines the number of atoms in a mole, needs to be determined with a relative uncertainty of better than 2 × 10−8. The method applied in this case is based on the x-ray crystal density experiment using silicon crystals. The first attempt, in which silicon of natural isotopic composition was used, failed. The solution chosen subsequently was the usage of silicon highly enriched in 28Si from Russia. First, this paper reviews previous efforts from the very first beginnings to an international collaboration with the goal of producing a 28Si single crystal with a mass of 5 kg, an enrichment greater than 0.9999 and of sufficient chemical purity. Then the paper describes the activities of a follow-up project, conducted by PTB, to produce a new generation of highly enriched silicon in order to demonstrate the quasi-industrial and reliable production of more than 12 kg of the 28Si material with enrichments of five nines. The intention of this project is also to show the availability of 28Si single crystals as a guarantee for the future realisation of the redefined kilogram.
Introduction
The kilogram is the only base unit of the International System of Units (SI) still defined by a material prototype, as stated by the 1st General Conference on Weights and Measures in 1889. The mass of the international prototype of the kilogram (IPK) expressed in terms of the SI unit kilogram is invariable by definition, but since 1889 the mass differences between the IPK and its official and national copies have drifted by about 50 µg, or 5 × 10⁻⁸ in relative terms, on average. A sufficiently accurate determination of the drift is not yet possible, but it is evident that there is a need for a new definition of the mass unit. It has now been more than 40 years since the first attempts were started to find a way of defining the kilogram based on an atomic or fundamental physical constant. Since then, two methods have advanced sufficiently far to make it likely that there will be a new definition within the coming years, namely the watt balance experiment to determine the Planck constant h and the so-called x-ray crystal density (XRCD) method [1], which is used in the Avogadro experiment for determining the Avogadro constant N_A.
Using the Avogadro experiment for a new definition of the mass unit, the kilogram can be understood as the mass of a specific number of silicon atoms. The number of atoms in a perfect mono-crystal can be determined by measuring the volume V of a macroscopic sample and its lattice parameter a₀. In the case of silicon, the unit cell (with the edge length a₀) contains 8 atoms; thus the number N of atoms in the sample is N = 8V/a₀³. When also measuring the mass m of the sample, the (mean) mass of one silicon atom, m_Si-atom, can be calculated by m_Si-atom = m/N. A real silicon sample does not contain only one type of atom; it contains three different silicon isotopes of differing mass and additionally some impurities of other chemical elements. When the kilogram is newly defined, a relative measuring uncertainty of N_A at the level of 10⁻⁸ is needed. This means, with 6 × 10²³ entities in one mole, the 'counting' uncertainty should be of the order of 6 × 10¹⁵ atoms! Therefore, all kinds of atoms with a content higher than this limit must be identified and taken into consideration. Natural silicon consists of three isotopes with differing masses. Therefore, the isotopic composition of the sample has to be determined with high precision. From the isotopic composition, the molar mass M and the amount of substance (= number of moles) in the silicon sample, n = m/M, can be determined. Then the Avogadro constant can be calculated as N_A = N/n = 8M/(ρa₀³), using the density of the sample, ρ = m/V. Unfortunately, the molar mass of natural silicon cannot be determined to better than 1 × 10⁻⁷, relatively, by gas mass spectrometry. Relative uncertainties below the 10⁻⁸ level can only be achieved if isotopically enriched silicon with an enrichment better than 0.9999 is used (see sections 4.2 and 4.3).
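A quick numerical sanity check of the XRCD relation, with approximate literature values for natural silicon substituted as assumptions (these are not the paper's measured quantities), can be written in a few lines of Python:

```python
# Approximate literature values for natural silicon (assumed, illustrative):
a0 = 5.431021e-10   # lattice parameter / m
rho = 2329.0        # density / (kg/m^3)
M = 28.0855e-3      # molar mass / (kg/mol)

v_atom = a0 ** 3 / 8        # volume per atom: 8 atoms per cubic unit cell
m_atom = rho * v_atom       # mean mass of one Si atom
N_A = M / m_atom            # Avogadro constant, approx. 6.022e23 / mol
print(f"N_A ~ {N_A:.4e} mol^-1")
```

The enriched-crystal route keeps exactly this bookkeeping, but with M known far more accurately because the sample is almost pure 28Si.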
Even the purest silicon crystal still contains some atoms of different chemical elements, which change the mass of the sample with respect to a pure sample of the same volume. The concentrations of these elements have to be determined, and their influence on the Avogadro constant has to be corrected for. The elements carbon (C), oxygen (O) and nitrogen (N) have the largest impurity concentrations in the 28Si crystals used for the determination of the Avogadro constant, and their contents must be carefully reduced and measured (section 4.4). Carbon additionally disturbs the lattice parameter measurement, since it is not homogeneously distributed in the crystal but causes so-called striations [2]. To minimize this effect, the concentration of carbon should be below 2 × 10¹⁵ cm⁻³.
In order to improve the already published results for the Avogadro constant, the availability of almost ideal 28Si crystals is an essential precondition. The aim of the project named 'kg-2' was therefore to increase the isotopic enrichment and to reduce the impurity content in the final crystals. Last but not least, it is envisaged to test and provide reliable and reproducible 28Si production at a quasi-industrial level, with a quantity that allows a sufficient number of 1 kg spheres to be made available to the metrological community. As the XRCD or Avogadro method is one of the accepted methods of the mise en pratique of the kilogram to realize the mass unit after redefinition, the 28Si spheres can be used for the realization of the new mass unit.
Historical background
Since Avogadro's findings in 1811, the determination of the number of entities in a mole of substance has fascinated generations of scientists. With the discovery of x-rays and the decoding of crystal structures, the counting of atoms in a silicon crystal became the basis of the search for a new mass standard, as proposed in Egidi's 1963 paper 'Phantasies on a natural unity of mass' [3]. Two years later, in 1965, Bonse and Hart [4] built an x-ray interferometer from a silicon crystal, an important tool to measure the silicon lattice parameter extremely accurately in the length unit 'meter'. Nine years later, in 1974, Deslattes [5] from the former National Bureau of Standards, USA, published his pioneering article about the first precise Avogadro constant based on Bonse's idea. Other national metrology institutes started similar projects, among them the Physikalisch-Technische Bundesanstalt (PTB, Germany), which was able to follow with corrected data in 1992 [6]. All relevant data were measured with silicon of natural isotopic composition, namely 92% 28Si, 5% 29Si and 3% 30Si, but the measurement uncertainty of the molar mass M was eventually the limiting factor. Zosi [7] outlined a way out of this problem in 1983: he proposed using perfect silicon spheres manufactured from enriched 28Si crystals for the determination of N_A. An initial attempt was started in 1990 at PTB when two sources of enriched material were made available, from the Oak Ridge National Laboratory (ORNL), USA, and from the National Institute of Metrology (NIM), Russia. The enrichment of both sources of 28Si material was similar: 0.9989 and 0.9988, respectively. As the material delivered by NIM in the form of 590 g of Si powder was found to be contaminated by an undesirably high amount of impurities, only the ORNL material, about 1284 g of 28SiO₂ grains, was converted into a silicon single crystal in collaboration with the Wacker-Chemitronic company in Germany [8]. The final product, the first 28Si single crystal, has a mass of about 300 g and an enrichment of 0.9902 (figure 1). The enrichment was unexpectedly diluted, and the impurity contents of boron (B) and aluminium (Al) were very high. The derived data for the Avogadro constant were about 10⁻⁵ N_A larger than the data of previous measurements and not in accordance with the CODATA value published in 1986, possibly due to the imperfection of the crystal.
But in the 1990s, scientific interest and the technological promise of highly enriched isotopes led to a sharp rise in the number of experimental and theoretical studies dealing with isotopically controlled semiconductor crystals. In 1994, Tarbejev et al presented the selection of appropriate working gases, the development of gas centrifuges, their arrangement for the effective separation of the isotopes and the optimization of the production process. A resulting mole fraction of better than 0.9999 seemed to be feasible. The requirements for the production process in growing 28 Si single crystals and the related problems were also discussed. They also estimated the uncertainties necessary for the check measurements during production [9].
A contract for a feasibility study was signed between PTB and former nuclear technologists in St. Petersburg and Nizhny Novgorod (Russia) to produce highly enriched silicon based on improved technologies for purification, enrichment and analytical methods [10]. The enrichment of the material was more than 0.999 in 28Si, and the concentration of the main impurities B, N, C, O, and Al was some orders of magnitude smaller than in the first attempt described above. Since 2000, in several feasibility studies on the production of high-purity 28Si in accordance with the Avogadro project, the enrichment of 28Si could be increased from 0.998 96 to finally 0.9998 in charge No. 7 (see table 1). The final product of charge 7 is shown in figure 2. The central part was dislocation-free and was used for investigations connected with the Avogadro project [11]. The impurity concentration was 8.2 × 10¹⁴ cm⁻³ for oxygen and 3.4 × 10¹⁵ cm⁻³ for carbon.
Based on these studies, the fabrication of a 1 kg silicon single crystal sphere of 0.9999 enriched 28 Si seemed to be feasible. The International Avogadro Coordination (IAC) was founded in 2004 by several national metrological institutes with the aim to finance a new project for the production of a 28 Si crystal (charge 10, see table 1). This project was planned again with partners from Nizhny Novgorod and St. Petersburg, with the target of producing a 5 kg single crystal with an enrichment of more than 0.9999 [13].
The final crystal was of p-type, with a mass of 4530 g and an enrichment of, surprisingly, about 0.999 95; it is shown in figure 3. Only a small end part of about 200 g was disturbed by back-gliding dislocations. Most of the impurities had accumulated in other residual parts with a total mass of approx. 1 kg; the 28Si isotopic enrichment was not affected. The impurity concentration could be reduced significantly and was of the order of 0.4 × 10¹⁵ cm⁻³ for oxygen and 2 × 10¹⁵ cm⁻³ for carbon. This material was used for the accurate determination of the Avogadro constant in 2010 [14] and for improvements in the measurement uncertainty in 2014: with two 28Si spheres shaped and polished first in Australia and later repolished at PTB, a relative uncertainty of 2 × 10⁻⁸ was reached [15].
Realisation of improved 28 Si crystals: the kg-2 project
The redefinition of the kilogram is expected for 2018, a great challenge for the metrological community. It will be the starting point for a new era of mass metrology, with an impact reaching well beyond the near future. Therefore, for the redefinition of the kilogram and its realisation by the XRCD method, the experiments within the framework of the IAC based on two 28Si spheres can be considered as a feasibility study. Experiments on a multitude of 28Si spheres are necessary to place the new definition on a solid basis. Simultaneously, the quality of the material should be improved with respect to enrichment and chemical purity, in order to push the measuring uncertainty towards the 10⁻⁹ region before the definition of the kilogram can be changed. Through enrichments higher than five nines and impurity concentrations smaller than 10¹⁵ cm⁻³, the silicon material would come very close to an ideal crystal. In spite of the enormous costs for the material and the financial shortages in most of the metrology institutes, PTB decided to shoulder the material procurement alone. The aims of this new project, named 'kg-2', are therefore (1) to demonstrate the availability of 28Si crystal material at a quasi-industrial level, with finally two 28Si single crystals of about 6 kg each and the manufacture of four 1 kg spheres; (2) to reduce the carbon content by an additional cleaning of the SiF₄ gases by centrifugation and by using high-purity chemical substances; and (3), not least, to accentuate the need for the so-called silicon path to the new kilogram. How are these ambitious plans to be realised? How is the production process to be improved so as to reduce the impurity content and simultaneously increase the isotopic enrichment? The first 28Si mono-crystals were still made on a laboratory scale. Now the Stock Company 'Production Association Electrochemical Plant' (SC 'PA ECP') in Zelenogorsk (near Krasnoyarsk, Russia), PTB's main project partner and one of the most important suppliers of isotopes worldwide, was chosen to be responsible for the 28SiF₄ gas production. The chemical conversion into silane and the deposition as a polycrystal was again performed at the G.G. Devyatykh Institute of Chemistry of High-Purity Substances of the Russian Academy of Sciences (IChHPS RAS) in Nizhny Novgorod, Russia. The final growth to perfect mono-crystals was carried out at the Leibniz Institute for Crystal Growth (Leibniz-Institut für Kristallzüchtung, IKZ) in Berlin-Adlershof, Germany. The kg-2 project was administratively conducted in Russia by the ISOTOPE company, a Russian governmental organization.
The production of 28 Si single crystals from silicon of natural isotopic composition to the highly enriched single crystal consists of 5 main steps, as shown in figure 4.
Enrichment of 28 Si in centrifuges
According to figure 4, solid silicon granules with natural isotopic composition were converted to SiF 4 gas in the first step. SiF 4 is used as the process gas for the centrifugal separation of Si for two reasons: fluorine has only one stable isotope, namely 19 F, and SiF 4 has a sufficiently high vapour pressure at room temperature.
In contrast to the previous projects, SiF 4 was obtained by direct fluorination of high-purity natural silicon in order to significantly reduce the boron and carbon impurities. SiF 4 was synthesized from the element according to the chemical reaction

Si + 2F 2 → SiF 4 .

The natural abundance silicon ( 28 Si ≈ 92.23%) of electronic grade quality was manufactured and supplied by Wacker Polysilicon Europe, Wacker Chemie AG, Germany. SC 'PA ECP' produced the high-purity fluorine (F 2 ) itself. A special installation with a reactor was constructed and used for the synthesis of the initial SiF 4 and for silicon isotope separation in centrifugal cascades. Gas centrifugation is the only effective method of isotope separation at a high enrichment level. Other methods, such as magnetic mass separation, ion exchange and laser technology, are more expensive and do not allow a high isotopic enrichment to be reached.
In order to fulfil the requirements for a very high enrichment of 28 Si along with the high chemical purity, the separation of the SiF 4 gas was performed in three centrifugal cycles (the second step in figure 4).
In cycle 1, an enrichment of about 99.9% 28 SiF 4 was achieved. The 99.9% 28 SiF 4 gas was used in the next step as input gas for cycle 2, where the centrifuge cascade was specially designed to reduce the carbon-containing impurities in the gas. Values smaller than 100 ppm in the case of CO 2 and smaller than 30 ppm for CO, CH 4 , C 2 H 2 , C 2 H 4 , and C 2 H 6 were targeted. The purified gas was then enriched up to 99.999% in cycle 3.
Using ECP's gas centrifuge cascades made it possible to produce highly enriched 28 SiF 4 with the required low content of carbon impurities. The gas centrifuge method allows enriched stable and radioactive isotopes of various chemical elements to be produced with high chemical purity [16].
The entire amount of material was produced in three charges numbered 22, 23 and 24. Charge 22 was used for the production of slim rods, which acted as deposition rods for the two 6 kg polycrystals (charges 23 and 24). Si isotopes in the final 28 SiF 4 gas were analyzed at SC 'PA ECP' and at the IChHPS RAS by mass spectrometry. IChHPS RAS also measured the polycrystalline product of charge 24 by ICP-MS (see section 4.3). The results were also compared with PTB's measurement results (for method, see section 4.2) of the polycrystalline and the final single crystal material produced from this 28 SiF 4 (see table 2).
Silanisation and purification of silane
During the third technological step the 28 SiF 4 gas was converted into silane, 28 SiH 4 . For the production of high-purity silane the hydride method was used [17]. First, the synthesis of silane was carried out by the reaction of high-purity silicon tetrafluoride with calcium hydride:

28 SiF 4 + 2CaH 2 → 28 SiH 4 + 2CaF 2 .
The synthesis was performed in flow-through mode. A mixture of isotopically enriched silicon tetrafluoride with hydrogen of special grade B purity was passed through a layer of mechanically dispersed calcium hydride. The reactor was made of high-purity Si-free stainless steel to prevent the diffusion of boron compounds and natural Si into the highly enriched gas. It was found in [18] that the reaction between silicon tetrafluoride and calcium hydride occurs in the form of propagating waves. The yield of 28 Si during the silanisation process was between 92% and 94%. The content of hydrocarbon impurities in silane after synthesis was at the level of 10 −5 mol mol −1 (see second column of table 3). According to gas-chromatographic-mass-spectrometric (GC-MS) analysis, polysilanes and disiloxanes were the largest impurity component, at a level of 10 −3 mol mol −1 . Calcium hydride seems to be the main source of impurities, in particular carbon [19].
Technical details of the purification in brief.
The produced 28 SiH 4 was freed from impurities by the methods of cryofiltration and periodic low-temperature rectification [20,21]. Cryofiltration of silane was performed using a Petryanov tissue at a temperature of 163 K (−110 °C). The flow rate of silane was about 100 g h −1 (volume flow rate about 1 l min −1 ). Rectification was carried out in a packed metal rectification column with a middle feeding tank, operating in the mode of discrete extraction of impurities from the lower to the upper separating sections. The heights of the rectifying sections of the column were 100 cm and 170 cm, with a cross-section of 4.9 cm 2 . A fine spiral prismatic packing of Nichrome wire was used (3 × 3 × 0.2 mm). The column load was 1000 g; the degree of extraction was 0.003-0.006. The volatile impurities were methane, carbon dioxide and hydrogen; the less volatile impurities were C 2 -C 9 hydrocarbons, alkylsilanes, polysilanes and polysiloxanes. Ethylene is the most difficult impurity to remove from 28 SiH 4 (reduction factor α = 1.28). The duration of the rectification process is determined by the starting concentration and the reduction factor of this impurity at the bottom of the column. The on-line analysis of the fractions enriched in ethylene was performed by gas chromatography. With the aim of freeing the product mainly from carbon-containing impurities, the purification process was stopped after the ethylene content reached a concentration level of a few 10 −7 mol mol −1 in the concentrate of the impurities. The yield of the high-purity product during purification was 80%. The content of chemical impurities in the purified 28 SiH 4 is given in table 3. High-purity 28 SiH 4 was prepared with a content of C 1 -C 9 hydrocarbons and alkylsilanes of less than 4 × 10 −8 mol mol −1 , and with disiloxane and higher silanes at the level of 10 −7 mol mol −1 .
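The practical consequence of the small reduction factor can be illustrated with a rough stage-count estimate; this is an illustration only, not the authors' calculation, and it assumes a simple Fenske-type relation between the per-stage reduction factor and the overall separation.

```python
import math

# Rough Fenske-type estimate (illustrative only): with a per-stage reduction
# factor alpha, lowering an impurity from c0 to c_target takes roughly
# log(c0/c_target) / log(alpha) effective separation stages. For ethylene,
# alpha = 1.28 (from the text), which is why it dominates the process duration.
alpha = 1.28
for ratio in (10, 100, 1000):
    stages = math.log(ratio) / math.log(alpha)
    print(f"reduction by x{ratio}: ~{stages:.0f} effective stages")
```

For comparison, an impurity with a reduction factor of 2 would need only about 10 effective stages for a thousand-fold reduction, against roughly 28 for ethylene.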
Chemical vapour deposition of 28 Si
The deposition of 28 Si from the 28 SiH 4 , the fourth step in figure 4, was carried out by chemical (or, more precisely, pyrolytic) vapour deposition (CVD) in a single-rod set-up particularly designed for this purpose. A 28 Si rod with an enrichment of more than 0.9999, a diameter of 8.5 mm and a length of 850 mm was used as a substrate for the deposition ('slim rod'). The rod was heated up to 800 °C by passing an electric current. The method of so-called high-voltage start was used for heating the rod from room temperature up to the operating value. The temperature of the rod was PID controlled within ±2 K. Deposition was carried out in a stainless steel reactor with cooled walls.
To meet the requirements for the final crystal with respect to the carbon content (<2 × 10 15 cm −3 ), all possible sources of this impurity had to be avoided during the installation of the set-up. Therefore, carbon-containing materials were not used to manufacture the internal components of the reactor. The reactor surface was cleaned without using carbon-containing solvents.
Compared to previous projects, the whole set-up was modernized with the aim of increasing its operational stability. The characteristics of the power supply circuit were improved. An uninterruptible power supply of 30 kW was used to provide a stable power supply. In addition, a flattening filter was installed in the power supply circuit to reduce the start-up current. The program for changing the parameters of the temperature regulator was modernized to allow an on-line correction of the parameters during the deposition process.
The deposition procedure of polycrystalline silicon was optimized in order to increase the rate from 0.010 g (cm 2 h) −1 up to 0.018 g (cm 2 h) −1 without a noticeable change in the yield. The specific deposition rate per unit area of the rod surface was kept constant by increasing the feeding rate of 28 SiH 4 proportional to the surface area of the growing rod. The dependence of the diameter of the polycrystal on the deposition time is given in figure 5. It takes about 2 weeks to deposit 6 kg of 28 Si. A higher deposition rate provides a decrease in the background contamination of the polycrystalline silicon from the apparatus material and a decrease of the exposure time of 28 SiH 4 in the reactor. The yield of the product, determined by comparison of the mass of the 28 Si in the consumed 28 SiH 4 with the mass of the polycrystalline silicon produced on this basis, was 95%. The final polycrystalline rod of charge 23 just after the deposition is shown in figure 6.
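As a plausibility check (not part of the original analysis), the constant specific deposition rate implies a linearly growing rod diameter, and the quoted figures reproduce the two-week timescale; the silicon density used below is the standard handbook value and is an assumption here.

```python
import math

# Back-of-the-envelope check of the ~2-week deposition time quoted above.
# Rate, rod length, slim-rod diameter and target mass are from the text;
# the silicon density (~2.33 g cm^-3) is a standard value assumed here.
rate = 0.018          # g cm^-2 h^-1, specific deposition rate (held constant)
rho = 2.33            # g cm^-3, density of silicon (assumed)
length = 85.0         # cm, slim-rod length
d0 = 0.85             # cm, slim-rod diameter
mass = 6000.0         # g, deposited 28Si

# Mass balance: dm/dt = rate * pi * d * length with m = rho * pi * d^2 * length / 4
# gives dd/dt = 2 * rate / rho, i.e. the diameter grows linearly in time.
d_final = math.sqrt(d0**2 + 4.0 * mass / (math.pi * rho * length))
hours = (d_final - d0) / (2.0 * rate / rho)
print(f"final diameter ~{d_final:.1f} cm, deposition time ~{hours:.0f} h "
      f"(~{hours / 24:.0f} days)")
# -> about 6 cm and ~2 weeks, consistent with the text and figure 5.
```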
FZ single crystal growth
The last step in the technological process of figure 4 is the growth of the dislocation-free 28 Si single crystal. The crucible-free floating zone (FZ) technique was used to achieve the required chemical purity and to preserve the isotopic enrichment of the polycrystalline starting material (poly-Si). Before the FZ growth, the polycrystalline rod was prepared by cutting off the electrical contacts and by preparing a cone on one side and a groove for the holder on the other side. This preparation leads to fragmentary losses of material. Due to the high initial oxygen and carbon concentrations in the polycrystalline rod, several growth runs had to be performed, also leading to loss of original (as-deposited) material. This is why additional procedures on the way from the starting material to the final crystal of high perfection were necessary to optimize the growth process and to minimize the loss of material.
At the beginning, the number of FZ runs had to be estimated based on the oxygen and carbon concentrations in the poly-Si. As-deposited polycrystalline samples were annealed before the measurement for over 10 h at a temperature of about 1350 °C to make them transparent for IR beams [22]. The contents of C and O were measured using Fourier-transform infrared (FTIR) absorption spectroscopy (see section 4.4) both in the polycrystalline material and at the end of the crystal after the first FZ growth run (figure 7). Both measurements resulted in nearly the same carbon concentration: N C = 3.5 × 10 15 cm −3 for charge 23. Due to oxygen evaporation during the first FZ growth process, the oxygen concentration measured in the FZ crystal was one order of magnitude lower than in the polycrystal. For a further decrease of the oxygen concentration in the crystal, two growth runs were made in vacuum. The next runs were carried out in an argon atmosphere, preserving the low oxygen concentration by previously desorbing the water layers in vacuum.
Carbon was further reduced by the segregation effect of multiple FZ passes. The number of growth runs was estimated using the theory of zone refining for many passes [23], on the basis of the carbon concentration in the starting material and the target concentration (smaller than 2.0 × 10 15 cm −3 ) over the whole crystal length, and was determined to be 6 runs.
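A minimal numerical sketch of such a multi-pass estimate is given below. It discretizes the rod and propagates a molten zone along it repeatedly; the effective distribution coefficient of carbon and the zone-to-rod length ratio are not given in the text and are set to illustrative values here, so the printed pass counts are indicative only.

```python
import numpy as np

# Illustrative multi-pass zone-refining model (after the classical theory
# referenced in the text). Parameters marked "assumed" are not from the text.
K_EFF = 0.2        # assumed effective distribution coefficient of carbon
N_CELLS = 200      # rod discretized into cells
ZONE = 20          # assumed molten-zone length: 10% of the rod
C_START = 3.5e15   # cm^-3, carbon in the starting material (from the text)
C_TARGET = 2.0e15  # cm^-3, target over the usable crystal length (from the text)

def zone_pass(c, zone, k):
    """Propagate one molten zone along the rod of cell concentrations c."""
    c = c.copy()
    n = len(c)
    solute = c[:zone].sum()              # solute dissolved in the initial zone
    for i in range(n - zone):
        frozen = k * solute / zone       # freezes out at k * (zone concentration)
        solute += c[i + zone] - frozen   # melt the next cell, remove frozen part
        c[i] = frozen
    for i in range(n - zone, n):         # final zone: crude normal freezing
        left = n - i
        frozen = solute if left == 1 else k * solute / left
        c[i] = frozen
        solute -= frozen
    return c

c = np.full(N_CELLS, C_START)
for run in range(1, 13):
    c = zone_pass(c, ZONE, K_EFF)
    worst = c[:N_CELLS - ZONE].max()     # exclude the frozen-off end zone
    print(f"run {run}: max C over usable length = {worst:.2e} cm^-3")
    if worst < C_TARGET:
        break
```

Because impurities with a distribution coefficient below 1 are swept towards the rod end with every pass, the back end of the final crystal retains a carbon content comparable to the starting material, as the FTIR results in section 4.4 confirm.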
However, even though the freeze-off technique was used at the end of each run to avoid a new preparation of the grown crystal for the next run, some additional material is lost by freezing-off and is missing in the final crystal. That is why, using experience from the past [12], all cut fragments of polycrystalline silicon were regrown by the Czochralski (Cz) technique into a crystal 50 mm in diameter that was joined to the FZ crystal just after the first run. To avoid isotope dilution during Cz growth, the quartz crucible was coated before the growth process with a 28 SiO 2 layer of about 100 µm in thickness [24]. Thus, no isotope dilution could be detected in the Cz crystal. A ⟨100⟩-oriented 28 Si crystal grown in a previous step of the project by a crucible-free pedestal technique was used as a seed crystal. At the end of the growth process the crystal was tapered down to a diameter of 79 mm by using the automatic diameter control system in order to protect the cylindrical part of the crystal from back-gliding dislocations. The final monocrystals of charges 23 and 24, shown in figure 8, have the following specifications: masses of 5.12 kg and 5.64 kg, respectively, a maximal diameter of 100.3 mm and n-type conductivity except for the first 50 mm in the cone. The cylindrical part of the charge 24 crystal is 20 mm longer than that of the charge 23 crystal.
Crystal quality
The main parameters representing the quality of the final crystals are the isotopic enrichment, the carbon concentration and their spatial homogeneity. An indispensable precondition for the determination of N A with further reduced measurement uncertainty is the determination of the molar mass M(Si) of the new silicon material highly enriched in the 28 Si isotope with a reduced associated uncertainty (u rel (M(Si)) < 5 × 10 −9 ), a challenge for any mass spectrometry. Samples from different parts of the crystal were cut for the analysis. The isotopic analysis was made by laser mass spectrometry (LIMS) and high-resolution inductively coupled plasma mass spectrometry (HR ICP-MS) at the IChHPS RAS and by isotope dilution mass spectrometry (IDMS) at PTB. The oxygen and carbon concentrations in the 28 Si single crystal were measured by IR spectroscopy (IChHPS RAS and PTB, see section 4.4). Additionally, instrumental neutron activation analysis was used to check the purity with respect to many other chemical elements [25].
Molar mass measurements at PTB
The molar mass M(Si), and thus the enrichment, was determined by using the modified isotope dilution mass spectrometry virtual-element (VE-IDMS) method, applying high-resolution multicollector inductively coupled plasma mass spectrometry (MC-ICP-MS), which is described in detail elsewhere [26][27][28]. In brief, M(Si) can be expressed using the relation

M(Si) = Σ i x( i Si) M( i Si),

with the amount-of-substance fractions x( i Si) and molar masses M( i Si) of the i-th silicon isotope [29]. The x( i Si) are accessible via the measurement of isotope ratios R applying state-of-the-art isotope ratio mass spectrometry using the VE-IDMS method. In summary, the enriched silicon is regarded as consisting of 29 Si and 30 Si (the virtual element) in the matrix of all three Si isotopes. By measuring predominantly the isotope ratios R( 30 Si/ 29 Si) in the enriched silicon sample ('Si28') and in an IDMS blend consisting of the 'Si28' material and a silicon crystal material highly enriched in the 30 Si isotope ('Si30', spike), x( 28 Si), x( 29 Si) and x( 30 Si) can be determined with associated uncertainties sufficient to obtain u rel (M(Si)) < 5 × 10 −9 . This method has been successfully applied and approved by several leading NMIs in the past few years by measuring M(Si) of the silicon material used to determine N A [26][27][28][30][31][32].
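The relation and its favourable uncertainty scaling can be made concrete with a short numerical sketch; the isotopic molar masses below are standard literature values quoted to limited precision, and the amount-of-substance fractions are placeholders of the order achieved in the kg-2 project, not measured data.

```python
# Sketch of M(Si) = sum_i x(iSi) * M(iSi) and of why high enrichment eases
# the isotope-ratio measurement. Molar masses (g mol^-1) are standard
# literature values, truncated; the fractions x are assumed placeholders.
M = {28: 27.97692653, 29: 28.97649466, 30: 29.97377014}
x = {28: 0.99999, 29: 0.9e-5, 30: 0.1e-5}      # assumed five-nines enrichment

M_Si = sum(x[i] * M[i] for i in (28, 29, 30))
print(f"M(Si) = {M_Si:.8f} g/mol")

# Error propagation: a relative error dr in the minor fraction x(29Si)
# shifts M(Si) by about x(29) * (M(29) - M(28)) * dr, so the smaller the
# minor fractions, the smaller the uncertainty propagated into M(Si).
dr = 0.01                                       # 1% error in x(29Si)
dM = x[29] * (M[29] - M[28]) * dr
print(f"resulting relative shift in M(Si): {dM / M_Si:.1e}")
```

With five-nines enrichment, even a 1% error in the minor isotope fractions perturbs M(Si) at only a few parts in 10 9 , which is the effect displayed in figure 9.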
As an additional check, the 30 Si mole fraction was measured by instrumental neutron activation analysis [33,34]. Another main outcome within the context of the determination of M(Si) is the development and application of an analytically closed-form method for the determination of the calibration (K) factors necessary for correcting measured isotope ratios for mass bias effects [35]. These K factors are accessible experimentally and are described in detail elsewhere [28]. All quantities measured and applied were treated by a respective uncertainty analysis according to the guide to the expression of uncertainty in measurement (GUM) [36].
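For readers unfamiliar with K factors, the minimal sketch below shows only how such a factor is applied once determined; the closed-form determination in [35] is more involved, and all numbers here are invented placeholders.

```python
# Minimal illustration of applying a calibration (K) factor for mass bias.
# All ratio values are invented placeholders, not measured data.
R_cert = 0.0310       # certified 30Si/29Si ratio of a reference material
R_meas_ref = 0.0297   # measured ratio of the same reference
K = R_cert / R_meas_ref

R_meas_sample = 0.1111                # measured 30Si/29Si in a sample
R_sample = K * R_meas_sample          # mass-bias-corrected ratio
print(f"K = {K:.4f}, corrected R(30Si/29Si) = {R_sample:.4f}")
```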
For the experimental procedure, the silicon crystal samples (approx. 300 mg each), either polycrystalline or monocrystalline, were carefully cleaned, etched, and weighed applying an air buoyancy correction. After dissolution in aqueous tetramethylammonium hydroxide (TMAH) and further dilution, the isotopic composition of the samples was determined using a high-resolution (HR) MC-ICP-MS (Neptune™, Thermo Fisher Scientific) with a resolution of M/ΔM = 9000. The main advantage of TMAH over sodium hydroxide (NaOH) is a strongly increased signal intensity, due to the absence of sodium, which acts as an energy sink, and to the avoidance of scattering effects in the vicinity of the ion detectors [37]. The results for charges 23 and 24 (single crystal) are given in table 2.
The outstanding reduction of the uncertainty associated with M(Si) during the past decade as a consequence of the increasing enrichment in 28 Si (expressed in x( 28 Si)) is demonstrated in figure 9.
Isotopic composition measurements at IChHPS RAS
All steps of the chemical process are related to a high risk of isotopic dilution by natural silicon from reagents and apparatus materials. This is particularly important for the gas 28 SiF 4 , which is a highly aggressive compound in the presence of moisture traces. The isotopic composition in the technological process of converting 28 SiF 4 to polycrystalline 28 Si in the former Avogadro project was controlled by a double-focusing laser ionization mass spectrometer with photographic registration of the mass spectrum. To increase the measurement precision, an internal isotopic standard was used [39]: potassium with natural isotopic abundance: 0.932 581(44) 39 K, 0.000 117(1) 40 K, and 0.067 302(44) 41 K [40]. The 28 SiF 4 samples were bubbled through KOH solution, whereas polycrystalline 28 Si was dissolved in a KOH melt in carbon glass beakers. The prepared solution was transferred onto a substrate of high-purity germanium and then dried. The concentrate was scanned layer by layer by a laser beam. The content of 28 Si was measured with respect to 39 K, and the contents of the isotopes 29 Si and 30 Si were measured with respect to 40 K. The total measurement uncertainty was limited by the uncertainty of the tabulated values for the isotopic abundances of potassium. This technique provided prompt control of the technological process of converting 28 SiF 4 into polycrystalline 28 Si.

Figure 9. History of the uncertainty associated with the molar mass of silicon highly enriched in the 28 Si isotope measured via the VE-IDMS principle. In 2016, the second crystal of the kg-2 project provided u rel (M(Si)) = 1 × 10 −9 , which is a milestone in the Avogadro project. For comparison, u rel (M(Si)) of silicon with natural isotopic composition measured using gas phase isotope ratio mass spectrometry is also displayed [38].
While increasing the enrichment in the kg-2 project, the isotopic composition of silicon in the 28 SiF 4 gas and in the crystalline 28 Si was also measured at IChHPS RAS by HR ICP-MS (Finnigan Element 2, Bremen) using the method of inverse IDMS [41]. Thus, the IChHPS RAS was able to measure both gaseous and solid samples. The measurements were carried out at medium resolution (4500) with the aim of excluding the interference of 28 SiH + with 29 Si + . For the analysis, the 28 SiF 4 samples were hydrolyzed with a 0.5% solution of hydrofluoric acid (HF). Hydrolysis was accompanied by the formation of a gel, which was later dissolved in an excess of HF. The samples of polycrystalline 28 Si were dissolved in a mixture of hydrofluoric and nitric acids.
Four series of solutions were prepared. In the solutions of the first series, with 28 Si concentrations in the range from 1 ppm to 20 ppm, only the content of 28 Si was measured. A calibration curve of intensity versus 28 Si concentration was plotted and the calibration parameters were determined graphically. In the solutions of the other series only the contents of 29 Si and 30 Si were measured, since the 28 Si signal was too intense.
The third and fourth series of solutions were used to determine the coefficients of mass discrimination and of the matrix effect, which could suppress the intensity of lines including the matrix element, i.e. silicon. Since the coefficient of the matrix effect is determined for each isotope, it also includes the coefficient of mass discrimination. The intensity of the signals of the 28 Si isotope in the second series of 28 Si solutions was calculated from the parameters of the calibration curve, accounting for the solution concentration.
Despite the fact that the measurement uncertainty of the isotopic composition on single-collector instruments is substantially higher than with MC-ICP-MS, this technique allowed prompt control of the isotopic composition during the technological process of silicon conversion. The results of the measurements carried out at the IChHPS RAS for the average isotopic composition of 28 SiF 4 in five containers of charge 24, and for the prepared polycrystalline material, are in good agreement with the PTB results for the final FZ crystal (see table 2). The data also indicate that the chemical treatment at the IChHPS RAS is at an excellent high-purity level: between input gas and output solid material, no significant dilution of the enrichment can be detected.
Chemical impurity determination at PTB
Low-temperature infrared absorption spectroscopy was performed at PTB using a continuous-flow cryostat system equipped with a multiple sample holder and a vacuum FTIR spectrometer (Bruker VERTEX 80V). Measurements could be performed with a small aperture size, leading to measurement spots smaller than 3 mm in the mid-infrared range. The limiting diameter of the sample holder is then 5 mm. In the far-infrared range the loss of intensity leads to a slight increase of the detection limit, but the method was found in preliminary experiments to be sensitive enough to measure shallow impurities below the order of 10 13 cm −3 , which still meets the requirements for the determination of the Avogadro constant. Since the sample size for IR measurements can now be reduced from 14 mm × 14 mm × 3 mm down to 7.5 mm × 7.5 mm × 3 mm, measurements with a higher spatially resolved radial distribution are possible, in order to get a more detailed picture of the radial impurity profile caused by striations during the growth process.
Substitutional carbon, interstitial oxygen as well as shallow impurities from the boron and nitrogen families were measured at temperatures below 10 K in the mid-and far-infrared according to standard procedures [42][43][44] which have been adapted to highly enriched 28 Si according to [45]. An automatic beam splitter changer as well as various radiation sources and detectors allow the aforementioned impurities to be measured within a single cooling cycle.
FTIR measurements were performed on samples at three different axial positions from the final single crystal of charge 23. Only C, O and the shallow impurities B and P were found in the silicon crystal whereas signals from other shallow impurities such as Al, As, Sb and Ga were below the detection limit. The results are given in figure 10. While the impurity concentrations of O and B remain predominantly constant along the crystal axis, a significant increase of the concentrations of C and P occurs at the back end of the silicon crystal. Multiple FZ runs have thus reduced the relatively high carbon content measured in the polycrystalline material by a factor of ~4 in the front part of the crystal. Nevertheless, the C concentration at the back end of the crystal is comparable to that of the polycrystalline material. Similar results at the front and back end were obtained at IChHPS RAS.
Conclusion
With charge 24, the 28 Si enrichment exceeds the barrier of five nines! In contrast to previous charges, no significant dilution by natural silicon during the whole technical process could be observed in the kg-2 project, even though the enrichment is much higher. In other words, a further increase of the enrichment would also be readily exploitable. The realisation of five nines in enrichment and the production of an isotopically almost pure Si single crystal of 0.999 99 enriched 28 Si is a challenge for all analytical methods used. Thus, with this material the current detection limit in molar mass determination is nearly reached. The main remaining problem for further improvement of 28 Si single crystals is a reduction of the carbon concentration by a factor of 5. This would also help to reduce 3D lattice defects (striations).
With enriched material from the former IAC project, a relative measurement uncertainty of 2 × 10 −8 for N A could be reached. For a further reduction of the relative uncertainty of N A towards 1 × 10 −8 and below, an improvement in the enrichment and in the molar mass determination seems to be no longer necessary. A further reduction of the measurement uncertainty is now mainly limited by deviations from roundness of the spheres and their influence on the diameter measurements.
The new definition of the mass unit, the kilogram, is envisaged for the year 2018. Although it would be more obvious, intrinsic and easier to explain to define the kilogram through a specified number of atomic particles, i.e. in terms of the mass of a 28 Si or a 12 C atom, the new definition of the kilogram will fix the numerical value of the Planck constant. The link between the Planck constant and the mass of a silicon atom is possible with very high accuracy via other measurements, such as that of the fine-structure constant [46]. Thus, the XRCD method using 28 Si crystals is one accepted method for the realisation of the new kilogram. This will stimulate worldwide demand for 28 Si crystal spheres as primary mass standards.
The availability of enough 28 Si material for the determination of the Avogadro constant and the realisation of the new kilogram is an existential and necessary requirement for the redefinition. The project described here, and also the already started follow-up project ('kg-3', for three more crystals and six spheres), target increasing the number of 28 Si spheres in the near future and helping to disseminate 28 Si spheres worldwide. Measuring the relevant parameters on a large number of crystals will also lead to a significant reduction in uncertainty. To reach this ambitious goal successfully at the end of 2018, the work on N A at PTB and in partner laboratories must be based on continuous interaction between physics and technology at their highest levels; it will also contribute to strengthening the edifice of the fundamental physical constants.
Also many other scientific ideas exist for applications of highly enriched silicon material; there is certainly a need for more and further production of this material. It should be noted that, besides the application for a new kilogram definition, the combination of highly pure and highly enriched 28 Si crystals with high-resolution laser-photoluminescence spectroscopy at low temperatures has opened the door for a new understanding of fundamental problems in solid state physics. The ultra-narrow linewidth of the donor-bound transitions in 28 Si has also activated new research in semiconductor quantum information processing, which seems to be more promising than that based on conventional quantum dots. It seems that the silicon era will not be over for a long time yet [47,48].
"year": 2017,
"sha1": "4be2153939abbc9c46f1bd203ca2b2fee95403ca",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1681-7575/aa7a62",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "7b8c01a4042e68f154d54effadf54780cbbc37c9",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Current and future advances in practice: a practical approach to the diagnosis and management of primary central nervous system vasculitis
Abstract
Primary CNS vasculitis (CNSV) is a rare, idiopathic autoimmune disease that, if untreated, can cause significant morbidity and mortality. It is a challenging diagnosis with multiple mimics that can be difficult to differentiate, given that the CNS is an immunologically privileged and structurally isolated space. As such, diagnosis requires comprehensive multimodal investigations; usually, a brain biopsy is required to confirm the diagnosis. Treatment of CNSV involves aggressive immunosuppression, but relapses and morbidity remain common. This expert review provides the reader with a deeper understanding of the presentations of CNSV and of the multiple parallel diagnostic pathways required to diagnose CNSV (and recognize its mimics), highlights the important knowledge gaps that exist in the disease, and outlines how we might be able to care for these patients better in the future.
• Primary CNS vasculitis (CNSV) is a rare but potentially devastating diagnosis that requires clinical suspicion and comprehensive investigation to diagnose.
• A clinician needs to assess for atherosclerotic, embolic disease, infectious, immune, neoplastic, genetic and other disease mimics before confirming a diagnosis of CNSV.
• Advances in imaging have assisted in the diagnosis of primary CNSV; however, a brain biopsy is still the standard investigation to confirm the diagnosis.
• Treatment of CNSV requires glucocorticoids and long-term systemic immunosuppression because relapses occur.
• There are multiple knowledge gaps, but growing research initiatives will allow improvements in the diagnosis, management and outcomes for patients with CNSV.
Introduction
CNS vasculitis (CNSV) is the presence of inflammation within the blood vessels of the brain, meninges and spinal cord.
CNSV can be primary, when confined to these structures (CNSV), or secondary to a systemic inflammatory process, infection or other systemic process (secondary CNSV). CNSV was first identified in the 1950s; however, there continue to be many gaps in our understanding of its pathogenesis, diagnosis and management, attributable in part to the rarity of the illness and under-recognition of the diagnosis [1].
Accumulating cases and growing research interest, however, have established several principles of practice for the disease. This review summarizes current expert approaches to primary CNSV (referred to as CNSV throughout this manuscript), key pearls and pitfalls, and highlights how we might improve the care provided for patients with this diagnosis.
Taxonomy and definitions in CNSV
Criteria used in the diagnosis of CNSV
The commonly used criteria for the diagnosis of CNSV require that the patient has a history or clinical findings of an acquired neurologic deficit that remains unexplained after a thorough initial basic evaluation, that there are classic angiographic or histopathological features of angiitis within the CNS, and that there is no evidence of systemic vasculitis or any other condition to which the angiographic or pathological features could be secondary [1]. Although these criteria have never been endorsed formally for either diagnosis or classification, they continue to be used in research and clinical practice and represent a robust diagnostic philosophy. It has been increasingly recognized, however, that imaging techniques in CNSV have limitations in differentiating CNS vasculopathies and that CNSV can only be considered definite when appropriate findings are seen on brain biopsy. It has thus been proposed that changes seen only on imaging should be considered probable and not definite CNSV [2]. Research is ongoing to better understand how advances in imaging should be incorporated into the diagnosis and management of CNSV [3,4].
Disease subtypes
Initial efforts to subtype CNSV focused on the mode of diagnosis and classified CNSV into either histological or radiographic subtypes. Over time, these have evolved to focus on the size of the vessels affected: small vessel CNSV (svCNSV) and proximal, medium-to-large vessel CNSV (lvCNSV), which includes the intracranial carotid, intracranial vertebral, basilar and proximal branches of the cerebral arteries [5,6]. This change in taxonomy reflects the fact that svCNSV often has non-vascular radiographic changes, whereas radiographically conspicuous lvCNSV will, less frequently, have changes on brain biopsy [6][7][8][9][10]. Both imaging and histology are important for the diagnosis of CNSV and for distinguishing these subtypes. Given that data are limited, it is currently unclear whether these subtypes represent variant presentations of CNSV or possibly different disease processes, although there is a suggestion that patients with lvCNSV are at higher risk of relapse [11][12][13].
Clinical presentations
CNSV typically presents in individuals aged 40-60 years and appears to affect both sexes equally [14][15][16]. There are often weeks to months of prodromal symptoms, including headache, cognitive impairment, personality change and/or constitutional symptoms, followed by the onset of acute neurological changes, including strokes, encephalopathy, seizures and/or other deficits (Table 1). The most common presentation is insidious headaches and strokes, with a negative diagnostic work-up for more common secondary causes. Given that the prodromal phase also represents a presentation of myriad other conditions more common than CNSV, the diagnosis is typically not considered until there have been sufficient events to trigger an investigation for atypical disease processes or imaging suggests that a CNS vasculopathy is present. Although a post-morbid investigatory strategy is not ideal, there are currently no data that allow clinicians to stratify patients presenting with neurological symptoms according to their probability of CNSV.
Less frequent clinical presentations of CNSV include a more indolent and slowly progressive presentation of cognitive and/or neurological changes; presentations of inflammatory mass-like lesions with symptoms related to mass effect; and spinal cord lesions presenting with spinal syndromes [17,18]. Case reports of CNSV presenting with new-onset refractory status epilepticus and various cranial neuropathies have also been published [18,19].
An approach to the diagnosis of CNSV
Initial diagnoses of CNSV were limited to autopsies and brain biopsies performed in patients with episodic, progressive neurological deficits [20,21]. Modern diagnoses of CNSV can be made much earlier in the disease course with the introduction of new imaging and biopsy techniques; however, it remains a challenging diagnosis to confirm. Four diagnostic pathways should be pursued in parallel when considering a possible diagnosis of CNSV: (i) demonstrate that there are radiological changes consistent with a CNS vasculopathy with vascular and/or parenchymal features suggestive of a vasculitic process; (ii) demonstrate that there is a neuroinflammatory process; (iii) rule out mimics of CNSV; and (iv) consider a brain biopsy.
Demonstrate that there is a CNS vasculopathy with features suggestive of an inflammatory aetiology
Owing to the non-specific symptoms of CNSV, the first suggestion that this is the underlying process often comes when neuroimaging demonstrates a CNS vasculopathy with features suggesting an inflammatory aetiology, based on the distribution of vessels affected, changes in the vessel wall (VW) and lumen, and parenchymal abnormalities. Assessment of the vessel lumen is performed via CT angiography (CTA), magnetic resonance angiography (MRA) and/or the more invasive digital subtraction angiography (DSA). These modalities are useful first-line evaluations but provide information only about the vessel lumen [22].
Digital subtraction angiography has the highest resolution of all modalities and is the most sensitive for small to medium vessels [23]. CTA offers the highest non-invasive resolution of larger vessels for luminal narrowing or occlusion, but MRI with MRA is the most sensitive non-invasive modality overall, because it captures both vascular and parenchymal changes [24]. In svCNSV, MRI plus MRA may not sufficiently assess the small, distal vessels that are affected by the disease process. In these cases, DSA should be considered to evaluate these vessels, although a negative DSA will not rule out CNSV, because these vessels may still be too small to resolve, and a biopsy may be needed to determine the final diagnosis [24]. Secondary parenchymal changes seen on MRI supportive of a CNS vasculopathy can include infarcts of varying ages in multiple vascular territories, meningeal enhancement, hyperintense foci on T2 and fluid-attenuated inversion recovery (FLAIR) sequences, microhaemorrhage attributable to small vessel vasculitis and, rarely, tumour-like lesions [17,24]. Meningeal and parenchymal changes are seen more frequently in individuals with svCNSV, and ischaemic lesions in lvCNSV [5,25]. It is important to note, however, that these findings are non-specific and can be interpreted only in conjunction with clinical evaluation and vascular imaging.
MR vessel wall imaging (MR-VWI) uses high-resolution MRI machines, contrast and advanced signal processing that optimizes contrast-to-noise ratios to allow resolution of the vessel wall, which has one-tenth the diameter of the lumen [26,27]. MR-VWI has quickly become integral to the evaluation of intracerebral vasculopathy, but it is not available in many medical centres. The characteristics of the vessel wall changes and the pattern of vessels affected may also offer clues concerning the aetiology of the changes [23]. The most common pattern of vascular change seen using MR-VWI in inflammatory CNS vasculopathies is scattered foci of smooth, homogeneous circumferential involvement (Fig. 1) [26,28]. Where there is suspicion of lvCNSV (or other large vessel vasculopathies), one must also be aware of increased enhancement from the vasa vasorum at sites where vessels penetrate the dura (the V3-V4 segment of the vertebral artery and the cavernous/supraclinoid internal carotid artery), traversing veins mimicking wall enhancement, and slow-flow-related artefacts, all of which are commonly reported erroneously as inflammatory changes [26,27]. MR-VWI enhancement in intracranial atherosclerotic disease, a common mimic, is usually eccentric and irregular (Fig. 2). There can be overlap in appearance between the two, however, as intracranial atherosclerotic disease can be circumferential [28], and the two conditions can also co-exist.
MR-VWI is a promising modality that helps to reveal the changes of CNSV, but it complements and does not replace other imaging techniques, clinical testing or serological/cerebrospinal fluid (CSF) analysis. As a burgeoning imaging modality, it is not universally available, and non-VWI MRI, CTA, MRA and DSA can still provide valuable diagnostic information. Early in the disease course, changes on MR-VWI may be subtle, and parenchymal abnormalities, seen in virtually all cases of CNSV, may be the only radiographic indicator of disease. Given that parenchymal MRI changes are more sensitive for CNSV, normal MRI parenchymal imaging (rather than normal MR-VWI), alongside normal CSF analysis, has a high negative predictive value for a diagnosis of CNSV [29].
Demonstrate that there is a neuroinflammatory process
Demonstrating a neuroinflammatory state has historically been considered a hallmark of the diagnosis of CNSV. CRP and ESR are typically normal and, if elevated, should prompt evaluation for systemic inflammatory, infectious or thrombotic processes [16]. Evaluation of CSF is more sensitive and specific for a neuroinflammatory process; changes are seen in 75-81% of all patients with CNSV, and a normal CSF should be considered highly suggestive of an alternative diagnosis [16,18,30]. Limited data suggest that abnormalities might be
seen more frequently in svCNSV (83%) than in lvCNSV (55%); many of these patients did not have a biopsy to confirm the diagnosis, and mimics might have been included in the lvCNSV cohort [16]. The most common abnormalities seen are a mild increase in CSF protein and/or pleocytosis. Although less sensitive than elevated protein, pleocytosis might be more specific in discriminating CNSV from other diseases, although patients with pleocytosis should first be assessed for a possible CNS infectious process [31]. Non-matching CSF oligoclonal bands and/or a high IgG index are found in 25% of CNSV and, in the absence of other diagnoses, can strengthen the diagnosis [32,33].
Rule out mimics of CNSV
Systemic diseases and mimics typically fall into five differential categories: immune-mediated inflammatory diseases, malignancy, non-inflammatory vasculopathies, infectious vasculitis and other (Table 2). The most common mimics of CNSV are premature atherosclerosis and non-inflammatory cerebral vasculopathies. Specific mimics that frequently arise in the differential diagnosis of CNSV, and how to differentiate them, are presented in Table 3.
Clinical evaluation for mimics of CNSV includes a comprehensive history and examination. Patient demographics, in addition to the pace and progression of neurological deficits, can often provide helpful indications of CNSV mimics (e.g. the presence of a thunderclap headache suggesting reversible cerebral vasoconstriction syndrome compared with the insidious headaches typical of CNSV). Intracranial manifestations of systemic inflammatory diseases rarely occur in isolation, and findings suggestive of an inflammatory process elsewhere in the patient should direct the clinician to consider these diagnoses. These can include small, medium and large vessel vasculitis, non-vasculitic systemic inflammatory diseases (e.g. SLE) and sarcoidosis. Equally important are travel, exposure and infectious histories; even remote exposures to tuberculosis or HIV can demonstrate reactivation with vasculitis [34]. Coronavirus disease 2019 (COVID-19) infection should be ruled out, given its neurological complications, which can mimic CNSV. Drug exposure can also readily lead to vasculitis and can easily be overlooked. Finally, a family history of atherosclerotic disease or of similar symptoms can indicate small vessel genetic diseases, including cerebral autosomal dominant or recessive arteriopathy with subcortical infarcts and leucoencephalopathy (CADASIL or CARASIL) and others [35,36].
Biochemical testing should include serology for autoimmune diseases, including autoimmune/paraneoplastic encephalopathies, bacterial cultures, viral serologies and testing (including HBV, HCV, HIV and COVID-19 in all patients, with other serologies driven by presentation and local epidemiology), quantitative immunoglobulins and flow cytometry. Appropriate genetic testing should be considered in patients where there is a suspicious family history of undiagnosed neurological changes or in cases that are resistant to treatment. CSF should undergo evaluation of protein, glucose, immunoglobulins (for evidence of both inflammation and paraneoplastic autoantibodies), oligoclonal bands (in both CSF and serum), IgG index, bacterial/viral testing (including varicella-zoster virus, HSV, syphilis, Lyme disease and tuberculosis testing in all patients, with consideration of other conditions based on local epidemiology), flow cytometry and cytology (performed on serial large-volume samples in the case of low CSF cellularity). Advanced pathogen genetic sequencing techniques, where available, should be considered if there is persisting uncertainty concerning infection [37].
Patients with evidence of ischaemic stroke on imaging should undergo evaluation for thromboembolic processes, including echocardiography with a bubble study, interrogation for arrhythmias, and extracranial vessel imaging for inflammatory and/or atherosclerotic changes. There should be a low threshold for repeat imaging of individuals with acute onset of symptoms or persisting diagnostic uncertainty; resolving stenoses can be diagnostic of reversible cerebral vasoconstriction syndrome, and a lack of contrast enhancement on MR-VWI is also suggestive of reversible cerebral vasoconstriction syndrome rather than CNSV [38][39][40]. In young females being considered for CNSV who present with encephalopathy, vision changes, hearing changes and/or imaging changes of the corpus callosum, ocular fluorescein angiography to assess for Susac syndrome should be considered (clinical vignette 1, see Supplementary Material, available at Rheumatology Advances in Practice online) [41]. PET has also been used in some institutions to assess for systemic inflammatory or paraneoplastic processes; however, its utility has yet to be determined. Electroencephalograms can be abnormal but are non-diagnostic; both encephalopathic changes and seizures might be attributable to CNSV or other processes [1,16].
Consider a brain biopsy
A brain biopsy is often required to confirm the diagnosis of CNSV: CNSV may not demonstrate radiographic evidence of vasculitis, many mimics may only be differentiated histologically (including intravascular lymphoma; see clinical vignette 2, available at Rheumatology Advances in Practice online), and evidence of a neuroinflammatory process may only be evident histopathologically. Clinicians should evaluate patients under the presumption that a brain biopsy is needed to confirm CNSV and take reasonable measures to obtain one; however, there are occasions when it might be infeasible, such as an inaccessible lesion location or a lack of appropriate procedural facilities [2]. Biopsy is 75% sensitive for the diagnosis of CNSV; this may be attributable to disease subtype and/or the presence of skip lesions in the parenchyma [42]. Yield can be maximized by ensuring that the biopsy includes cortical, subcortical and leptomeningeal tissue and by targeting areas with either imaging or clinical evidence of disease; areas of parenchyma with abnormalities on MR-VWI are ideal and can increase sensitivity to 89% [8]. When there is no targetable area or there is excess procedural risk, the temporal or non-dominant frontal lobes can be targeted, but with only 50% sensitivity [43]. This should not dissuade clinicians; a negative biopsy will also lower the probability of an underlying diagnosis of malignancy or other mimics.
Classic biopsy findings in CNSV include parenchymal and/or leptomeningeal vasculitis with transmural mononuclear infiltrates and non-necrotizing granulomas (granulomatous vasculitis), seen in 60% of cases. Twenty percent of cases each will show a lymphocytic infiltrate at least two cells thick (lymphocytic vasculitis) or a limited lymphocytic infiltrate with fibrinoid necrosis (necrotizing vasculitis) [44,45]. A small number of older patients with typically granulomatous vasculitis are also found to have significant deposition of amyloid-b fibrils in the media and adventitia; this is attributable to amyloid-b-related angiitis [46,47]. These different histological patterns demonstrate the heterogeneity of the disease and might represent distinct pathotypes; granulomatous and/or necrotizing vasculitis might connote more severe disease, and amyloid-b-related angiitis might be associated with worse prognosis [48,49].
Treatment of CNSV
There are no randomized trials of treatments for CNSV, nor have any consensus guidelines for treatment been published to date; therapy has been inspired by treatment for ANCA-associated vasculitis (AAV) and for other neuroinflammatory diseases [12,15,50]. Given that multiple treatment options have established efficacy in AAV, there is also heterogeneity in how patients with CNSV are treated, driven by pathological findings, drug availability and severity/phenotype [51]. Initial therapy is guided by disease severity; severe disease is defined by larger volumes of ischaemia and/or the presence of encephalopathy, seizures or organ/function-threatening neurological deficits on presentation. Patients with a large volume of disease, granulomatous/necrotizing angiitis and/or amyloid-b-related angiitis on biopsy are considered to be at the highest risk of poor outcomes [12,49,52].
Immunosuppression in CNSV
Initial therapy for CNSV is high-dose glucocorticoids to rapidly arrest the disease process, which should be started immediately in those with confirmed disease and readily considered where the disease is probable (e.g. abnormal CSF and radiographic findings without a biopsy). For those with severe disease, therapy is typically administered as pulse glucocorticoids (500-1000 mg of i.v. methylprednisolone daily for 3-5 days) followed by high-dose oral glucocorticoids (1 mg/kg, typically 40-60 mg daily). In non-severe cases, oral glucocorticoids can be used without i.v. pulses. The initial dose is administered for 4 weeks before being tapered slowly. There are currently no data demonstrating whether glucocorticoids can be tapered safely more rapidly, as in AAV [53]. Patients should be co-administered a gastric protection agent and sufficient calcium and vitamin D, and there should be an assessment for glucocorticoid-induced osteoporosis.

Table 3. Selected mimics of CNSV and their differentiating features.

Infectious vasculitis (commonly varicella zoster, herpes simplex, tuberculosis) [34]. Demographics: more common in the elderly or immunosuppressed. Presentation: variable, including subacute headache, encephalopathy and neurological deficits (often cranial nerve palsy). Imaging/histology: can be identical to CNSV on MRI/MRA/CTA; may demonstrate parenchymal lesions and white matter changes incongruent with age; tissue culture of biopsy material may demonstrate the causative organism.

Malignancy. Imaging/histology: typically central tumour-like lesions with avid enhancement and a low apparent diffusion coefficient, with a predilection for the periventricular region; meningeal enhancement, vessel wall enhancement and non-specific white matter changes; histology demonstrates malignancy.

CNSV secondary to immune-mediated inflammatory diseases. Demographics: based on the underlying immune-mediated inflammatory disease. Presentation: skin changes such as livedo reticularis or photosensitivity, oral ulcers, RP, sicca symptoms. Imaging/histology: can be identical to CNSV; may demonstrate parenchymal lesions and white matter changes incongruent with age.

CNS granulomatous disease associated with common variable immunodeficiency (CVID) [69]. Demographics: variable age, more common in females. Presentation: subacute onset of headaches, neurological deficits, seizures; known history of CVID and infections. Serology/CSF: serology non-specific; may see elevated angiotensin-converting enzyme, vitamin D, calcium, cytopenias; CSF may show low glucose, a mild increase in CSF protein, pleocytosis and/or increased CSF angiotensin-converting enzyme and soluble IL2R. Imaging/histology: parenchymal changes, mass lesions and vessel wall enhancement may be seen; nodular enhancement of the meninges and cranial nerves may be seen; pulmonary changes and lymphadenopathy are common; histology demonstrates non-caseating granulomas.

Autoimmune/paraneoplastic encephalitis. Serology/CSF: serum autoantibodies may be seen, but have a variable correlation with CNS disease; CSF autoantibodies are strongly suggestive of disease. Imaging/histology: parenchymal changes, often in a stereotyped distribution commonly involving the mesial temporal lobes; minimal evidence of vasculitis; imaging may demonstrate a primary malignancy elsewhere in the body.
A second disease-modifying agent, both to provide glucocorticoid-sparing effects and to induce more durable remission, should also be prescribed, stratified by disease severity and patient profile. CYC is the default agent in patients with severe disease; oral formulations continue to be used owing to potentially increased efficacy [12,50,54,55]. In those with refractory disease, case reports have demonstrated that rituximab might be effective, although owing to limited access and data, it is not the preferred first-line agent [56,57]. Patients undergoing induction therapy for severe disease are also typically provided with Pneumocystis jirovecii prophylaxis [58]. After an induction phase of 6 months, patients are typically switched to an oral immunosuppressant; AZA and mycophenolic acid are preferred. Non-severe disease can be treated with mycophenolic acid, AZA or CYC; MTX is not favoured owing to poor CNS penetrance [30,59,60]. IVIG has been used in other vasculitides with conflicting results; given that CNSV predisposes to ischaemic events and there are concerns around the potential thromboembolic complications of IVIG, it is generally avoided [61][62][63].
Additional therapeutic considerations
Given the neurological deficits associated with CNSV, in addition to its propensity to present with ischaemic stroke, appropriate stroke care is the other important cornerstone of therapy in addition to pharmacotherapy. This includes assessments by speech pathology, physiotherapy, occupational therapy and social work, in addition to the provision of supportive devices and modifications to diet, mobility and environment. There is no evidence to support antiplatelet or anticoagulation agents in CNSV; they should be added only if there are other clear indications (e.g. co-morbid atrial fibrillation or secondary prevention of atheroembolic stroke); antiplatelets for secondary prevention of further ischaemic events can also be prescribed. Likewise, seizures should be managed with appropriate anticonvulsants, and neuropsychiatric disturbance should be managed with appropriate psychotropic agents directed by ongoing symptoms.
Response to therapy
Patients should be monitored closely during the induction period for changes, with the response to therapy of CNSV depending on its manifestations. Where there are non-ischaemic neurological deficits, such as headaches and seizures, improvement is typically seen within days to weeks of treatment; patients with strokes typically stop having new events within the first 6-8 weeks of therapy. If there are new clinical deficits, the presence of recrudescence and/or ischaemic events attributable to post-inflammatory stenotic changes should be ruled out before additional glucocorticoid therapy is considered. Imaging is typically repeated at 8-12 weeks after starting induction therapy to demonstrate a response to therapy and again 3-6 months from induction to demonstrate successful remission. Ongoing follow-up depends on disease severity, response and the local availability of imaging; however, imaging should be repeated every 3-6 months, with more frequent initial clinical follow-up.
MRA/MRI is the preferred assessment modality because it can capture the vessel lumen, VW and parenchyma. Approximately 50-75% of patients assessed using MR-VWI will show changes in vessel wall enhancement concordant with disease activity; as such, it should again be considered part of a suite of monitoring investigations if this technique is available at the treatment centre [64]. DSA and CTA might have limited utility because they are invasive, and lumen diameter does not clearly demonstrate treatment response, even with resolution of active inflammation. As such, where MRA/MR-VWI is not available to monitor the response to treatment, CTA and DSA will demonstrate a response through the absence of new lesions but cannot assess whether there has been a VW response. Any new areas of stenosis or wall enhancement should be interrogated carefully for possible disease relapse. The optimal method to measure the response to treatment is unclear; interpretation of the clinical response will be coloured by permanent neurological damage, and CSF and radiological investigations have logistic limitations.
Relapse
It can be difficult to differentiate relapse of CNSV from the progression of damage or from new insults from atherosclerosis, owing to the similarity of presentation. As such, when there is concern for relapse, a comprehensive, multimodal evaluation, including CSF and radiographic investigations, should be performed to demonstrate that changes are attributable to active vasculitis and to no other cause, including infection and/or thromboembolic events. Regardless, relapse is common in CNSV and occurred in 58 of 191 patients (30%) in one cohort followed over a median of 19 months (range 0-28.1 years) [12]. Treatment of relapse should follow initial treatment, with consideration of pulse or oral glucocorticoids for severe and non-severe relapses, respectively, and either restarting induction therapy or switching therapeutic agents.
Outcomes
In addition to the risk of relapse, functional decline owing to damage from CNSV is common, with 70.4% of patients demonstrating some degree of functional impairment attributable to their disease [55]. Although earlier diagnosis and aggressive therapy have improved outcomes, mortality from CNSV is seen in 11-28% over long-term follow-up, typically in the first year [12,55]. Although not well reported in CNSV, it is likely that there is also significant morbidity associated with treatment, including cardiovascular risk, infections and osteoporosis [65]. The risks of disease and therapy must be balanced and, as such, treatment durations for CNSV are unclear. Currently, there are no data to suggest an appropriate duration of immunosuppression in CNSV, and there is significant variation in practice in terms of the duration of therapy, ranging from 5 years to lifelong immunosuppression based on the individual patient profile. It is possible that patients with non-severe, monophasic disease might be treated safely with shorter durations of therapy; those who relapse typically have a prolonged treatment duration.
Future directions
This review synthesizes expert knowledge concerning CNSV and also demonstrates that there are significant gaps across diagnosis, treatment, monitoring and outcomes. The foremost of these to address are as follows: establishing an effective diagnostic pathway for CNSV; understanding the strengths and limitations of the proposed diagnostic criteria; understanding whether subtypes represent disease endotypes (potentially based on clinical/vessel phenotype, disease severity and/or histologic findings); establishing consensus recommendations for treatment of CNSV; determining optimal methods of assessing responses to therapy; and understanding predictors of relapse and outcomes to assist in clinical decision-making.
The rarity and poor awareness of CNSV have historically been the greatest barriers to carrying out clinical research in this disease space. The evolution and proliferation of imaging, however, have significantly assisted in disease recognition, diagnosis and monitoring of CNSV, and increased interest has led to wide reporting concerning the disease, although single-centre cohorts will be subject to selection bias, clinician preferences and the availability of local investigations and treatment [16]. Other rare diseases, most analogously AAV, have demonstrated that through collective interest and collaboration, large prospective international cohorts of individuals can be aggregated, and diseases previously considered rare are now more easily studied. Indeed, during the last 20 years, international research consortiums, including the Vasculitis Clinical Research Consortium (VCRC) and the European Vasculitis Society, have realized marked advances in caring for patients with vasculitides, including AAV and GCA.
Similar steps are now being taken for primary CNSV. An international prospective cohort of patients with CNSV is underway to collect a wide variety of data, including: clinical and radiological findings; longitudinal disease and therapeutic outcomes; and biological specimens for analysis of possible new biomarkers of disease. Through this cohort, disease definitions can be refined and a comprehensive theory of disease can be constructed; early priorities are to develop consensus approaches to diagnosis, to establish unbiased phenotypes and presentations of disease, and to evolve collaborations to develop a platform for further research. This platform can then be used to explore and validate new biomarkers of disease, design clinical trials of new therapies or better establish the efficacy of existing agents, and improve our understanding of disease outcomes to inform clinical decision-making.
Figure 1. Radiographic findings of primary CNS vasculitis. (A, B) Pre- (A) and post-gadolinium (B) T1 SPACE high-resolution MR-VWI shows circumferential vessel wall enhancement (B, arrow) in a patient with suspected CNSV. (C) 7 T MR-VWI following gadolinium administration shows diffuse pericallosal artery branch wall enhancement in biopsy-proven CNSV (arrows). (D) FLAIR MRI showing evolving signal changes from parenchymal insults of varying ages, including subacute (arrowhead) and acute (arrow). (E) Diffusion-weighted MRI in the same patient as (D) shows that the paramedian frontal insult is acute. (F) Digital subtraction angiography in the same patient showing subtle areas of distal anterior cerebral artery territory beaded luminal irregularity. CNSV: CNS vasculitis; FLAIR: fluid-attenuated inversion recovery; VW: vessel wall
Figure 2. Mimics of CNS vasculitis seen with magnetic resonance vessel wall imaging (MR-VWI). (A, B) Before (A) and after (B) T1 SPACE high-resolution MR-VWI showing an area of eccentric M2 wall thickening and enhancement (arrow) related to non-inflammatory atherosclerotic changes. (C) Post-gadolinium coronal T1 SPACE MR-VWI showing vasa vasorum enhancement of the proximal V4 segments in atherosclerotic disease (often mistaken as inflammation). (D-G) Images representing distal M1 thrombosis with a finding of circumferential enhancement (G) secondary to thrombectomy: area of apparent filling defect in the right cavernous ICA segment (D, arrow) owing to flow-related artefact, with (E) showing patency on time-of-flight (TOF) magnetic resonance angiography; CTA demonstrating acute distal right M1 occlusion (F, arrow); and (G) showing MR-VWI following mechanical thrombectomy, with circumferential enhancement thought to be related to mechanical manipulation and disruption of the endothelium. (H, I) Images from a patient with reversible cerebral vasoconstriction syndrome showing multifocal areas of luminal narrowing (H) of the proximal M2 on CTA (arrows) and absence of vessel wall enhancement (I) in the same region on MR-VWI, suggesting a diagnosis of reversible cerebral vasoconstriction syndrome. CNSV: CNS vasculitis; CTA: CT angiography
Table 1. Frequency of presenting features in primary CNS vasculitis, stratified as either pathologically or radiographically diagnosed central nervous system vasculitis
Table 2. Differential diagnoses of primary central nervous system vasculitis
Table 3. Key mimics of primary central nervous system vasculitis | 2023-11-26T16:08:07.137Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "5c3d1026950fc8237e21bb7f023e0155a6767ab8",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/rheumap/article-pdf/7/3/rkad080/53710664/rkad080.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "488fd53ed5cc51861b31ef9ff7aa09def367f3c4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
38751130 | pes2o/s2orc | v3-fos-license | Allosteric-activation mechanism of BK channel gating ring triggered by calcium ions
Calcium ions bind at the gating ring, which triggers the gating of BK channels. However, the allosteric mechanism by which Ca2+ regulates the gating of BK channels remains obscure. Here, we applied Molecular Dynamics (MD) and Targeted MD to the integrated gating ring of BK channels, and achieved the transition from the closed state to a half-open state. Our data show that the distances between the diagonal subunits increase from 41.0 Å at the closed state to 45.7 Å or 46.4 Å at a half-open state. The rotatory motion and flower-opening-like motion of the gating ring are thought to ultimately pull the bundle-crossing gate open. Compared with the 'Ca2+ bowl' at RCK2, the RCK1 Ca2+ sites make more contribution to opening the channel. The allosteric motions of the gating ring are regulated by three groups of interactions. The first, weakened, group is thought to stabilize the closed state; the second, strengthened, group is thought to stabilize the open state; the third group is thought to coordinate the motion of the AC region forming the CTD pore, which exquisitely regulates the conformational changes during the opening of BK channels by Ca2+.
Introduction
Large conductance, Ca2+-activated potassium (BK) channels are one type of calcium-activated potassium channels. BK channels are known as Big K+ channels owing to their large single-channel conductance of ~100-300 pS [1]. BK channels are widely expressed throughout the animal kingdom and play important roles in many physiological processes, such as neurotransmitter release [2], endocrine secretion [3], and regulation of vascular tone [4]. Loss of function of BK channels can lead to epilepsy [5], hypertension [6], asthma [7], tumor progression [8], and obesity [9].
Similar to voltage-gated K+ channels, BK channels are a tetramer of the pore-forming subunits, which possess a voltage-sensor domain (S1-S4) that senses membrane potential changes, a pore-gate domain (S5-S6) that opens and closes to control ion selectivity and K+ permeation, and a large cytosolic tail domain (CTD) that forms a gating ring serving as the primary ligand sensor, which is sensitive to intracellular chemical ligands such as Ca2+ [10][11][12] and others [13][14][15]. The main structural components of the gating ring are two regulators of K+ conductance (RCK) domains (RCK1 and RCK2) that are connected by a ~100-amino acid linker [16]. Each RCK domain can be further divided into three subdomains: the Rossmann-fold subdomain (βA-βF), which contains an AC region (βA, αA, αB and βB) that forms the CTD pore; the intermediate helix-crossover (αF-turn-αG); and the C-terminal subdomain (αH-C-terminus) [17]. Electrophysiological and mutagenesis experiments have identified two high-affinity Ca2+ binding sites for each subunit: one is located in the RCK1 domain, including the residues D367, R514 and E535 [18], and is called the 'RCK1 site'; the other, in the C-terminus of the RCK2 domain, contains a string of Asp residues known as the 'Ca2+ bowl' [19]. BK channels can also gain Ca2+ sensitivity through their association with Ca2+-binding calmodulin proteins [20]. Ca2+ binding stabilizes the conducting state of the channel, which shows that Ca2+ induces conformational rearrangements of the gating ring that open the transmembrane and CTD pores.
Recently, three crystal structures of the eukaryotic CTD of BK channels (PDB ID: 3MT5, 3NAF and 3U6N) have been solved, each including both RCK1 and RCK2 [17,21,22]. The X-ray structure of the human BK Ca2+ gating ring (PDB: 3MT5) was solved first, and its tetrameric assembly was deduced from the structure of a Na+-activated homolog [21]. The crystal structure of the entire cytoplasmic region of the human BK channel in a Ca2+-free state (PDB: 3NAF) reveals four intracellular subunits and the linker connecting S6 and the gating ring, which can generate a structural model for the full BK channel [17]. The crystal structure of the zebrafish BK channel in the Ca2+-bound state with eight subunits (PDB: 3U6N) shows that one layer of the gating ring opens upon binding Ca2+. These crystal structures provide a molecular basis for homology modeling and for studying the conformational transition pathway by which Ca2+ opens BK channels.
With the collective efforts of the BK channels field, the understanding of the molecular mechanisms of BK channel function has been greatly advanced over the past three decades [23], but the molecular mechanism of the intracellular Ca2+-induced conformational changes of the BK channel gating ring remains unclear. We should therefore first address the following questions: 1) Which is more important for widening the gating ring aperture, RCK1 or RCK2? 2) How are the interactions transmitted during BK channel gating?
Here, we combined Molecular Dynamics (MD) with Targeted MD on the gating ring of BK channels, and achieved the transition from the closed state to a half-open state. Our data indicate that the RCK1 Ca2+ sites contribute more to opening the channel than the RCK2 domains do. We identified a series of interaction networks that regulate the conformational changes during the opening of BK channels by Ca2+.
Homology modeling
The structures of the gating ring were taken from homology models of the CTD of the BK channels based on the closed- and open-state models of the crystal structures of the gating ring (PDB ID: 3NAF and 3U6N) [17,22]. Crystal structures were retrieved from the protein data bank (www.rcsb.org). The target sequences were taken from the protein data bank (PDB ID: 3U6N). Homology models of the BK channel gating ring were all based on chain A of the template structures using the SWISS-MODEL server [24][25][26]. These models were evaluated with GMQE [27,28]. GMQE (Global Model Quality Estimation) is a quality estimate that combines properties from the target-template alignment. The resulting GMQE score is expressed as a number between 0 and 1, reflecting the expected accuracy of a model built with that alignment and template; higher numbers indicate higher reliability. The members of the BK channel family show a high degree of sequence similarity. Due to the high sequence identity (about 96.89% and 95.68%), the GMQE scores are 0.90 and 0.71, respectively [29]. Compared with its templates, the main-chain geometry of the models was unchanged. We applied the transformation matrices from the PDB files (3NAF and 3U6N) to generate the complete tetramers of the closed and open states of the gating ring, respectively [17,22].
Conventional molecular dynamics
Molecular dynamics (MD) simulations with explicit solvent and ions were carried out on two separate systems (closed and open states) of the gating ring of the BK channel in ~150 mM KCl [30]. The K+ and Cl− ions were positioned randomly in a rectangular box of water with a size of 183 × 183 × 86 Å3 (closed state) and 162 × 116 × 175 Å3 (open state), respectively. The TIP3P water model was used [31].
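For orientation, the number of K+/Cl− ion pairs implied by a 150 mM concentration in the closed-state box can be estimated directly from the box volume; the back-of-the-envelope Python sketch below ignores the volume excluded by the protein and any neutralizing counterions, so it only approximates the ion count actually used.

```python
# Rough estimate of the K+/Cl- ion pairs needed for ~150 mM KCl
# in the closed-state water box (183 x 183 x 86 A^3).
AVOGADRO = 6.022e23          # particles per mole

box_volume_A3 = 183 * 183 * 86
box_volume_L = box_volume_A3 * 1e-27   # 1 A^3 = 1e-27 L

concentration_M = 0.150                # 150 mM KCl
n_pairs = concentration_M * AVOGADRO * box_volume_L
print(f"~{n_pairs:.0f} K+/Cl- pairs")  # roughly 260 pairs
```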
The minimization and molecular dynamics simulations were carried out using the NAMD2 program (http://www.ks.uiuc.edu/Research/namd/) [32] and the CHARMM 27 force field [33]. During the production run, a 2.0 kcal/mol harmonic restraint on the Cα atoms of the gating ring was maintained for 5 ns. The system was then allowed to relax freely for the last 10-20 ns until reaching equilibrium. Langevin dynamics and the Langevin piston were used to maintain the temperature at 310 K and to control the pressure, respectively. The van der Waals interactions were modeled using the Lennard-Jones potential. Short-range non-bonded interactions were truncated at 12 Å. Long-range electrostatics was calculated using the particle mesh Ewald (PME) algorithm with a grid spacing of 1 Å [34]. The calculations were performed with a time step of 2 fs. Simulation analysis and structural diagrams were produced with VMD (Visual Molecular Dynamics) [35].

Targeted Molecular Dynamics

Targeted Molecular Dynamics (TMD) [36] has been used in studies of allostery and a variety of transitions in large proteins. In TMD, a subset of atoms (target atoms) is guided toward a target structure by means of steering forces, which gradually steer the initial structure toward the target structure and are obtained through the gradient of a potential calculated as a function of RMSD, defined as Eq (1):

$$U_{\mathrm{TMD}}(t) = \frac{1}{2}\,\frac{k}{N}\left[\mathrm{RMSD}(t) - \mathrm{RMSD}^{*}(t)\right]^{2} \qquad (1)$$

where k is the force constant and N is the number of targeted atoms. The number of atoms used to calculate the RMSD from the target structure was set to be the same as the number of restrained atoms. At each time step, the RMSD(t) between the current coordinates and the target structure was computed (after first superimposing the target structure and the initial coordinates). RMSD*(t) evolves linearly from the initial RMSD at the first TMD step to the final RMSD at the last TMD step. RMSD*(t) tending to zero is the criterion to end the TMD.
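To make the steering protocol concrete, the following Python sketch evaluates the TMD potential of Eq (1) for a single frame, after a Kabsch best-fit superposition onto the target; the force constant and the linear RMSD*(t) schedule mirror the values described in the text, but the function and variable names are illustrative only and are not part of NAMD itself.

```python
import numpy as np

def superpose(mobile, ref):
    """Best-fit superposition of `mobile` onto `ref` (both (N, 3) arrays)
    using the Kabsch algorithm."""
    mc, rc = mobile - mobile.mean(0), ref - ref.mean(0)
    U, _, Vt = np.linalg.svd(mc.T @ rc)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return mc @ R + ref.mean(0)

def rmsd(a, b):
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def tmd_potential(coords, target, rmsd_star, k=500.0):
    """Eq (1): U = (k / 2N) * (RMSD(t) - RMSD*(t))**2.
    k in kcal/mol/A^2; N is the number of targeted atoms."""
    n = coords.shape[0]
    r = rmsd(superpose(coords, target), target)
    return 0.5 * (k / n) * (r - rmsd_star) ** 2

def rmsd_star_schedule(step, n_steps, rmsd0):
    """RMSD*(t) decreasing linearly from the initial RMSD to zero."""
    return rmsd0 * (1.0 - step / n_steps)
```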
Principal component analysis
Principal component analysis (PCA) was carried out using the Normal Mode Wizard (NMWiz) (http://prody.csb.pitt.edu/nmwiz/) applied to a trajectory from the TMD simulations [37]. The Normal Mode Wizard (NMWiz) is a VMD plugin [38,39] for the depiction, animation, and comparative analysis of normal modes. Normal modes may come from principal component analysis of structural ensembles, essential dynamics analysis of simulation trajectories, or normal mode analysis of protein structures. In addition, NMWiz can be used to depict any vector that describes a molecular motion. The standardized trajectory data are then utilized to generate a covariance matrix between the Cα atoms i and j, defined as Eq (2):

$$C_{ij} = \left\langle \left(x_i - \langle x_i \rangle\right)\left(x_j - \langle x_j \rangle\right) \right\rangle \qquad (2)$$

where x_i and x_j are the Cartesian coordinates of the i-th and j-th Cα atoms, N is the number of Cα atoms considered, and ⟨x_i⟩ and ⟨x_j⟩ represent the time averages over all the configurations obtained in the molecular dynamics simulations [40].
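The covariance analysis of Eq (2) can be reproduced outside NMWiz with a few lines of NumPy; the sketch below assumes the frames of an (n_frames, n_atoms, 3) Cα coordinate array have already been superposed on a common reference, and returns the leading principal modes from an eigendecomposition of the 3N × 3N covariance matrix.

```python
import numpy as np

def trajectory_pca(traj, n_modes=1):
    """Essential dynamics: eigendecomposition of the Calpha covariance
    matrix C_ij = <(x_i - <x_i>)(x_j - <x_j>)> of Eq (2).
    traj: (n_frames, n_atoms, 3) array of superposed coordinates."""
    n_frames, n_atoms, _ = traj.shape
    X = traj.reshape(n_frames, 3 * n_atoms)
    X = X - X.mean(axis=0)               # subtract the time average <x_i>
    C = (X.T @ X) / n_frames             # 3N x 3N covariance matrix
    evals, evecs = np.linalg.eigh(C)     # ascending eigenvalues
    order = np.argsort(evals)[::-1][:n_modes]
    modes = evecs[:, order].T.reshape(n_modes, n_atoms, 3)
    return evals[order], modes

# The first mode (largest eigenvalue) corresponds to the dominant
# collective motion, e.g. the rotation/expansion of the gating ring.
```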
Results

Construction and MD simulations on the BK gating ring
In the present study, we constructed two 3-dimensional structures of the BK gating ring, in the Ca2+-free (closed) and Ca2+-bound (open) states (Fig 1A and 1B). During 10-20 ns of free MD simulation, these two structures reached their equilibration states, as the Cα root-mean-square deviation (RMSD) values were 4 Å or less (Fig 1C and 1D). The diagonal subunit distances of the equilibrated closed and open states were 41.0 Å and 55.8 Å, measured at the Cα atoms of the N-terminal residues Asn384 (red balls) of helix αB, by virtue of their greater stability compared with the Cα atoms of the N-terminal residues Lys343 (black balls) during the MD simulation (Fig 1 and Fig 2A). The RCK1 sites are colored in green, and the 'Ca2+ bowl' sites are colored in yellow.
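The diagonal subunit distance used above is straightforward to recompute from a trajectory; the MDAnalysis-based sketch below measures it at the Asn384 Cα atoms. The file names are placeholders, and the pairing of opposite subunits (atoms 0-2 and 1-3) assumes the four chains are listed in rotational order, which should be verified for a given topology.

```python
import numpy as np
import MDAnalysis as mda

# Placeholder file names for the gating-ring system.
u = mda.Universe("gating_ring.psf", "gating_ring.dcd")
ca = u.select_atoms("name CA and resid 384")   # one Calpha per subunit

distances = []
for ts in u.trajectory:
    p = ca.positions
    # opposite subunits assumed to be (0, 2) and (1, 3)
    d1 = np.linalg.norm(p[0] - p[2])
    d2 = np.linalg.norm(p[1] - p[3])
    distances.append(0.5 * (d1 + d2))

print(f"mean diagonal distance: {np.mean(distances):.1f} Angstrom")
```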
The RCK1 domain contributes more to opening the BK channel
To identify which of the Ca2+-binding regions contributes more to opening the BK channel, we carried out three TMD simulations targeting the RCK1 sites (green), the 'Ca2+ bowl' sites (yellow), and both regions together (R&C), respectively (Fig 2A). The corresponding Ca2+-binding regions of the BK channel in the open state were set as the target structure. During the TMD simulations, an external force was applied to the backbone atoms of the RCK1 domain (His365 to Asp369, Ser512 to Phe516, Ser533 to Tyr537) and the RCK2 domain (Asn887 to Pro899) with a force constant of 500 kcal/mol/Å2. The Cα RMSD decreased monotonically from its initial value to near zero along the TMD trajectory (Fig 2B), confirming that the three TMD simulations had completed. During this process, the Cα RMSD between the closed structure in the simulation and the open structure shows that the structure from the TMD on the RCK1 sites (black line) is similar to the structure from the TMD on both Ca2+-binding regions (R&C, blue line), and both are closer to the open gating ring than the structure from the TMD on the Ca2+-bowl sites (red line) (Fig 2C). On further analysis, the distance between the diagonal subunits of the gating ring is 45.7 Å, 41.8 Å and 46.4 Å at the end of the TMD simulations on the RCK1 sites, the Ca2+ bowl and R&C, respectively, which suggests that the gating ring achieved a partial-opening, or quasi-open, state in the TMD simulations on the RCK1 sites and R&C (Fig 3A-3E). The opening processes are consistent with the evolution of the RMSD along the TMD simulations (Fig 3F). These results suggest that the Ca2+-binding sites in RCK1 contribute more than the 'Ca2+ bowl' sites to opening the BK channel (Figs 2 and 3).
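The targeted-atom sets described above (the backbone atoms of the three RCK1 stretches and of the Ca2+-bowl stretch) can be expressed as simple selections; the MDAnalysis sketch below builds them, with file names as placeholders. In a tetramer the resid ranges match all four subunits, which is the intended behavior here.

```python
import MDAnalysis as mda

u = mda.Universe("gating_ring.psf", "gating_ring.pdb")  # placeholder names

# Backbone atoms of the Ca2+-binding stretches used as TMD target atoms.
rck1 = u.select_atoms(
    "backbone and (resid 365:369 or resid 512:516 or resid 533:537)"
)
bowl = u.select_atoms("backbone and resid 887:899")  # the 'Ca2+ bowl'
both = rck1 | bowl                                   # the R&C target set

print(len(rck1), len(bowl), len(both))
```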
There are two motion modes during the gating of the BK channel

To explore the dynamic behavior of the gating ring based on the TMD simulation trajectory, essential dynamics analysis was conducted. The first principal component of the motion tendency of the gating ring based on the TMD simulation trajectory is shown in Fig 4. It illustrates that the gating ring experiences an anticlockwise rotational motion around the gating ring axis and a flower-opening-like motion, which push the channel toward the open state. These two dynamic motions of the gating ring can be seen clearly in the results of the essential dynamics analysis using the animation function of the VMD 1.9.2 plugin [39].

Interaction networks exquisitely regulate the gating of BK channels

To identify the transmission of interactions that brings about the dynamic motion of the gating ring, an analysis of the internal weak interactions was conducted. We identified three interaction networks. In the gating ring opening process, one of the interaction networks is broken; it consists of three pairs of interactions, between αA and αR (D362-S925), between the αA-βB loop and the TK-loop (T: turn) (D367-S515), and between the TK-loop and GI (G: 3/10-helix) (S515-Y904), respectively (Figs 5A-5C and 6). A second interaction network also contains three pairs of interactions, between βA and βB (R342-E374), between the αA-βB loop and the CO-loop (C: coil) (D369-R648), and between the CO-loop and αR (R653-D931), which are strengthened, respectively (Figs 5D-5F and 6). The third consists of the interactions between αA and αB (L360-H394) and between αA and αR (N358-D821) (Figs 5G-5H and 6).
The network of interactions also exists in the AC region, which is formed by αA, αB, βA and βB. Our data show that two features facilitate the movement of the AC region: one is the weakened interactions (green ellipse) that liberate the AC region (Fig 6B), and the other is the interactions within the AC region that keep the entire AC region coordinated (yellow ellipse) (Fig 6B). The three pairs of strengthened interactions act like a hooked arm (red region) that pulls the AC region into its open-state position, with the interactions between D369 and R648, between R653 and D931, and between R342 and E374 acting as an elbow, a shoulder and a hand, respectively (Fig 6B).
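Contacts such as D362-S925 or D367-S515 can be monitored along the TMD trajectory by tracking the minimum side-chain heavy-atom distance for each residue pair; the sketch below does this with MDAnalysis. The residue numbers come from the text, while the file names and the restriction to one subunit (segid A) are assumptions to be adapted to the actual topology.

```python
import numpy as np
import MDAnalysis as mda

# Weakened (broken) contacts named in the text.
PAIRS = [(362, 925), (367, 515), (515, 904)]

u = mda.Universe("gating_ring.psf", "tmd_run.dcd")  # placeholder names

def pair_distance_series(u, res_a, res_b, segid="A"):
    """Per-frame minimum distance between side-chain heavy atoms
    of two residues in one subunit."""
    sel = "segid {} and resid {} and not backbone and not name H*"
    a = u.select_atoms(sel.format(segid, res_a))
    b = u.select_atoms(sel.format(segid, res_b))
    out = []
    for ts in u.trajectory:
        d = np.linalg.norm(a.positions[:, None] - b.positions[None, :],
                           axis=-1)
        out.append(d.min())
    return np.asarray(out)

for ra, rb in PAIRS:
    d = pair_distance_series(u, ra, rb)
    print(f"{ra}-{rb}: {d[0]:.1f} A at start -> {d[-1]:.1f} A at end")
```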
Discussion
Ca2+-induced gating of the BK channel is an intrinsically dynamic process. However, since the allosteric conformational changes take place on the microsecond time scale, it is not possible to capture the transition from a closed state to an open state even through MD simulations. To accomplish the transition from the closed to the open state of BK channels, we performed Targeted MD simulations, a method developed by Schlitter et al. [36] that has been used in studies of allostery and a variety of transitions in large proteins [41]. Limited by the method, there is no Ca2+ in our Targeted MD simulations. During the simulations, we applied forces to the Ca2+-binding residues, hoping that the applied forces would, to some extent, mimic the channel-Ca2+ interactions.
In the TMD simulations, a subset of target atoms (the RCK1 sites, the Ca2+ bowl or R&C) is guided towards a target structure by means of steering forces; the results show that Ca2+ binding at the RCK1 site is more important than binding at the Ca2+ bowl for activating the BK channel gating ring (Figs 2 and 3). Our simulation data are consistent with the experimental results [13,42,43]. We next analyzed the global motion of the BK channel gating ring, which exhibits two motions: the flower-opening-like motion and the rotational motion (Fig 4). The expansion motion indirectly induced the AC region to widen. In a full-length BK channel, the AC region, at the N-terminus of RCK1, is connected to the C-terminus of the transmembrane inner helix (S6), which forms the pore's gate, via the S6-RCK1 linker, and could therefore be a point of convergence for the conformational changes evoked by Ca2+ binding to RCK1 [43]. The rotation of the gating ring may pull on the S6-RCK1 linker and thereby drive the conformational changes that open the activation gate of the BK channel, similar to the way PIP2 opens Kir channels [23,41]. The two gating ring structures, in the Ca2+-bound and Ca2+-free states, differ in that the RCK1 layer in the Ca2+-bound gating ring is expanded from a diameter of 81 to 93 Å, measured at the position of the Lys343 residues [22]. In the full BK channel, the Lys343 residues are located in the S6-RCK1 linker that connects the transmembrane domain to the cytosolic tail domain (gating ring) [22,44]. The S6-RCK1 linker may undergo conformational changes during opening of the activation gate. Because our models consist only of the gating ring, without the entire transmembrane-spanning domains, the fluctuations of the Lys343 residues may be wider during the simulations, so the diagonal distance between the Lys343 residues cannot accurately represent the distance across the pore gate of the gating ring. We therefore chose the Asn384 residues (red balls) as the positions at which to measure the gating ring distance, which expanded from 41.0 to 55.8 Å from the closed state to the open state (Fig 3A and 3B).
From the opening of the gating ring toward the target structure during the TMD simulations on the RCK1 sites, we identified three interaction networks that play a critical role in the Ca2+-induced gating of BK channels (Fig 5). The weakened interactions decrease the correlation between the AC region and the surrounding RCK1 region (green ellipse) (Fig 6B), which facilitates the AC region motion; the interactions within the AC region help to maintain the stability and coordination of the AC region (yellow ellipse) (Fig 6B); and the strengthened interactions pull the AC region to expand, acting like a hooked arm (red region) (Fig 6B). These interaction networks drive the AC region toward the open-state conformation. By analyzing the relationship between the motions of the gating ring and the interactions, we identified the transmission pathway of interactions during BK gating ring opening: the small conformational changes at the RCK1 site (the Ca2+-binding RCK1 site) induce the large conformational changes of the gating ring. | 2018-04-03T04:21:36.759Z | 2017-09-27T00:00:00.000 | {
"year": 2017,
"sha1": "9132a74294644cee07f113a38b4ceb95e998c1e7",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0182067&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9132a74294644cee07f113a38b4ceb95e998c1e7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
208173788 | pes2o/s2orc | v3-fos-license | The Red Flag Canal: a socio-ecological practice miracle from serendipity, through impossibility, to reality
This showcase article presents a 50-year-old, 1500-km-long irrigation canal in China as an exemplary case of socio-ecological practice. With a focus on its genesis, the article is the first of a mini-series on one of the best kept secrets in the history of socio-ecological practice.
A miracle of self-reliant, diligent, and ecophronetic socio-ecological practice
July 6, 1969, is a memorable day to the people of the Linxian County (林县) in Henan Province, China. 1 On that very day, they celebrated the completion of the Red Flag Canal (红旗渠). This 1500-km-long irrigation canal transfers precious lifesaving water from the Zhuozhang River (浊漳河) in the neighbor Pingshun County (平顺县) to their arid hometown (Fig. 1). It provides drinking water to the people and domestic animals, and irrigates farmland (Wang and Sang 1995, p. 318). In this remote mountainous region, where widespread poverty and poor agricultural productivity had long been imputed to both the dearth of drinking water supply and the scarcity of irrigated farmland, the introduction and provision of these two primary services are historic and revolutionary. 2 Not only did they change the half a million people's lives forever, but they also shaped the well-being of all their posterity (Hao et al. 2011, p. 261; Wang and Sang 1995, p. 4). The completion of the Red Flag Canal is an extraordinary human achievement-so much so that, in 1971, the then Chinese Premier Zhou Enlai [周恩来 (1898-1976)] praised it as a miracle: "There are two miracles of engineering in the modern-day China that people created with self-reliance and diligence; one is the Nanjing Yangtze River Bridge, 3 the other is the Red Flag Canal of the Linxian County" (cited in Hao et al. 2011, p. 272; English translation by the author).

2 About the historic, revolutionary differences the Red Flag Canal made in overcoming the severe conditions of drinking water supply and the lack of irrigated farmland, historians Hongmin Wang (王宏民) and Jilu Sang (桑继录) provide telling statistics in their 1995 book A history of the Red Flag Canal (in Chinese). Before the canal's completion, there was no sustained drinking water supply in 307 of the county's 550 villages. People in these villages had to make daily or weekly round trips, ranging from 2.5 to 20 km, to get drinking water in water barrels (Wang and Sang 1995, p. 10); after the completion, 410 villages, including all the 307 above-mentioned, benefited from the sustained drinking water supply from the canal (ibid., p. 4, p. 318).

The completion of the Red Flag Canal is indeed a miracle. It is an otherwise impossibility the Linxian people brought into reality through a decadal process of self-reliant, diligent, and ecophronetic socio-ecological practice (Hao et al. 2011, p. 169). 4
A reality created by "half a million pairs of hands" 5
According to historians Hongmin Wang and Jilu Sang, the canal's planning, design, construction, project management, and institutional arrangements were all undertaken and completed by the Linxian people themselves with their own diligent efforts, local talents, and available resources (Wang and Sang 1995, pp. 7-176). During the ten-year period of the project (1960-1969), the Linxian people willingly supplied a total of 37,402,000 person-days for the completion of the canal (ibid., p. 96). 6 The vast majority of project

6 Ostrom (1933-2012) commends the high level of attendance of the local volunteers in the zanjera irrigation communities in the Philippines.

5 The metaphoric expression "half a million pairs of hands" is an English translation of the Chinese clause "55万人民55万双手," which historian Jiansheng Hao (郝建生) and his coauthor colleagues used to praise the Linxian people's miracle-making endeavor (Hao et al 2011, p. 169). It figuratively refers to both the enthusiastic, voluntary participation of the Linxian people and the primitive equipment and building materials they made themselves and used in the canal project. These include, but are not limited to, shovels, pickaxes, hammers, chisels, wheelbarrows, gunpowder, cement, and lime (Hao et al 2011, pp. 169-172; Wang and Sang 1995, pp. 168-176).
4 "Socio-ecological practice is the human action and social process that take place in specific socio-ecological context to bring about a secure, harmonious, and sustainable socio-ecological condition serving human beings' need for survival, development, and flourishing. It … includes six distinct yet intertwining classes of human action and social process-planning, design, construction, restoration, conservation, and management" (Xiang 2019a, p. 8). Ecophronetic is the adjective of the term ecophronesis-ecological practical wisdom (Xiang 2016;Austin 2018). According to Xiang (2016, p. 55), "ecophronesis is the master skill par excellence of moral improvisation to make, and act well upon, right choices in any given circumstance of (socio-)ecological practice; motivated by human beings' enlightened self-interest, it is developed through reflective (socio-) ecological practice" [the addition of "(socio-)" by the author].
The vast majority of project expenditures were funded locally-30% of the total project cost, 7 20 million out of 68.7 million renminbi (RMB), was covered jointly by the county, the local people's communes, and brigades 8; 55% (37.4 million RMB), primarily professional (labor) compensations and equipment, was covered voluntarily by the Linxian people, mostly farmers 9; and the rest (15%) was covered by the provincial and central governments (ibid., p. 95). 10
A gift of hardship in a year of misfortune and frustration
Every human achievement has its beginning in an idea (Hill 1937, p. xi). 11 The completion of the Red Flag Canal is no exception. The fountainhead of the Linxian people's miracle-making endeavor is a bold idea that they believed in and were committed to throughout the entire project. Interestingly enough, like the completion of the canal, this idea is also a gift of hardship (Hao et al. 2011, p. 118; Yang 1995, pp. 464-465).
A blessing in disguise
To the Linxian people, 1959 is a year of misfortune. A brutal, injurious drought forcefully interrupted their routine time-sensitive practice of summer crop planting in early June; the concomitant severe shortage of drinking water supplies presented yet another life-threatening hardship (Hao et al. 2011, pp. 116-117; Wang and Sang 1995, pp. 22-23).
To Gui Yang [杨贵 (1928-2018)], the county's manager since 1954, and his colleagues on the county's leadership team, 1959 is also a year of frustration. Since 1957, the Linxian people had been implementing a county-wide

7 Project cost is "the total cost of a project including professional compensation, land costs, furnishings and equipment, financing and other charges, as well as the construction cost" (Harris 2006, p. 768).

8 "DURING THE TWENTY YEARS (sic-the author) from 1958 to 1978, the framework for rural development throughout China was provided by the people's commune, a structure with a 'three-level system of ownership with the production team as its basis' (the English translation of '三级所有,队为基础'-the author). In the vast majority of communes, the ownership of land, labour, basic farming implements, and animals was vested in the team level, a unit with an average population of fewer than 170 people. The team managed the farming tasks and formed the unit of account for calculating and dividing income. At successively higher levels of organization, the brigade and the commune provided inputs of larger machinery and water resources, general management, and overall planning. Depending on the quality of leadership and available resources, the latter two levels also accumulated the funds to invest in infrastructure, subsidiary undertakings, and small industries. In addition, the commune formed the basis for governmental administration in the countryside. It absorbed the functions of the old xiang (township) and took most of the responsibility for the provision of welfare services, education, public security, and so forth" (O'Leary and Watson 1982, p. 593).

9 How could this high level of voluntary contributions be possible? How could the high level of voluntary attendance (i.e., the above-mentioned 37,402,000 person-days supplied voluntarily by the Linxian people) be possible? In a 2004 book entitled China's Red Flag Canal: its resource background and institutional arrangements (in Chinese), resource economist Luliang Li (李露亮) and his coauthor colleagues attribute both remarkable phenomena to the self-motivation of the Linxian people, the effective project management, and a set of unique, pragmatic institutional rules the project leadership team devised and implemented (Li et al 2004, pp. 94-103). These will be the topic of a later article for this journal.

10 The technical support and financial assistance from the provincial and central governments helped improve the project quality and efficiency significantly (Wang and Sang 1995, p. 118). They, however, did not come until 1964, the fifth year into the project, after the Linx-

11 "[A]ll achievement, all earned riches, have their beginning in an idea!" (Hill 1937, p. xi).
She writes (Ostrom 1990, p. 86): "In terms of the contemporary schedule of 5 days per week, this (level of attendance-the author) amounts to 2 months of work supplied without direct monetary payment. About 16,000 man-days were supplied by members to their own zanjera or federation during the year. As Siy (Robert Siy is the scholar who studied and reported the zanjera irrigation communities-the author) reflects, 'there are definitely few rural organizations in the developing world which have been able to regularly mobilize voluntary (sic) labor to such extent' (Siy 1982, p. 95). Given the rigorous and at times dangerous nature of the work, the level of attendance at these obligatory sessions is rather amazing." Both Ostrom and Siy would have been even more impressed with the level of attendance in the Red Flag Canal project and eager to find out the reasons (see footnote 9).
waterworks plan the leadership team developed under the guidelines from the central governments (Wang and Sang 1995, pp. 13-20; Yang 1995, pp. 463-464). 12 By the end of 1959, they would have built a county-wide waterworks that consists of 36 reservoirs, 2397 retention ponds, 32,772 wells, and 1364 canals [they did actually build it (ibid., p. 19)]. Underlying the plan is the premise that once built, such a county-wide waterworks would meet the needs for drinking and irrigation (Hao et al. 2011, p. 116). But this very premise was now so readily falsified and indifferently rejected by the daunting reality of punishing drought: throughout the entire waterworks, in each and every one of its reservoirs, retention ponds, wells, and canals-whether built or under construction, there was simply little, if any, water at all (Hao et al. 2011, p. 116; Wang and Sang 1995, pp. 22-23; Yang 1995, pp. 464-465).
"The 1959 hardships are truly a blessing in disguise", reflected Gui Yang several decades later. "Not only did they awaken our minds to the daunting reality of pernicious draught, but they also mandated us to let go of the romantic wishful thinking, and to instead think outside the box" (cited by Hao et al. 2011, p. 118; English translation by the author).
That-to let go of the wishful thinking, and to think outside the box-was exactly how Gui Yang and the county's leadership team responded to the inexorable hardships of misfortune and frustration. They swiftly took a prudent, decisive action of moral improvisation-to look beyond the county boundary for sustained water resources (Hao et al. 2011, pp. 117-118; Yang 1995, p. 465). 13 On June 13, 1959, Gui Yang and a survey crew started their treasure hunt journey along the Zhuozhang River in the neighbor Pingshun County (Hao et al. 2011, p. 119; see also Fig. 1). The next day, they made a serendipitous discovery from which the very idea of the canal project emerged.
A bold idea from a serendipitous discovery
About the emergence of the canal project idea and the instance of serendipity, historian Jiansheng Hao and his coauthor colleagues write in their 2011 book Gui Yang and the Red Flag Canal (Hao et al. 2011, pp. 119-121; English translation by the author): It was June 14th, 1959. Making their way through a deep canyon in the neighbor Pingshun County (see Fig. 1-the author), Gui Yang and his survey crew marveled at the abundant water resources of the Zhuozhang River flowing through the canyon. Gui Yang could not believe what he saw-the large, swirly waves of whitewater on the rapids of the river; he was even more amazed by the massive volume of water supply from the riverhead in a year of severe drought throughout the region.
"Can some of this water be transferred through a canal to our arid hometown for drinking and irrigation?" Spontaneously asked Gui Yang.
The crew wasted no time getting the initial answers: the Zhuozhang River is a perennial river, and there is ample, continuous streamflow in the river that can sustain a water transfer 14; despite lying outside the Zhuozhang River watershed, the basin where the Linxian County is located is downstream from the river, and is lower in elevation than the section of riverbed near the boundaries between the two counties.

12 The general guidelines require that agricultural waterworks should be mainly (1) constructed for retaining water from natural precipitations or groundwater; (2) undertaken by the local beneficiaries-people's communes and brigades-themselves; (3) small in scale, but can be part of a larger system ["(中央的方针是农业水利建设要)以蓄为主,以社队自办为主,以小型为主,大中小型相结合"] (Yang 1995, p. 464).

13 Improvisation, when contextualized differently from improvisational jazz and theatrical performance where it originates, is an extemporaneous action or array of such actions practitioners take to manage unforeseen challenges or to embrace emergent opportunities with available knowledge and resources (Xiang 2016, p. 57). Inherently neither good nor bad, improvisation itself may lead to either positive or negative results (Cunha et al 1999, pp. 327-332; Vera and Crossan 2005, p. 204). To be prudent and effective, therefore, practitioners need to exercise what American planning scholar John Forester calls "moral improvisation", improvisation with commitments to generally or traditionally held moral principles, that is (Forester 1999, p. 224). In challenging, unforeseen situations, they act extemporaneously yet mindfully as "moral improvisers" (ibid., p. 236) who are "doubly responsible" (Nussbaum 1990, p. 94)-honoring moral commitments and upholding ethical principles, on the one hand, and attending to time-sensitive, circumstantial particulars, on the other. For both Aristotle and American pragmatist William James (1842-1910), "the metaphor of theatrical improvisation … is a favorite … image for the activity of practical wisdom (phronesis, that is-the author)" (ibid.). For American geographer and planning scholar Wei-Ning Xiang, moral improvisation is a hallmark of ecophronesis-ecological practical wisdom (2016, p. 55; see Xiang's definition of ecophronesis in footnote 4). For a classic, in-depth discussion about "moral improvisation" in the practice of planning, see Chapter 8 of Forester's 1999 book The deliberative practitioner: encouraging participatory planning processes (pp. 221-241); for an updated account, see section 6 in Forester (2019). As no Chinese translation of moral improvisation can be found in the published English-Chinese dictionaries, the author translates it tentatively as 因地制宜, 与时偕行.
14 Streamflow is the amount of water passing through a specific point of a river over time. In the Zhuozhang River, the annual average streamflow is 30 cubic meters per second (m3/s), ranging from 7000 m3/s to 13 m3/s, as the hydrological record that Yang and his crew found shows (Hao et al. 2011, p. 120).

Inspired by this serendipitous discovery, and after much contemplation, in the night of June 15th, Gui Yang returned to the Linxian County with a bold idea in mind-building an irrigation canal to bring the lifesaving water of the Zhuozhang River home. 15 Ten years later, on July 6, 1969, the idea of water transfer became a materialized reality-the completion of the Red Flag Canal. 16
A good, study-worthy social practice
In "There is nothing as theoretical as good practice," a 1991 editorial published in the journal Environment and Planning B: Planning and Design, American geographer and planning scholar Helen Couclelis writes (Couclelis 1991, p. 383): he practice has its own rationale, its own theoretical justification … [H]uman agents (sic-the author) participating in a social practice such as doing geography or doing planning know why they do what they do (indeed, they have a theory about it), no matter how uninformed and distorted that knowledge might seem from somebody else's perspective. If the practice is successful (by whatever criterion), then the collective, commonsense knowledge (sic-the author) behind it is worth a closer look by us theoreticians. Good practice is theoretical, not in the trivial sense that it inspires, motivates, informs theory, but more literally, in that good practice contains its own theory. So, indeed, "there is nothing as theoretical as good practice." The socio-ecological practice of the Linxian people is exemplary of such good, study-worthy social practice. Their self-reliant, diligent, and ecophronetic practice is good, in that not only did it successfully bring the 1959 idea of serendipity, through a myriad of impossibility, to the 1969 miracle completion reality, as presented in this showcase article, but it has also been instrumental ever since in securing canal's operations as an enduring, beneficial common-pool resource (CPR). 17 Their practice is study-worthy because "its own theory" possesses both the intrinsic values and ordinary utilities Couclelis describes in the above quote, and therefore exemplifies the body of knowledge ecopracticology, the study of socio-ecological practice, aims to build. (For a discussion on ecopracticological knowledge, see Xiang 2019a, pp. 8-9.) Once systematically unearthed and critically scrutinized, this centerpiece will significantly enrich the emerging field of ecopracticology and ultimately help advance socio-ecological practice. 18
A fitting SEPR mini-series
To this end, Socio-Ecological Practice Research (SEPR), the home journal of ecopracticology (Xiang 2019a, p. 12), will feature the Red Flag Canal in a mini-series. Following the present showcase, other articles of various types in the mini-series [for the 11 SEPR article types, see Xiang (2019b, pp. 1-4)] will be on different but equally important aspects of the socio-ecological practice pertaining to the canal (e.g., humanity, ecophronesis, science, engineering, ethics, politics, governance, and leadership) and on changes the canal brought about to the people and the place. The mini-series will be several years in the making and will conclude with a synthesis of this best kept secret's "own theory."

18 As many historians have documented and unveiled [e.g., Guo (2018), Hao et al (2011), Li (1975), Li et al (2004), and Wang et al (1998)], for the Linxian people, "the whole story" of their self-reliant, diligent, and ecophronetic endeavor "is a romance of hardship, daring, and wonderful achievement", to borrow a phrase from American author George Cary Eggleston (1839-1911) in his 1886 book Strange stories from history [Eggleston 2007, p. 19]. While the development of this poetic romance is a valuable work in its own right and still in progress, "a closer look" (Couclelis 1991, p. 383), more systematic and rigorous, into the theory behind "the whole story" is in order.

15 The original sentence is "经过一夜思虑,引漳(浊漳河水)入林(县)宏伟构想在杨贵胸中形成了" (Hao et al 2011, p. 121).

16 The canal's name is another realized idea of Gui Yang. At an organization meeting on March 6 and 7, 1960, he proposed to name the irrigation canal with a term he coined-"the Red Flag Canal"-because, he explained, "the red flag symbolizes social progress and our life-changing endeavor" (Hao et al 2011, p. 141). The proposal was adopted at that meeting, and later approved unanimously by the representatives at the county's Water Transfer Conference on March 10 (ibid., p. 142). | 2019-11-20T16:42:45.744Z | 2019-11-20T00:00:00.000 | {
"year": 2019,
"sha1": "e4bec145918a4bf3c74b903221001b4a78a9af42",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8150154",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "0861d6f4e823990cb4785b8290e3b19d4e038185",
"s2fieldsofstudy": [
"Environmental Science",
"Sociology"
],
"extfieldsofstudy": [
"Medicine",
"History"
]
} |
119238181 | pes2o/s2orc | v3-fos-license | D-term Enhancement in Spin-1 Top Partner Model
Supersymmetric models with extended electroweak gauge groups have the potential to enhance the Higgs quartic interaction through nondecoupling D-terms. We consider the D-term enhancement effect in a vector top partner model, where the quadratic divergence of the Higgs mass from the virtual top quark is canceled by its corresponding spin-1 superpartners. We show that the model can predict a Higgs mass beyond the LEP bound, and is consistent with the precision electroweak constraints.
I. INTRODUCTION
In the Supersymmetric theory, since the quadratic divergences associated with the Higgs mass-squared from the SM fields are canceled by their superpartners, soft SUSY breaking terms only induce logarithmic corrections and the scalar field mass is stabilized to be around the soft SUSY breaking scale m_s. In order for the SUSY theory to be natural and therefore reduce fine-tuning, the soft SUSY breaking scale m_s is supposed to be in the hundred GeV range. In the minimal supersymmetric standard model (MSSM), the physical Higgs mass is related to the mass of the Z gauge boson times a factor of cos(2β) at the tree level, where β is determined by the ratio of the two Higgs fields' vacuum expectation values (VEVs) (v_u/v_d). However, the LEP direct search excluded the existence of a Higgs boson below 114.4 GeV at 95% C.L. For the Higgs to go beyond the LEP bound, a large radiative contribution to the quartic interaction term from the top quark sector is necessary, which in turn demands the top squark to have a mass of TeV order. The tension between the electroweak scale and the scale at which new physics emerges, referred to as the little hierarchy problem, encourages people to explore new possibilities to avoid the dilemma. There are many attempts to achieve a Higgs mass much heavier than the Z gauge boson in supersymmetric theory. One straightforward way is to enhance the quartic interaction term at the tree level, and generally additional interaction structure is required. In the NMSSM model [1], one extra SU(2)_L singlet superfield N is added, which couples with the two Higgs fields through a supersymmetric Yukawa interaction λNH_uH_d. A large λ is preferred to generate a large quartic term, but the requirement that the Yukawa interaction remain perturbative up to the unification scale puts an upper bound on the Higgs mass. An alternative method to raise the Higgs mass without inducing fine-tuning is to consider a fat Higgs scenario originating from a strongly interacting sector. In the fat Higgs scenario, the singlet chiral field N and the two Higgs fields H_u and H_d are composite meson fields interacting via a naturally large Yukawa coupling. The original fat Higgs model has a dynamically generated superpotential λN(H_uH_d − v²) with a matter content similar to the NMSSM at low energy scales [2]. This type of theory is further extended by Refs. [3] and [4].
Supersymmetric models with enlarged gauge groups under which the Higgs bosons are charged may raise the Higgs mass through nondecoupling D-term effects [5,6]. At the low energy scale, the enlarged gauge groups need to be broken down to the Standard Model gauge group by the VEVs of some extra scalar fields. If the gauge symmetry is broken in a SUSY-conserving limit, the D-term effects of these extra scalar fields decouple and we recover the standard MSSM D-term potential for the Higgs fields. In order to retain the D-term effects from those extra fields down to the electroweak scale, SUSY breaking effects need to be included in the mechanism responsible for the gauge symmetry breaking. When the SUSY breaking scale is much larger than the gauge symmetry breaking scale, the effective D-term for the Higgs fields at the electroweak scale can be enhanced.
In this Letter I consider the possibility of increasing the Higgs quartic interaction terms in a spin-1 top partner model [7]. In this model, the superpartners of the left-handed top quark are spin-1 vector bosons, while the superpartner of the right-handed top quark is still a scalar. This scenario is realized by extending the gauge group and assembling the left-handed top quark into a vector supermultiplet. The extended gauge group serves to provide the source of nondecoupled D-term effects. Extra chiral fields need to be added to trigger the gauge symmetry breaking, since we hope to achieve a D-term flat minimum. In the following of this Letter, I will specify the superpotential responsible for gauge symmetry breaking and supersymmetry breaking. The exact mass spectrum for scalar states in the link fields after the symmetry breaking will be calculated. I will verify the D-term enhancement effects in the Higgs boson sector and explore the bound on the mass of the Higgs boson in this model after considering the relevant electroweak constraints.
II. D-TERM ENHANCEMENT AND HIGGS MASS
We first briefly review the structure of the spin-1 top partner model; for details of the realization, one can refer to the previous paper [7]. The model is based on the gauge group SU(5), and can be better illustrated in a supersymmetric two-site moose diagram (see Fig. 1). One copy of the three generations of leptons and quarks plus their superpartners is put in the first moose site, which has a gauge group SU(3) × SU(2) × U(1)_H. These chiral superfields transform exactly as in the MSSM. The two Higgs superfields H and H̄ need to be put in a second moose site, which has an SU(5) × U(1)_H × U(1)_V gauge group. The gauge coupling of U(1)_H can be set to be very small. Four vector-like link fields Φ_3, Φ̄_3, Φ_2, Φ̄_2 are responsible for communicating between the fields located in the two isolated moose sites. When the link fields gain nonzero VEVs, they break the original product gauge group down to its diagonal subgroup. The gauge transformation properties of the Higgs fields and the four link fields are given in Table I. For the vector supermultiplet of the SU(5) gauge group, the extra gauge bosons X, Y transforming as (3, 2) under SU(3)_C × SU(2)_W are identified as the spin-1 top partners in this model. They are lifted to be heavy after the gauge symmetry breaking, in the same way as in the SU(5) GUT model. We also need to identify the field contents of the four link fields. Under the diagonal subgroup, Φ_3 (Φ̄_3) splits into one complex singlet Φ_{3S} (Φ̄_{3S}) and one complex octet Φ_{3O} (Φ̄_{3O}), as well as one component field Φ_{3t} (Φ̄_{3t}) with the same quantum number as t_L (t̄_L). A similar decomposition applies to the fields Φ_2 and Φ̄_2: they split into one complex singlet Φ_{2S} (Φ̄_{2S}) and one complex triplet Φ_{2T} (Φ̄_{2T}), as well as one component field Φ_{2t} (Φ̄_{2t}) which has the same quantum number as t_L (t̄_L). Φ_{2t} and Φ_{3t} mix with the (3, 2)-sector gaugino λ_32 through gauge interactions, and they mix with the left-handed field Q_3 = (t_L, b_L) in the first moose site through Yukawa interactions. As long as the µ-terms for Φ_{2,3} and Φ̄_{2,3} are large enough compared with their VEVs, the dominant component of the physical left-handed top quark will be the gaugino λ_32, and its superpartners are spin-1 gauge bosons.
We now write down the superpotential relevant for the calculations. In order to get quartic terms for the link fields, we add two singlet chiral superfields S_{1,2} that interact with Φ_{2,3} and Φ̄_{2,3}. One adjoint chiral superfield A_1, charged under the SU(2) gauge group, and another adjoint chiral superfield A_2, charged under the SU(3) gauge group, are also added to ensure that there are no light modes after the gauge symmetry breaking.
σ^a/2 (a = 1, 2, 3) are the generators of the SU(2) gauge group and G^m (m = 1, . . . , 8) are the generators of the SU(3) gauge group. The two λ_S singlet interaction terms will force VEVs for Φ_{2,3} and Φ̄_{2,3}. The first Yukawa interaction term is not relevant for the gauge symmetry breaking, but it will align the VEVs of Φ_{2,3} and Φ̄_{2,3} in the singlet component field direction, i.e. ⟨Φ_{2S}⟩ = f_2, ⟨Φ̄_{2S}⟩ = f̄_2, ⟨Φ_{3S}⟩ = f_3, ⟨Φ̄_{3S}⟩ = f̄_3. We can check from Table I that these singlets' VEVs do not violate the H + V + aT_24 charge. The VEVs break the large gauge group down to the MSSM gauge group SU(3)_C × SU(2)_L × U(1)_Y, whose gauge couplings are given by Eq. (3); for the non-abelian factors the matching takes the form

$$\frac{1}{g_3^2}=\frac{1}{\hat g_3^2}+\frac{1}{\hat g_5^2}, \qquad \frac{1}{g_2^2}=\frac{1}{\hat g_2^2}+\frac{1}{\hat g_5^2}, \qquad (3)$$

together with the corresponding combination of ĝ_{1H} and ĝ_{1V} for the hypercharge coupling g_Y, where ĝ_i and ĝ_5 are the gauge couplings of the original SU(3), SU(2), U(1)_H, U(1)_V and SU(5) gauge groups, respectively. For simplicity, we further assume f̄_2 = f_2 and f̄_3 = f_3; therefore this is a D-term flat minimum and it will not induce mass terms for the Higgs fields. The singlet fields S_{1,2} and the adjoint fields A_{1,2} will not gain VEVs in this scenario. These terms give the scalar potential for the four link fields Φ_{2,3} and Φ̄_{2,3} of Eq. (4), whose minimum determines the VEVs, Eqs. (5) and (6). Substituting Eq. (5) and Eq. (6) back into the scalar potential Eq. (4), we can see that Supersymmetry is spontaneously broken in this setup via the O'Raifeartaigh mechanism, through the simultaneous presence of the supersymmetric µ terms and the λ_S interaction terms. An easy way to verify this statement is that only if µ_2 = µ_3 = 0 can the superpotential have zero vacuum energy when the link fields develop nonzero VEVs. Desired values for the two VEVs f_2 and f_3 can be achieved by tuning the three free parameters µ_{2,3}, w_{2,3} and λ_S. As we expect no light modes after the gauge symmetry breaking, we first examine the mass spectrum of the link fields after they gain VEVs. Due to the traceless property of the SU(2) and SU(3) gauge generators, the two λ_A terms will not change the VEVs, but they will give masses to two linear combinations of the real triplet fields (denoted ψ_{2T,2}) as well as to two linear combinations of the real octet fields (denoted ψ_{3O,2}), leaving the other states untouched.
We can check from Table I that these singlets' VEVs do not violate the H + V + aT 24 charge. The VEVs break the large gauge group down into the MSSM gauge group SU (3) C × SU (2) L × U (1) Y and their gauge couplings are given by: whereĝ i andĝ 5 are the gauge couplings of the original SU (3), SU (2), U (1) H , U (1) V and SU (5) gauge groups respectively. For simplicity, we further assume f 2 = f 2 and f 3 = f 3 , therefore this is a D-term flat minimum and it will not induce mass terms for the Higgs fields. The singlet field S 1,2 and adjoint fields A 1,2 will not gain VEVs in this scenario. These terms give the following scalar potential for the four link fields Φ 2,3 and Φ 2,3 : The minimum of this simple potential determines the VEVs, Substituting Eq. [5] and Eq. [6] back into the scalar potential Eq. [4], we can see that Supersymmetry is spontaneously broken in this setup via the ORaifeartaigh mechanism with the simultaneous presence of the supersymmetric µ terms and the λ S interaction terms . An easy way to verify this statement is that, only if µ 2 = µ 3 = 0, the superpotential could have zero vacuum energy when link fields develop nonzero VEVs. Desired values for the two VEVs f 2 and f 3 can be achieved by tuning the three free parameters: µ 2,3 , w 2,3 and λ S . As we expect no light modes after the gauge symmetry breaking, we first examine the mass spectrum in the link fields after they gain VEVs. Due to the traceless properties of the SU (2) and SU (3) gauge generators, the two λ A terms will not change the VEVs, but they will give masses to the two linear copies of real triplet fields i.e. ψ 2T,2 = 1 2T , as well as the two linear copies of real octet fields i.e. ψ 3O,2 = 1 3O , leaving other states untouched. The two λ S quartic terms and the two µ 2 2,3 mass terms can give masses to two specific linear copies of real triplet fields i.e. ψ 2T,1 = 1 2T , plus two specific linear copies of real octet i.e.
They also give masses to six real singlet states. Ignoring some singlet mixing, we list the mass spectrum for all the singlets, triplets and octets in Table II. As shown in that table, one copy of a real triplet field, one copy of a real octet field and two copies of real singlet fields are left massless; these are the Goldstone bosons eaten by the heavy W′ and G′ gauge bosons and by the two heavy U(1) gauge bosons B′ and B″, respectively.
The two real singlet fields ψ_{2S,1} and ψ_{3S,1} mix; their mass eigenstates are determined by diagonalizing their full mass terms, as described in Eq. (7).
[11], it is easy to find out that only two linear combinations of stop-like states are still massless and they should be identified as the Goldstone bosons for the heavy X, Y gauge bosons.
The corresponding massless combinations are denoted π_t and η_t. With the mass spectrum of all the scalar fields in φ_{2,3} and φ̄_{2,3} in hand, we proceed to discuss the effective D-term in this model. In a supersymmetric gauge theory, when a large gauge group breaks down to the MSSM gauge group, the vector supermultiplets corresponding to the unbroken generators inherit the MSSM gauge interactions and remain massless after the gauge symmetry breaking. The vector supermultiplets corresponding to the broken generators acquire masses by eating a copy of a chiral supermultiplet through the super-Higgs mechanism. In the supersymmetric limit, all component fields (A_µ, λ_1, λ_2, Σ) in a heavy vector supermultiplet have degenerate masses, and after integrating them out, the D-term effects from these heavy states decouple. In order to retain the D-term effects of these heavy states down to the low energy scale, a SUSY-breaking mass term needs to be added for the real scalar component field Σ, i.e., the lowest component field in the heavy vector supermultiplet, which recouples the D-term effects of the broken gauge generators back into the effective Lagrangian. At the low energy scale, since the Higgs bosons are charged under the diagonal SU(2)_W and U(1)_Y gauge groups, there are two sources of D-term enhancement in this model: one from the extra SU(2) embedded in the SU(5) gauge group and the other from the two extra U(1)s. For the heavy SU(2) vector supermultiplet, the corresponding scalar component field is the triplet state ψ_{2T,1}, while for the two extra U(1)s the scalar component fields are the two heavy singlets ψ_{2S,1} and ψ_{3S,1}. After integrating out those heavy fields, an effective D-term is obtained for the Higgs bosons, where g_2 is the gauge coupling of the SM gauge group SU(2)_W and g_Y is the gauge coupling of the SM hypercharge gauge group U(1)_Y, whose values are determined by Eq. [3]. The D-term effects of these heavy scalar fields are nondecoupling due to the spontaneous SUSY-breaking effects in our scenario, and they can be summarized in two parameters, ∆_2 and ∆_Y, defined in Eq. (16); m_{2T}, m_{2S} and m_{3S} are the respective F-term masses of the heavy scalar fields ψ_{2T,1}, ψ_{2S,1} and ψ_{3S,1} induced by spontaneous SUSY breaking, whose values can be read from the first term in each column of Table II. In the supersymmetric limit, i.e., µ_2 = µ_3 = 0, we find ∆_2 = 1 and ∆_Y = 1, that is, the D-term effects of those heavy fields decouple. However, in this model we prefer to stay in the region ĝ_5 f_3 ≪ µ_3 and ĝ_5 f_2 ≪ µ_2, so we expect notable enhancements of both the SU(2)_W and U(1)_Y D-terms. In the limit m_{2T} ≫ f_2, m_{2S} ≫ f_2, and m_{3S} ≫ f_3, ∆_2 and ∆_Y are determined solely by the gauge coupling constants. We can see that in the large SUSY-breaking limit, the effective D-term for the Higgs bosons is proportional only to the three gauge coupling constants ĝ_5², ĝ_1H² and ĝ_1V², exactly as in the original unbroken gauge theory. The Higgs bosons can gain a notable mass through the D-term enhancement as long as the gauge couplings under which the Higgs bosons are charged are chosen larger than the gauge couplings on the other moose site. A simple example: choosing ĝ_2 = 0.78, ĝ_5 = 1.2, ĝ_1H = 0.378 and ĝ_1V = 1.5, we obtain ∆_2 ∼ 3.36 and ∆_Y ∼ 8.21. With an O(1) tan β ∼ 2.0, at tree level the Higgs mass squared M_h² ≤ (1/4)(g_2² ∆_2 + g_Y² ∆_Y) v² cos²(2β) can be naturally raised to around (115 GeV)².
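As a quick arithmetic cross-check of the numbers quoted above, the short Python sketch below reproduces ∆_2 and the tree-level bound. Since Eq. [3] is not reproduced in this excerpt, the diagonal-breaking relation 1/g_2² = 1/ĝ_2² + 1/ĝ_5² (which gives ∆_2 = 1 + ĝ_5²/ĝ_2² in the large SUSY-breaking limit) is an assumption, chosen because it reproduces the quoted ∆_2 ≈ 3.36; ∆_Y = 8.21 and the SM hypercharge coupling g_Y ≈ 0.36 are simply taken as inputs rather than derived.

```python
import math

# Gauge couplings quoted in the text
g2_hat, g5_hat = 0.78, 1.2

# Assumed diagonal-breaking relation (Eq. [3] is not reproduced here):
# 1/g2^2 = 1/g2_hat^2 + 1/g5_hat^2, hence Delta_2 = g5_hat^2 / g2^2.
g2 = 1.0 / math.sqrt(1.0 / g2_hat**2 + 1.0 / g5_hat**2)
delta_2 = g5_hat**2 / g2**2        # = 1 + (g5_hat/g2_hat)^2 ~ 3.37
delta_Y = 8.21                     # quoted in the text, not derived here
gY = 0.36                          # approximate SM hypercharge coupling (assumption)

v, tan_beta = 246.0, 2.0           # GeV; O(1) tan(beta) as in the text
cos_2beta = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)

# Tree-level bound: M_h^2 <= (1/4)(g2^2 Delta_2 + gY^2 Delta_Y) v^2 cos^2(2 beta)
m_h = math.sqrt(0.25 * (g2**2 * delta_2 + gY**2 * delta_Y) * v**2 * cos_2beta**2)
print(f"g2 = {g2:.3f}, Delta_2 = {delta_2:.2f}, M_h <= {m_h:.0f} GeV")
```

With these inputs the script prints g2 ≈ 0.654, ∆_2 ≈ 3.37 and a tree-level bound of roughly 116 GeV, consistent with the quoted (115 GeV)² to within the precision of the assumed g_Y.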
Radiative corrections from the top quark and its superpartners contribute to the running of the Higgs quartic coupling, so at loop level the Higgs mass is further enhanced. The precision of the Higgs mass prediction depends on the mass of the vector top partner and the mass of the right-handed stop used to calculate the radiative correction. The vector top partner gains its mass through the link fields' VEVs, i.e., m_Q² = ĝ_5²(f_2² + f_3²), while the right-handed stop acquires its mass through the Higgs µ_H term µ_H HH as well as from the soft SUSY-breaking scalar mass term. If the vector top partner mass is m_Q ≃ 2.8 TeV, the right-handed stop mass is m_t̃R ≃ 300 GeV, and the mass parameter is set to M_A = 800 GeV, we obtain a heavy Higgs boson with m_h ≃ 195 GeV.
III. ELECTROWEAK CONSTRAINTS: S, T, U AND Z → bb̄
The gauge couplings should be chosen such that they reproduce the SM gauge couplings at the EW scale after renormalization-group running. In the following, we take some specific sets of gauge couplings for the electroweak analysis, so that the electroweak measurements constrain the link field VEVs. The S, T and U parameters are defined in the usual way, with s = sin θ, c = cos θ, and θ the Weinberg angle. The definition of S, T and U subtracts out the predicted SM contribution at fixed top quark mass and Higgs boson mass, so that they encode only new-physics contributions. The contributions to the S and U parameters from gauge boson mixing are very small, of order v⁴/f_3⁴ or v⁴/f_2⁴. The analytic expressions for S and U are simple, and it is easy to verify that ∆S and ∆U are related by the identity in Eq. (25). The experimental constraints on S, T and U are [9]: S = 0.04 ± 0.10, T = 0.05 ± 0.12, U = 0.08 ± 0.11.
If we assume 0.4 TeV < f_3 ≪ f_2, with a small ĝ_1H but a large ĝ_1V, the S and U parameters place no constraint on our parameter space. The situation is different for the other oblique parameter: gauge boson mixing can give a sizable contribution to the T parameter. There is another large contribution to T from the heavy Higgs [8], evaluated with a reference Higgs mass of m_href = 120 GeV. Since in the region of parameter space of interest this model gives a negligible contribution to the U parameter, we can fix U = 0, and the experimental constraint on the T parameter becomes T < 0.18 [9]. The presence of a heavy Higgs boson with a mass much larger than 120 GeV gives a negative contribution to the T parameter, which may lead to a conflict with the experimental constraints. Fortunately, the mixing of the gauge bosons instead drives the T parameter in the positive direction, so the two effects can balance each other and bring us back into the consistent region of the S–T plane. In Fig. [2], the contribution to the T parameter is shown. In the left panel of Fig. [2], the gauge coupling constants are taken to be ĝ_5 = 1.2, ĝ_1H = 0.378, ĝ_1V = 1.5, and ĝ_2 = 0.78. The mass of the right-handed stop is 300 GeV, the µ-term masses are µ_2 = 5 TeV and µ_3 = 2 TeV, and the input mass parameter for the Higgs bosons in Eq.
[20] is M_A = 800 GeV. tan β is taken to be 2.0. This set of parameters predicts the Higgs mass to lie in the range (188.5 GeV, 194 GeV), corresponding to a vector top partner with mass in the range (2.6 TeV, 4.8 TeV). In the right panel of Fig. [2], we take another set of gauge coupling constants, ĝ_5 = 1.2, ĝ_1H = 0.37, ĝ_1V = 2.5, and ĝ_2 = 0.78, with the other input parameters unchanged. As we can see, increasing the ĝ_1V gauge coupling reduces the T parameter's dependence on f_3, i.e., the contour becomes flatter in the right panel; this relaxes the lower bound on the vector top partner to m_Q > 2.55 TeV and therefore results in a smaller radiative correction to the Higgs mass. However, increasing ĝ_1V at the same time enlarges the tree-level Higgs quartic coupling through the ∆_Y parameter, so the total effect is that with a larger ĝ_1V coupling the Higgs boson mass increases by just 1–2 GeV, landing in the range (190 GeV, 195 GeV). Varying the mass of the right-handed stop by a few hundred GeV shifts the lower bound of the Higgs mass by a few GeV. As shown in Table III, for a specific set of gauge couplings, ĝ_5 = 1.2, ĝ_1H = 0.378, ĝ_1V = 1.5, and ĝ_2 = 0.78, when the right-handed stop mass is varied from 300 GeV to 550 GeV, the lower bound on the Higgs boson mass satisfying the requirement T < 0.18 changes accordingly from 188.5 GeV to 200 GeV. The parameter space allowed by the oblique parameters does not exclude a light Higgs boson, which is preferred by current ATLAS and CMS search results. Recent LHC experiments observed an excess of events at 125 GeV in the γγ final state, hinting that a Standard-Model-like Higgs may exist in the mass window 123 GeV–127 GeV. The light Higgs scenario can be achieved by tuning tan β. When we take tan β = 0.86, the Higgs mass is limited to m_h ≥ 122.5 GeV, depending on the specific gauge couplings and other input parameters, as shown in Fig. [3]. But it requires a heavier vector top partner, with mass larger than 3.5 TeV, to be consistent with the T parameter constraints.
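To make the size of the heavy-Higgs pull on T concrete, the sketch below evaluates the standard one-loop, leading-log expression for the Higgs contribution to the T parameter; the paper's exact formula is not reproduced in this excerpt, so this textbook form stands in for it, with an approximate on-shell value of cos²θ_W.

```python
import math

# Standard leading-log heavy-Higgs contribution to T (textbook expression,
# used here in place of the paper's formula, which is not reproduced):
#   T_h ~ -(3 / (16 pi c_W^2)) * ln(m_h^2 / m_href^2)
cw2 = 0.77                      # cos^2(theta_W), approximate on-shell value
m_h, m_href = 195.0, 120.0      # GeV, values used in the text

T_h = -3.0 / (16.0 * math.pi * cw2) * math.log(m_h**2 / m_href**2)
print(f"heavy-Higgs contribution: T_h = {T_h:+.3f}")
```

The result, T_h ≈ −0.075 for m_h = 195 GeV, shows why the positive contribution from heavy gauge boson mixing is needed to keep the total T inside the experimental band while satisfying T < 0.18.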
Another important constraint comes from corrections to the Z → bb̄ vertex. The b_L quark in this model is a linear combination of several fields and is mostly the gaugino of SU(3) (see Eq. (31)). The expression shows that the correction to the Z coupling of the right-handed bottom quark is much smaller, as it is proportional to ĝ_1H², and the value of ĝ_1H is assumed to be small in this model. The constraint from Z → bb̄ is measured through the branching ratio R_b = Γ(Z → bb̄)/Γ(Z → hadrons). The deviation of R_b due to new physics can be expressed in terms of δg_L^NP and δg_R^NP; here R_b^SM is the SM value predicted by the electroweak fit, R_b^SM = 0.21578 (+0.0005, −0.0008). The deviation δR_b, describing the difference between the observed value and the SM fit result, is given by the experimental measurement [9]. Substituting Eq.
[32] into this expression, we plot the dependence of δR_b on the two VEV parameters f_2 and f_3 in Fig. [4]. The lowest contour in that figure corresponds to the upper bound δR_b = 0.00117, which gives a loose bound on the mass of the vector top partner compared with the T parameter constraint. For comparison, we adopt the same two sets of gauge couplings used in the T parameter analysis to evaluate δR_b. In the left panel of Fig. [4], the gauge couplings are ĝ_5 = 1.2, ĝ_1H = 0.378, ĝ_1V = 1.5, and ĝ_2 = 0.78; requiring δR_b < 0.00117 limits the mass of the vector top partner to m_Q ≥ 1.63 TeV. In the right panel, the gauge couplings are taken to be ĝ_5 = 1.2, ĝ_1H = 0.37, ĝ_1V = 2.5 and ĝ_2 = 0.78. Since both δg_{Zb_Lb_L} and δg_{Zb_Rb_R} decrease as ĝ_1V increases, we obtain the weaker bound m_Q ≥ 1.5 TeV on the vector top partner. It can be seen that the T parameter, which measures the amount of custodial symmetry breaking, constrains the parameter space more stringently; by contrast, the measurement of Z → bb̄ gives a rather loose and negligible bound on the mass of the vector top partner in this model. Assuming that ĝ_5 and ĝ_1V are relatively large and ĝ_1H is much smaller, and tuning the other parameters, the theory can accommodate a Higgs boson with mass in the range (122.5 GeV, 200 GeV) after the electroweak constraints are taken into account. Notice that in the case of a light Higgs boson, we generally require tan β ∼ (0.8–0.9) and a large m_Q in order to satisfy the T < 0.18 requirement.
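For orientation, the sketch below maps small shifts of the Zb_Lb_L and Zb_Rb_R couplings onto δR_b using the standard leading-order relation; the numerical shifts fed in are placeholders for illustration, not values extracted from Eq. [32].

```python
# Standard leading-order mapping from vertex shifts (dgL, dgR) to the
# deviation of R_b = Gamma(Z -> bb)/Gamma(Z -> hadrons).
Rb_SM = 0.21578                    # SM electroweak-fit value quoted in the text
s2w = 0.2312                       # sin^2(theta_W), approximate
gL = -0.5 + s2w / 3.0              # SM Z b_L b_L coupling
gR = s2w / 3.0                     # SM Z b_R b_R coupling

def delta_Rb(dgL, dgR):
    """First-order shift of R_b for small new-physics vertex corrections."""
    return 2.0 * Rb_SM * (1.0 - Rb_SM) * (gL * dgL + gR * dgR) / (gL**2 + gR**2)

# Hypothetical left-handed shift of -1e-3 (illustrative only):
print(f"delta_Rb = {delta_Rb(-1.0e-3, 0.0):+.5f}")   # ~ +0.00077 < 0.00117
```

A left-handed shift of order 10⁻³ already saturates a sizable fraction of the experimental bound, which is why the Z → bb̄ constraint, though looser than T here, is still worth tracking.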
IV. CONCLUSIONS
In this paper, I have shown that adding extra singlet chiral superfields that interact with the link fields can trigger the gauge symmetry breaking, and that with an appropriately arranged superpotential the VEVs lead to spontaneous supersymmetry breaking at the same time. I have also added two chiral superfields transforming in the SU(2) adjoint and SU(3) adjoint representations, respectively, to lift the moduli, so that no light mode remains after the gauge symmetry breaking. Due to the nondecoupling D-term effects of the heavy fields, a larger Higgs quartic coupling is obtained. We explicitly demonstrate that in the large SUSY-breaking limit, the effective low-energy D-term is the same as in the unbroken gauge theory. Since the gauge couplings of the extra gauge groups SU(5) × U(1)_V, under which the Higgs bosons are charged, are taken to be strong, a moderate O(1) tan β makes the Higgs mass heavy enough at tree level. After taking the radiative corrections into account, the Higgs mass can be raised well beyond the LEP bound. | 2014-06-09T06:21:18.000Z | 2012-04-30T00:00:00.000 | {
"year": 2012,
"sha1": "3823070f29f0074ac78a0a46469b3cb7bcffd2b8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1204.6622",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3823070f29f0074ac78a0a46469b3cb7bcffd2b8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
221238923 | pes2o/s2orc | v3-fos-license | Integrity Monitoring of Multimodal Perception System for Vehicle Localization
Autonomous driving systems rely heavily on the quality of the data from sensors for tasks such as localization and navigation. In this work, we present an integrity monitoring framework that can assess the quality of multimodal data from exteroceptive sensors. The proposed multisource coherence-based integrity assessment framework is capable of handling highway as well as complex semi-urban and urban scenarios. To achieve such generalization and scalability, we employ a semantic-grid data representation, which can efficiently represent the surroundings of the vehicle. The proposed method is used to evaluate the integrity of sources in several scenarios, and the integrity markers generated are used for identifying and quantifying unreliable data. A particular focus is given to real-world complex scenarios obtained from publicly available datasets where localization integrity requirements are of high importance. These scenarios are examined to evaluate the performance of the framework and to provide proof of concept. We also establish the importance of the proposed integrity assessment framework in context-based localization applications for autonomous vehicles. The proposed method applies the integrity assessment concepts in the field of aviation to ground vehicles and provides the Protection Level markers (Horizontal, Lateral, Longitudinal) for perception systems used for vehicle localization.
Introduction
The second half of the last decade has seen a significant emergence of commercially available vehicles with autonomous driving capabilities. We can confidently say that the status of autonomy in vehicles is well into the realm of Society of Automotive Engineers (SAE) level 2 [1]. While researchers and industry are rapidly moving towards SAE level 3 systems that can dramatically improve driving safety and efficiency, monitoring the integrity of the sources and processes used in such systems can often pose challenges [2]. In [3], the classical integrity concepts used in aviation are transposed to integrity requirements for ground vehicle localization. Using road-safety-related statistics and the geometry of roads and vehicles, [3] derived bounds for localization error in both highway and urban scenarios. They further distributed the derived total integrity risk to allocate integrity levels to every subsystem present in autonomous vehicles. In this work, we focus on the integrity assessment of perception data sources such as vision, LiDAR, map, etc. Most advances in this area explicitly address the task of integrity monitoring of data sources by introducing redundancy in sensors [4,5], using sensors with advanced features [2,6], monitoring repetitive journeys [7], or assuming one source (often high-quality digital maps) as reliable ground truth [8,9]. While adding data redundancy (often different GPS receivers for map-matching and sensor fusion [5]) can monitor the integrity of processes, the integrity of data sources has to be largely assumed. Only a small number of works like [10] and [7] consider digital maps as a source with probabilities of error. However, to achieve context-aware integrity monitoring, the quality of every data source must be assessed individually rather than assumed.
Problem Statement
Semi-urban and urban environments often contain a multitude of intersections, roundabouts, road-splits, and merges compared to highway scenarios. As discussed in Section 1, multimodal data from different sources are used to achieve accurate localization in such scenarios. Building upon the framework presented in [18], finding a generalized common model for the representation of data from all sources is the primary objective of this work. Even though works like [12] and [17] propose geometrical models for several types of intersections, they are limited to a single perception data source and digital maps. They also require prior classification of intersections to reliably fit the predefined models to the data. On the other hand, sensors used in intelligent vehicles have considerably different behavior and output in such scenarios. Hence, the rest of this section is focused on how data from different sources are used in complex scenarios. We also examine the possible errors associated with these use cases and discuss the applicability issues of a simple common geometrical model (e.g., the polynomial model in [18]) in these situations.
Traditionally, vision data is used to detect ego lane markings and/or lanes parallel to the ego lane using a curvature-based model. In urban scenarios, such lane detection models fail due to different types of lane markings (e.g., stop lines, road separation markings, etc.), orientation (e.g., lane markings from other road sections in the junctions) and complex curvatures (e.g., splitting and merging lane markings). Another approach using visual data is to detect the drivable road region in front of the vehicle. However, due to the unforeseeable shapes of possible road segment detections, modeling such output with a geometrical model is difficult. Intersections with multilane branch roads can have a large common region at the center, which can limit the observability of other road branches through visual inputs.
It is reasonable to assume that vehicles travel slowly and stop more often in semi-urban and urban scenarios than on highways. GPS receivers are known to perform poorly in slow-moving vehicles [19]. Combined with the fact that the presence of buildings and other obstructions can cause multi-path effects or even outages of signals [2], GPS receivers experience classical localization problems in urban environments.
With the exception of a few advanced and proprietary Geographic Information Systems (GISs, e.g., Google Maps), publicly available GIS sources lack accurate road properties (lane or road widths, locations of lane splits and merges at junctions, etc.) and strongly depend on rule-based rendering to display maps. The discrepancies observed while overlapping the satellite view and rendered map structures from different GISs, as shown in Figure 1, are examples of the limitation of this approach. GPS tracking of the vehicle is accurate in the satellite view of the junction, which includes a lane change to the leftmost lane of the highway for a left turn and a smooth turn through the left side of the link road. However, in the rendered road structure view of all the map sources, the track section corresponding to the lane change appears to be wrong, as it lies outside the boundary of the road structure. It is also worth noticing that none of the GISs represents roads with their actual width, but rather with rule-based dimensions. This is evident from the identical rendered width of two highway sections despite their different numbers of lanes. Likewise, the modeling of junctions is also considerably different in each map source, particularly between Google Maps and OpenStreetMap. Hence, the inclusion of map data in the localization process is suboptimal in urban and semi-urban scenarios and forces us to consider the map as a data source with an associated instantaneous integrity rather than as ground truth. While data from vision, GPS, and maps add complexities and impose limitations, LiDAR, on the other hand, can provide useful data in urban and semi-urban environments. It can observe the ego road and other road branches efficiently. By using the reflectivity information available in LiDAR data, we can detect bright surfaces like lane markings and curbs [15]. Though LiDAR poses challenges in the detection and modeling of features, as in the case of vision, the accurate 3D information available makes it an important source for representing the structure of a large urban scenario.
The integrity monitoring method in [18] provides a weighting scheme for data sources that infers the cause of inconsistencies observed in the data-fusion method at a given time. For any data source combination that can be represented in a common frame and with a common model in that chosen frame, the cross-consistency analysis proposed in [18] can be applied. However, the discussion presented in this section shows that developing a common model is difficult when different sensor modalities and diverse features are introduced to the system in order to accommodate urban scenarios. To this extent, we could not find any integrity assessment solution in the literature that can handle more than two perception data sources and a wide variety of scenarios.
Contributions
The paper presents the following contributions based on the problem statement outlined above.
1. Defining a common reference frame and formalizing a common model to represent all data sources in all scenarios.
2. Prototyping an integrity assessment framework using the common model and providing proof of concept.
3. Analyzing the performance of the proposed framework using publicly available datasets and comparing it with other state-of-the-art integrity monitoring solutions from the literature.
Methodology
The framework proposed for integrity assessment in this work is given in Figure 2. The Detection Block includes sensor-specific routines to detect features that are relevant to different data fusion algorithms described in Section 2. The Rendering Block uses GPS position to extract data from surrounding map regions and applies rule-based rendering to reconstruct the geometrical structure of the area. The obtained information is represented in a common frame using a common model. In this work, the common reference frame is chosen as the ego frame of the vehicle, as the transformations between the ego frame, camera frame, LiDAR frame, and GPS frame can be determined by calibration procedures [20]. A decision algorithm is used to decide whether the optimization of localization is required in case of unknown transformations between frames of data and the common reference frame (in our case, map frame to ego frame). Once the required optimization is achieved, coherence between data representations is evaluated and integrity is assessed for each source. In this section, we outline the specific techniques and concepts used in the framework presented in Figure 2.
Detection
The purpose of the Detection Block is to extract the same information (features) from each data source. From the literature review, we identify three features that are most commonly used in state-of-the-art localization methods in urban scenarios: lane markings, drivable roads, and the structure of the surroundings of the vehicle. Here, we limit the surrounding structures to grass patches/vegetation and curbs and avoid building facades and other objects due to the complexities of their detection. Indeed, any feature can be used in this process if it is detectable from every data source considered. The methods used to detect these features from each source are explained here.
Vision
To accommodate varieties of lane markings present in different scenarios, all possible markings are detected. Images from cameras are transformed to bird's-eye view (BEV) using camera calibration. Intensity-based segmentation is used to detect all possible white lane markings. After detection of all the candidate lane markings, blob analysis is used to reject poor detections [14]. Seed-based wavefront segmentation is used to detect dark road regions with asphalt and regions with grass patches. For road segmentation, seeds are selected in front of the vehicle and using propagating waves from these seeds, connected road regions are segmented. Seeds for grass patch detection are selected by color-based keypoint detectors. After these detections, every pixel in the BEV can be classified into lane markings, roads, other surfaces, or unclassified.
LiDAR
A subset of LiDAR data containing points that lie inside a 3D region of interest (ROI) is selected. Points on the road and on the edges of the road are classified using 3D gradients. The ROI is divided into smaller patches in the XY plane, and the points belonging to each patch are examined for their Z values. This helps to differentiate between road segments, curbs, dividers, vegetation, etc. using the technique presented in [21]. Points with high reflectivity are also selected, as they correspond to bright surfaces such as lane markings and railings. These are further classified into reliable lane marking detections by combining their position with road regions. As a result of these detection steps, every point in the ROI is classified as lane markings, roads, other surfaces, or unclassified.
Map Handling
OpenStreetMap (OSM) is used in this work as a GIS source. OSM provides nodes corresponding to ways, grass patches, and railings, etc. However, finding relevant geometrical information in a vehicle's surroundings from maps involves two key components: location and orientation of the vehicle [10]. Using location measurements, all of the relevant map nodes in the ROI are selected and the map data is transformed into the vehicle's ego-frame using the orientation of the vehicle. The location estimate is provided by the GPS sensor, whereas the orientation estimate is given by the on-board Inertial Measurement Unit (IMU). Once the map nodes are represented in ego frame, a rule-based rendering algorithm is used to create a geometrical sub-map for the ROI. The number of lanes, lane width (when available), location of road boundaries, boundaries of curbs, dividers, and vegetation, etc. are used in the rendering process, producing an enriched geometrical model of the environment from OSM. In works like [13,14], custom-made high-definition maps (HD maps) that contain lane marking information and accurate road structure information are used. Even though the exact location or type of lane markings are unavailable in OSM, assuming continuous lane markings on the left side of the leftmost lane, right side of the rightmost lane, and dashed lane markings for the lanes in the middle, approximate lane level information can be produced. In case of missing lane width information, the standardized road construction guidelines of the country are used to render the map. However, it is evident that errors in GPS positioning or orientation estimation can greatly affect the accuracy of map data extraction and cause uncertainties in map rendering [10], especially for the exact locations of lane markings.
Representation
To be able to deal with features and geometries of different types and shapes, a 2D feature grid (FG) is proposed as the model. An FG consists of an array of cells where each cell c_i represents a 20×20×100 cm block in the real world. Four feature labels (LBs) are assigned to cells in the FG according to the type of feature: (1) LB_r, road; (2) LB_l, lane marking; (3) LB_o, other surfaces; (4) LB_u, unclassified/unidentifiable. The blocks corresponding to each of the cells are examined for the information they contain. The type of feature with the highest ratio inside a block is used to assign the respective label to the cell. Each data source produces an FG following this criterion, as shown in Figure 3. Along with labels, it is important to model the confidence of the data provided by each sensor. The accuracy of LiDAR data decreases as the distance from the sensor to the measurement location increases [22]. On the other hand, the Inverse Perspective Mapping (IPM) transformation used to create the bird's-eye view images from the actual images introduces increasing deformation with distance from the camera due to camera calibration errors. To account for these facts, a confidence function, drawing inspiration from [18], is proposed for all relevant FGs. Using the concept of an Inverse Distance Weighting (IDW) function presented in [23], the weight w_ij associated with a cell C_ij is computed from the inverse of the distance from the sensor to the cell center, where x_ij and y_ij are the distances to the center of C_ij from the sensor position and h_s is the height of the sensor, and the weights are then rescaled with the min–max normalization operator x̃ = (x − x_min)/(x_max − x_min). Hence, the total representation of the data from each source has two components: the labels and their importance, denoted by the weights. Other source-specific weighting functions using the homography of the image transformations and LiDAR data acquisition models can also be used for this purpose. However, data sources like maps use uniform weights for all the cells in their FGs, due to the fact that they are not measured but simply extracted.
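The sketch below gives a minimal reading of the cell-weighting step. The exact IDW exponent in the original equation is not recoverable from this excerpt, so the inverse Euclidean distance 1/√(x² + y² + h_s²) is an assumption; only the min–max normalization is spelled out in the text, and the sensor height used in the example is a typical value, not one quoted by the authors.

```python
import numpy as np

def cell_weights(xx, yy, h_s):
    """Per-cell confidence weights for a feature grid.

    xx, yy : 2-D arrays of distances (m) from the sensor to each cell center
             along the x and y axes of the ego frame.
    h_s    : sensor mounting height (m).
    The inverse-distance form is an assumed reading of the paper's IDW
    function; the min-max normalization follows the text.
    """
    w = 1.0 / np.sqrt(xx**2 + yy**2 + h_s**2)    # inverse Euclidean distance
    return (w - w.min()) / (w.max() - w.min())   # min-max normalization to [0, 1]

# Example: 20 cm cells over the ROI used later (-15..25 m in x, +/-15 m in y)
x = np.arange(-15.0, 25.0, 0.2)
y = np.arange(-15.0, 15.0, 0.2)
xx, yy = np.meshgrid(x, y)
w_lidar = cell_weights(xx, yy, h_s=1.73)         # 1.73 m: typical roof-mounted LiDAR
```

Map-based FGs would skip this step entirely and carry uniform weights, matching the remark above that extracted (rather than measured) sources get no distance-dependent confidence.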
Integrity Analysis
The treatment of different sensors as multimodal data sources with a common frame of representation and the same dimensionality allows us to use the definitions of integrity presented in the domain of data sciences. Integrity measures overall accuracy and consistency of data sources [24]. While accuracy is defined as the correctness of validated data, consistency refers to the measure of coherence between them. Data sources with high consistency can be treated as reliable, and their integrity can be expressed as a function of coherence with respect to other data sources.
Let S = {s_1, s_2, s_3, · · · , s_N} be the set of N sensors and s_iFG the feature grid provided by sensor s_i. A cell c_k with feature label LB_x in s_iFG is defined as consistent if there is at least one matching cell with LB_x in the 3×3 neighborhood around c_k in s_jFG. By extension, a matching operation f_m between FGs is defined as f_m(s_iFG, s_jFG) = N_m^{s_iFG} / N_T^{s_iFG}, where N_m^{s_iFG} is the number of matching cells in s_iFG and N_T^{s_iFG} is the total number of applicable cells in s_iFG, i.e., cells with labels other than LB_u. After computing the matches between all possible combinations, the integrity associated with each source is computed from its pairwise coherence scores with the other sources.
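Since the matching rule is fully specified (same label within a 3×3 neighborhood, unclassified cells excluded), it can be prototyped directly; the only assumption below is the final normalization of the per-source coherence sums, chosen so the integrity values of all sources sum to one, which is consistent with the weights reported later (e.g., 0.456/0.349/0.165).

```python
import numpy as np

# Labels: 0 = LB_u (unclassified), 1 = LB_r, 2 = LB_l, 3 = LB_o
def f_m(fg_i, fg_j):
    """Coherence of fg_i w.r.t. fg_j: fraction of classified cells in fg_i
    whose label also appears in the 3x3 neighborhood of the same cell in fg_j."""
    H, W = fg_i.shape
    matched = total = 0
    for r in range(H):
        for c in range(W):
            lb = fg_i[r, c]
            if lb == 0:                              # skip unclassified cells
                continue
            total += 1
            nb = fg_j[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            matched += int((nb == lb).any())
    return matched / total if total else 0.0

def integrity(fgs):
    """Integrity of each source from its pairwise coherence with the others.
    Normalizing the sums to one is an assumption consistent with the
    reported integrity weights."""
    raw = [sum(f_m(fi, fj) for j, fj in enumerate(fgs) if j != i)
           for i, fi in enumerate(fgs)]
    s = sum(raw)
    return [r / s for r in raw] if s else raw

fgs = [np.random.randint(0, 4, (150, 200)) for _ in range(3)]  # toy FGs
print(integrity(fgs))
```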
Localization Optimization
The integrity analysis mentioned in Section 3.4 assumes that the localization of a vehicle is accurately known, i.e., the localization information used in map extraction is reliable. But in real-world applications, GPS positioning-even from an inertial/dead reckoning coupled GPS receiver-can have errors due to multipath effects, outages, or drifts. Inherently, error in localization affects the consistency of map data to the other sources, hence impacting the integrity of the whole system. Hence, we developed a localization optimization procedure that uses semantic-level information from data representations of sources. It can efficiently allow integrity assessment and also identify particular defaults such as map offsets or inconsistent map sections.
In this work, a particle filter [25] is developed for map-matching to improve the localization. The procedure for the localization optimization in the ego frame of the vehicle, with its decision criteria, is given in Algorithm 1. In the first step, new position and orientation measurements from GPS and IMU are compared with the current best localization estimate. If the new measurements (x_m: [x_m, y_m, θ_m]) are not within the non-holonomic constraints of the current state (X_state: [x_state, y_state, θ_state]) of the vehicle, they are detected as outliers [26]. Conversely, consistent position and orientation measurements are used to render a map from the database, and the coherence between the FGs of the map and the other sources is computed. If sufficient coherence is observed (a matching score greater than the empirically derived threshold for f_m(s_iFG, s_jFG), considering different sensors and scenarios), localization optimization is not performed and the data representations from each source are used for the integrity assessment. In case of poor coherence between the combinations, a sequential localization optimization using particle filters is performed. The transformation function t on the map FG (MFG) used to maximize the coherence between sources applies c′ = R(θ)c + T to each cell position c, where R(θ) is the 2D rotation matrix constructed from θ and T is the 2D translation vector constructed from the x and y translations.
In the sequential localization optimization, the coherence between the map (MFG) and each of the other sources (s_iFG) is first maximized in the ego frame along the y direction (lateral) by iteratively distributing particles around the best-matching localizations. The lateral offset estimate y* and the final distribution of particles from this step are used to initialize the second particle filter, which maximizes the match along the x (longitudinal) and θ (heading) dimensions. The resulting optimized localization (x*, y*, θ*)_{s_iFG} for each s_iFG is checked for consistency by thresholding the distance between the estimates. If they are not consistent, the coherence between all s_iFG is computed. An issue with the map structure is identified if the coherence between the other sources (other s_iFG combinations, e.g., LiDAR–vision) is good but the localization optimization of these sources cannot produce consistent localizations (within 2σ uncertainty bounds). If the localization estimates for each sensor combination are consistent, the estimate that gives the best coherence is chosen and the integrity assessment is carried out. This estimate is also used to update the current localization estimate for the next time step.
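A compact sketch of the two ingredients just described: the rigid transformation t applied to the map FG cell centers, and the first (lateral) stage of the sequential particle search. The resampling scheme (keep the best 20%, halve the spread each iteration) is a simplification for illustration, not the authors' exact filter.

```python
import numpy as np

def transform_cells(cells, x, y, theta):
    """Apply t(.) to map-FG cell centers: c' = R(theta) @ c + [x, y]^T."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return cells @ R.T + np.array([x, y])

def optimize_lateral(score_fn, n_particles=200, n_iter=5, sigma=1.0):
    """Stage 1: particles over the lateral offset y only.

    score_fn(y) should return the map-vs-sensor coherence f_m after shifting
    the map FG laterally by y. Returns the offset estimate y* and the final
    particle set, which seeds the second (x, theta) stage."""
    particles = np.random.normal(0.0, sigma, n_particles)
    best = particles
    for _ in range(n_iter):
        scores = np.array([score_fn(p) for p in particles])
        best = particles[np.argsort(scores)[-n_particles // 5:]]  # best 20%
        sigma *= 0.5                                              # shrink search
        particles = np.random.normal(best.mean(), sigma, n_particles)
    return float(best.mean()), particles
```

The second stage is identical in structure but distributes particles over (x, θ) pairs and scores them through the full transform_cells transformation.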
Experiments and Discussions
Experiments are conducted with scenarios available in the KITTI benchmark suite [27] to establish proof of concept. Real-Time Kinematic (RTK) GPS fixes in these datasets are corrupted with noise generated using the GPS-noise simulation model proposed in [28] to simulate poor GPS localization fixes. Outliers larger than the 2σ variance of the GPS-noise simulation model are used to replace RTK GPS fixes in random sections of the trajectory. Finally, 5% of the RTK GPS fixes are randomly removed from the trajectory to emulate GPS outages as they may occur in generic GPS receivers. Since different data sources have different spatial ranges, a 3D region of interest (ROI) in the vehicle's ego frame is established. Its limits in the XY plane are chosen as 25 m in front of the vehicle (positive X axis), 15 m behind (negative X axis), and 15 m at each side (Y axis). Since vision cannot provide data behind the vehicle or up to the front bumper of the vehicle, the ROI of vision is limited from 3.5 m to 25 m along the positive X axis.
Even though the vision data used in this work does not cover the back view of the vehicle, the other two major sources, LiDAR and map, can provide information behind the vehicle, justifying the choice of the limit on the negative X axis.
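A minimal sketch of the GPS degradation procedure just described. The noise level, outlier magnitudes and per-index (rather than per-section) outlier placement are simplifications; the paper uses the dedicated noise model of [28] and replaces whole trajectory sections.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade_rtk(fixes, sigma=1.5, outlier_frac=0.03, outage_frac=0.05):
    """Emulate a generic receiver from RTK ground truth.

    fixes : (N, 2) RTK positions in a local metric frame.
    sigma and outlier_frac are placeholder values; outage_frac = 0.05
    matches the 5% of fixes removed in the text."""
    n = len(fixes)
    noisy = fixes + rng.normal(0.0, sigma, fixes.shape)          # additive noise
    out = rng.choice(n, max(1, int(outlier_frac * n)), replace=False)
    noisy[out] += (rng.choice([-1.0, 1.0], (len(out), 2))        # > 2-sigma jumps
                   * rng.uniform(2.0 * sigma, 6.0 * sigma, (len(out), 2)))
    keep = np.setdiff1d(np.arange(n),                            # emulate outages
                        rng.choice(n, int(outage_frac * n), replace=False))
    return noisy[keep], keep                                     # surviving indices
```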
The discussion of the results has three parts. First, we compare the performance of the proposed method to the method in [18]. This includes a comparison of integrity markers in the datasets presented in [18] and showcases the improvements provided by the new method in handling the fault and feasibility predictors (FPs) produced by the previous method. FPs are the markers generated when the fitting of the common model to the data sources is not possible or not feasible. These markers reflect the limitations of the method in [18], which mainly arise when the method is applied to non-highway scenarios. The set of five FP markers defined is:
• FP_m: Not enough nodes in the map for model fitting;
• FP_v: Not enough lane markings for model fitting;
• FP_g: GPS measurement is not available or an outlier;
• FP_s: Vehicle not moving or moving very slowly;
• FP_t: Vehicle performing a hard turn.
The second part of this discussion considers more datasets in semi-urban and urban scenarios to evaluate the integrity estimation of sources in complex situations such as junctions, road splits, and merges, etc. In the final part, we compute classical integrity markers from our framework and compare them with values presented in [3].
Integrity Marker Comparison
In this section, we compare the results presented in [18] with the results obtained from the new method. The key difference between these two methods is the parameter they use for the integrity computation. The former uses the error observed in model fitting to evaluate integrity, whereas the latter uses coherence between data representations to achieve the same. Hence, the contribution of error by each sensor and the contribution of coherence by each sensor are used for this analysis of the results of these methods, respectively. The same errors are introduced in the GPS for each algorithm, and the results obtained from the dataset 2011_09_26_drive_0029 are shown in Figure 4.
The primary advantage of the proposed method is the ability to evaluate the integrity in conditions where FPs are produced due to the limitations of the model-based integrity analysis employed in [18]. The stopping of the vehicle between frame numbers 187 and 265 and a hard left turn at the junction from 265 to 330 cause poor model extraction using the previous method, resulting in an unusable integrity evaluation. Consistent coherence is observed during the same scenario, as shown in Figure 4b, using the new method, providing meaningful integrity estimation. Figure 5a shows an example frame (207) in this section, where polynomial model estimation fails to represent data from sources. On the other hand, the FGs are able to represent the scenario well. After frame 330, the vehicle enters a curved link road with challenging light conditions such as shadows and oversaturated road sections, as shown in Figure 5b, causing large model-fitting errors in vision, shown in Figure 4a. Though a decrease in the coherence is observed, the addition of LiDAR and the introduction of new features help the new method provide more consistent integrity markers. In Figure 6, the results of integrity assessment in a highway scenario are presented, where the old method performed reliably. The FP_m instances observed in this dataset are due to the lack of map nodes to reliably fit the polynomial model in straight-line road sections. In the new method, the model fitting is replaced with the FG data representation, which eliminates such errors in modeling. The comparison of integrity markers in specific cases presented in [18] with the integrity markers provided by the new method is given in Table 1. A general tendency of improved integrity values is observed across all datasets and scenarios. For example, in the second row of Table 1, the integrity weight of vision computed using the old method was lower due to the improper detection of curved lane markings as straight lane markings. This resulted in an inconsistent polynomial model compared to the other two data sources, causing a low integrity weight of 0.175. But using the new method, drivable road detection along with surrounding structure detection improved the consistency of vision data with the other sources, resulting in a higher integrity value of 0.612. The proposed method is proven to be able to handle every situation where an FP was raised by the old method. In the first row of Table 1, the lack of sufficient map nodes on a straight road segment made model-based integrity estimation impossible, as confirmed by the FP_m flag. The new approach enables integrity estimation and provides an integrity weight of 0.422. It is worth noting that a high integrity value is not observed because of poor map rendering due to the lack of correct lane width information from the map. Incorrect road segment selection from the map does not affect the new method, as it uses all of the neighboring roads in integrity estimation. Table 1. Results obtained using the proposed method and the method presented in [18].
Complex Situations
This section is dedicated to analyzing the behavior of the integrity assessment system in some of the selected complex scenarios present in the KITTI dataset. In Figure 7a, an example of a semi-urban road junction is shown. Due to the lack of information from the map, the rendering process failed to reconstruct the continuity of lanes at the intersections. On the other hand, vision and LiDAR data detected all of the branch roads at the junction and managed to perceive the width of each of these road sections accurately. This results in a lower integrity value for the map at this junction (Frame numbers: 310-320) compared to other sources, as shown in Figure 4c.
One of the main reasons behind the proposed data representation is that it is an improvement over other existing geometrical models for intersections, which fail to accommodate partially correct data. Figure 7b shows a partial road detection from LiDAR due to the difference in elevation of one of the road branches in the scenario. Even though the data available from LiDAR is not complete, the part that is detected is coherent with both vision and map. In fact, LiDAR has higher integrity than vision in this comparison, not only because of the coherence of its road detections, but also because the available grass-patch detection compensates for the partial road detection. The integrity values in this scenario (Frame numbers: 120-200, dataset 2011_09_26_drive_0011) are computed to be around 0.456, 0.349, and 0.165 for LiDAR, vision, and map, respectively.
Performance of Integrity Monitoring
To evaluate and compare the proposed integrity framework against the integrity concepts transposed from civil aviation, the Horizontal Protection Level (HPL) is computed. According to [29], the HPL is the radius of a circle in the horizontal plane that describes the region assured to contain the indicated horizontal position. It is the statistical bound for horizontal position error with a confidence level derived from the integrity risk requirement of an application. We also compute the Lateral Protection Level (LatPL) and Longitudinal Protection Level (LonPL), as proposed in [3]. The illustration given in Figure 8 shows the geometrical interpretations of these protection levels with respect to the ego frame of the vehicle and the feature grids. Extending these concepts, we use the final distribution of the particles from the localization optimization particle filter described in Section 3.5 to compute LatPL, LonPL, and HPL. The lateral and longitudinal positions of all the particles that belong to the 95th percentile of the coherence matching scores are modeled using a Gaussian distribution. LatPL, LonPL, and HPL are then computed using the average standard deviation of the particle distributions from each sensor combination used to optimize localization, where σ²_CX and σ²_CY are the lateral and longitudinal variances of the particles from the vision–map optimization result and σ²_LX and σ²_LY are the lateral and longitudinal variances of the particles from the LiDAR–map optimization result. The results obtained from the HPL evaluation of two of the datasets presented in Section 4.1 are shown in Figure 9. Using historical HPL data available from the European Global Navigation Satellite System Agency [30], the average value of the HPL over the last 5 years (from 01-2015 to 07-2020) for the zone (Zurich) nearest to the dataset location (Karlsruhe) is calculated as 8.1 m. The LatPL and LonPL limits follow from the total integrity levels and the allocation of integrity risks derived in [3] and are shown in Figure 9. In highway scenarios, the LatPL computed using our method is completely within the LatPL limit derived by [3], whereas in urban scenarios, the LatPL from our method is under the limit 91% of the time. On the other hand, the HPL computed using our method shows good coherence with the historical HPL calculated using [30]. However, the LonPL computations are, most of the time and in both scenarios, outside the LonPL limit derived by [3]. This is due to the fact that the sensors considered in this work are better at providing lateral information than longitudinal information [3]. This is evident from the highway scenario in Figure 9a, where the road is straight without any other significant information to bound the sensor data in the longitudinal direction. In Figure 9b, sections where the LonPL computed from our method is closer to the LonPL limit of 1.45 m contain curved road sections or other distinguishable surfaces, which help to reduce the LonPL considerably. Hence, the results presented in this section demonstrate the capability of the proposed method to assess the integrity of perception sensors used in localizing vehicles with the accuracy required for urban and highway navigation.
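The explicit protection-level formula is garbled in this excerpt, so the sketch below is a plausible reconstruction from the stated ingredients: keep the particles in the 95th percentile of the coherence scores, fit a Gaussian per axis, average the standard deviations of the vision–map and LiDAR–map results, and scale by a confidence multiplier k (assumed here, e.g., k ≈ 2 for roughly 95% of a Gaussian); combining LatPL and LonPL into HPL by the Euclidean norm is likewise an assumption.

```python
import numpy as np

def protection_levels(pos_c, score_c, pos_l, score_l, k=2.0):
    """LatPL, LonPL, HPL from particle spreads (plausible reconstruction).

    pos_c, pos_l : (N, 2) particle positions [lateral, longitudinal] from the
                   vision-map and LiDAR-map optimizations, respectively.
    score_c/l    : their coherence matching scores.
    k            : assumed confidence multiplier.
    """
    def best_spread(pos, score):
        kept = pos[score >= np.percentile(score, 95)]  # top matches only
        return kept.std(axis=0)                        # [sigma_lat, sigma_lon]

    s_c, s_l = best_spread(pos_c, score_c), best_spread(pos_l, score_l)
    lat_pl = k * 0.5 * (s_c[0] + s_l[0])               # average over sensor pairs
    lon_pl = k * 0.5 * (s_c[1] + s_l[1])
    hpl = float(np.hypot(lat_pl, lon_pl))              # assumed horizontal bound
    return lat_pl, lon_pl, hpl
```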
Conclusions
This work presents a framework for integrity monitoring of sources used in the localization of autonomous vehicles. The limitations of common geometrical models in representing multimodal data sources are identified in this work. To overcome these issues, a semantic feature grid model is proposed that can geometrically represent different features using labels. A function for coherence evaluation between feature grids is formalized to iteratively optimize the localization as well as to assess the integrity of data sources. The framework is tested using different scenarios from datasets, and the results show the versatility of the proposed model, which is able to provide reliable and consistent integrity estimation in highway as well as semi-urban and urban environments. This method is proven robust against inconsistencies in feature detections such as partial detections, occlusions, and poor map rendering. The presented method is scalable, since it can be implemented with any number of sensors and digital map sources. The only requirement for the applicability of this framework is the ability to detect common features from all of the data sources and represent them geometrically in the proposed feature grid representations. This work also illustrates how classical integrity markers like protection levels can be transposed for perception data sources used in autonomous vehicles.
Future Works
The rule-based map rendering technique used in this method is observed to contribute several inconsistencies, which makes it difficult to isolate map rendering errors from GPS positioning errors. We propose the use of high-definition maps, which are enriched with globally localized lane-level information, to address this issue. Accurate maps will improve the coherence estimation between features detected from the other data sources, such as stop lines, pedestrian crossings, lane merging information, road structure information, etc. It will also be important to study map-rendering techniques that improve multi-source perception integrity analysis by including precise building footprints, road width information, lane markings, and traffic sign localization. | 2020-08-20T10:11:45.391Z | 2020-08-01T00:00:00.000 | {
"year": 2020,
"sha1": "7f3b8bc5e7434fb4d0c5c6a0a09090958fe67df0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/20/16/4654/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "06f2c4e6a85d725efed2844fde65e5d74be9a20c",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
109840062 | pes2o/s2orc | v3-fos-license | Tribological properties of short carbon fibers reinforced epoxy composites
Short carbon fiber (SCF) reinforced epoxy composites with different SCF contents were developed to investigate their tribological properties. The friction coefficient and wear of the epoxy composites, slid in a circular path against a steel pin inclined at 45° to a vertical axis and against a steel ball, significantly decreased with increased SCF content due to the solid lubricating effect of SCFs along with the improved mechanical strength of the composites. Scanning electron microscope (SEM) observation showed that the epoxy composites were less sensitive than the neat epoxy to surface fatigue caused by the repeated sliding of the counterparts. The tribological results clearly showed that the incorporation of SCFs is an effective way to improve the tribological properties of epoxy composites.
Introduction
Polymers are among the most successfully exploited materials due to the incredible variety of chemical structures available and their relatively low cost, ease of processing, acceptable thermal and environmental resistance and recyclability [1]. Generally, polymers have to show good wear resistance in order to be suitable for tribological applications. However, polymers have a low load-carrying capacity and short running life when they are employed in tribological applications at high speed under heavy load [2]. In addition, wear of polymers contributes to significant financial losses in industry [3].
Nowadays, tremendous interest has been raised in the scientific and industrial communities in applying polymer composites for tribological applications such as gears, cams, wheels, bearings, seals and highly wear- and scratch-resistant flexible risers, because the targeted development of polymer composites based on conventional polymers can yield new materials with structural and functional properties superior to those of the pure polymers [4−7]. Moreover, the tribological properties of polymer composites can be tailored using carbon fillers such as carbon nanotubes (CNT), carbon fibers (CF), graphene and so on [6]. However, some carbon fillers such as CNTs still have drawbacks for the development of CNT-reinforced polymer composites due to their high cost and difficult dispersion in polymer matrices [6,7]. Short carbon fibers (SCF) are one of the most popular candidates for the development of structural and functional SCF-reinforced polymer composites because of their high surface-to-volume ratio, outstanding thermal, mechanical and electrical properties and good dispersion in polymer matrices [8,9]. Zhong et al. [10] reported that SCFs improved the wear resistance of poly(ether ether ketone) (PEEK) based composites by carrying the main load between contact surfaces and protecting the polymer matrix from severe abrasion. Injection molding of fiber-reinforced polymer composites is undergoing great expansion [11,12]. Manufacturing of abrasive polymer composites is not very attractive because molds manufactured with metallic-filled polymer composites exhibit limited durability due to their extensive wear during the filling stage of injection molding of the abrasive polymer composites [11,12]. Liquid epoxy resins exhibit better mixing and processing abilities with reinforcement materials in granular or fiber form, and the mixtures result in composite materials with intermediate properties depending on the combined actions of the components. Although the mechanical properties of SCF-reinforced epoxy composites such as tensile and compressive strengths, hardness and elastic modulus have been widely investigated, reports on the tribological properties of these epoxy composites, such as friction coefficient and scratch and wear resistance, are not yet satisfactory. An understanding of the correlation between the SCF content of epoxy composites and their tribological properties is essential for successful tribological applications.
In this study, epoxy composites with different SCF contents were prepared. The mechanical properties of the epoxy composites, such as hardness and Young's modulus, were measured with a micro-indenter. The tribological properties of the epoxy composites, such as friction coefficient and wear, were investigated by sliding against a steel pin inclined at 45° to a vertical axis and against a steel ball.
Experiment details
Epoxy resin (Epolam 5015, Axson) was mixed with SCFs (M-2007S, Kreca) at different concentrations in a glass beaker and mechanically stirred at 1,500 rpm for 30 min in a water bath at 60 °C. After degassing for 20 min in a vacuum oven, hardener (Epolam 5015, Axson) was added to the mixture, followed by hand stirring for 10 min and degassing for another 15 min. The well-mixed resin was then slowly poured into a Fixiform cup mold (Struers) and cured at room temperature (RT ≈ 22−24 °C) for 24 h. The samples were demolded and post-heat treated at 80 °C for another 16 h for the following analysis and testing. The average diameter and length of the SCFs used were about 14.5 μm and 90 μm, respectively.
The surface morphology and topography of the samples were studied using scanning electron microscopy (SEM, JEOL-JSM-5600LV) and surface profilometry (Talyscan 150, Taylor Hobson) with a diamond stylus of 4 μm in diameter. For SEM measurements, the samples were coated with a gold layer to avoid charging. Three measurements on each sample were carried out to obtain an average root-mean-squared surface roughness, R_q.
The hardness and Young's modulus of the samples were measured using a micro-indenter (micro-CSM) with a pyramid-shaped diamond tip of 20 μm in diameter. The indentation test was performed in a load-control mode with a total load of 3 N. In each indentation test, the loading and unloading rates and the dwelling time at the peak load were 6 N/min, 6 N/min and 5 s, respectively. The hardness and Young's modulus of the samples were derived using Oliver & Pharr's method, and average values were taken from sixteen indentation measurements carried out at different locations on each sample [13].
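The indentation analysis itself is standard, so a brief sketch of the Oliver & Pharr reduction is given below. The Berkovich-type area function A_c = 24.5 h_c², the geometry factors (β, ε) and the diamond tip properties are typical textbook values, not parameters reported for the instrument used here, which in practice relies on its own calibrated area function for the 20 μm tip.

```python
import math

def oliver_pharr(P_max, S, h_max, nu_s=0.35, E_i=1141e9, nu_i=0.07,
                 beta=1.0, eps=0.75):
    """Hardness and Young's modulus from one load-depth curve (Oliver & Pharr).

    P_max : peak load (N); S : unloading stiffness dP/dh at P_max (N/m);
    h_max : depth at peak load (m). Material/tip constants are typical
    values, not ones reported in the paper."""
    h_c = h_max - eps * P_max / S                  # contact depth
    A_c = 24.5 * h_c**2                            # assumed Berkovich area function
    H = P_max / A_c                                # hardness (Pa)
    E_r = math.sqrt(math.pi) / (2.0 * beta) * S / math.sqrt(A_c)  # reduced modulus
    # Sample modulus from 1/E_r = (1 - nu_s^2)/E_s + (1 - nu_i^2)/E_i
    E_s = (1.0 - nu_s**2) / (1.0 / E_r - (1.0 - nu_i**2) / E_i)
    return H, E_s
```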
The tribological properties of the samples were investigated using a micro-tribometer (CSM) by sliding against a Cr6 steel pin inclined at 45° to the vertical axis, or against a Cr6 steel ball, in a circular path 3 mm in diameter for about 40,000 laps at a sliding speed of 3 cm/s under normal loads of 2 and 6 N. The diameters of the steel pin and ball were both about 6 mm. All the samples were polished with 1,200-grit papers prior to the tribological tests. Two to three measurements per sample were carried out to obtain an average friction coefficient. The widths and depths of the wear tracks were measured by surface profilometry, with four measurements per wear track averaged to obtain the wear width and depth.

Results and discussion

Figure 1 shows the Rq values of the mechanically polished epoxy composites with different SCF contents. The Rq value of the neat epoxy is about 2.32 μm. The Rq value of the composites increases almost linearly from 3.47 to 6.24 μm as the SCF content rises from 2.5 to 7.5 wt%, even though all samples were mechanically polished under the same conditions. This indicates that a higher SCF content produces rougher composite surfaces during polishing, because more SCFs protrude above the surface and more SCF-debonded sites form on it. However, increasing the SCF content beyond 7.5 wt% does not further raise the Rq value significantly, probably because the incorporated SCFs become more uniformly distributed; thus the composite with 20 wt% SCFs has a smaller Rq value (about 5.89 μm) than the one with 7.5 wt% SCFs. All the polished composites are clearly rougher than the polished epoxy.

Figure 2 shows the surface topographies of the epoxy and the epoxy composites with different SCF contents. In Fig. 2(a), the epoxy has a relatively smooth topography composed of small surface asperities, although abrasive lines produced during mechanical polishing are apparent. As shown in Fig. 2(b), incorporating 5 wt% SCFs visibly roughens the composite surface with protruded SCFs and SCF-debonded sites; possible aggregation of SCFs may also contribute to the rougher topography. As the SCF content is raised through 10 wt% to 20 wt%, the growing numbers of protruded SCFs and debonded sites further increase the surface roughness, as seen in Figs. 2(c) and 2(d).

With the SCF content further increased to 20 wt%, the hardness and Young's modulus of the composites (Fig. 3) continue to rise, the Young's modulus reaching 5.92 GPa. This indicates that incorporating SCFs in the epoxy matrix clearly improves the hardness and elastic modulus, owing to the much higher rigidity of the SCFs relative to the epoxy matrix [14−17]. Beyond 7.5 wt%, however, the gains in hardness and Young's modulus slow markedly, indicating that SCF contents above 7.5 wt% do not yield further significant improvement.
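A first-order micromechanics check helps explain why rigid SCFs raise the composite modulus. The sketch below evaluates the Voigt (iso-strain) and Reuss (iso-stress) rule-of-mixtures bounds after converting weight fraction to volume fraction; all fiber and matrix property values are assumed for illustration and are not taken from this study. Randomly oriented short fibers, as here, typically fall near the lower bound.

```python
# Hypothetical property values for illustration; the paper reports
# measured values, not a micromechanics model.
E_f, E_m = 230.0, 3.5      # fiber / epoxy Young's modulus (GPa), assumed
rho_f, rho_m = 1.8, 1.2    # fiber / epoxy density (g/cm^3), assumed

def modulus_bounds(wf):
    """Voigt (upper) and Reuss (lower) bounds for fiber weight fraction wf."""
    vf = (wf / rho_f) / (wf / rho_f + (1 - wf) / rho_m)  # volume fraction
    upper = vf * E_f + (1 - vf) * E_m                    # iso-strain bound
    lower = 1.0 / (vf / E_f + (1 - vf) / E_m)            # iso-stress bound
    return vf, upper, lower

for wf in (0.025, 0.05, 0.075, 0.10, 0.20):
    vf, up, lo = modulus_bounds(wf)
    print(f"wf={wf:.3f} -> Vf={vf:.3f}, E in [{lo:.2f}, {up:.2f}] GPa")
```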
The tribological properties of the epoxy composites were first investigated by sliding against a Cr6 steel pin 6 mm in diameter, inclined at 45° to the vertical axis, because the inclined pin behaves as a V-shaped counterpart that generates scratch-induced wear on the composite surfaces (see the inset in Fig. 4(a)). Figure 4(a) presents the friction coefficients of the epoxy composites with different SCF contents slid against the inclined pin in a circular path 3 mm in diameter for about 40,000 laps at a sliding speed of 3 cm/s under normal loads of 2 and 6 N.
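From the stated test parameters, the total sliding distance and test duration follow directly; the short check below works them out, with all inputs taken from the test conditions above.

```python
import math

laps = 40_000
track_diameter_m = 0.003   # 3 mm circular path
speed_m_per_s = 0.03       # 3 cm/s sliding speed

distance_m = laps * math.pi * track_diameter_m   # total sliding distance
duration_h = distance_m / speed_m_per_s / 3600   # test duration

print(f"sliding distance ~ {distance_m:.0f} m")   # ~377 m
print(f"test duration   ~ {duration_h:.1f} h")    # ~3.5 h per test
```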
The mean friction coefficient of the epoxy slid against the inclined pin under a normal load of 2 N is about 0.67. Increasing the normal load to 6 N slightly decreases the mean friction coefficient of the epoxy, to about 0.61. The sliding of a steel counterpart on a polymer can cause localized softening or melting of the polymer, so that molten material is transferred onto the counterpart surface to form a transfer layer [18][19][20]. The transferred polymer layer can reduce the friction coefficient by changing the rubbing mode from metal-on-polymer to polymer-on-polymer [21][22][23][24][25]. It is therefore supposed that the increased normal load of 6 N raises the frictional heat during sliding and consequently reduces the friction coefficient of the epoxy by promoting the transfer of epoxy material onto the surface of the inclined pin. During sliding, adhesion between two smooth surfaces in contact can also give rise to high friction via an effective interfacial shear strength between the contacting surfaces [26,27]. The higher wear of the epoxy under the higher normal load of 6 N produces greater surface roughening and a larger quantity of wear debris, which in turn lower the friction coefficient by weakening the interfacial shear strength between the inclined pin and the epoxy [28,29].

However, such a decrease in the friction coefficient with increased normal load is not found for the epoxy composites. Instead, the composites exhibit higher friction coefficients at all SCF contents under the higher normal load of 6 N than under 2 N. It is known that incorporating carbon fillers in a polymer improves its thermal stability, owing to the much higher thermal conductivity of carbon fillers compared with the polymer matrix [30][31][32][33][34][35]. The improved thermal stability of the epoxy composites therefore suppresses the transfer of surface material onto the counterpart by dissipating the frictional heat within the matrix [30][31][32][33][34][35]. Under this condition, the hardness and Young's modulus of the composites (Fig. 3) govern their sensitivity to abrasive wear caused by the sliding of the inclined steel pin, so that the higher abrasive wear associated with the higher normal load results in the higher friction coefficient through the larger contact area between the steel pin and the composite [36][37][38][39].
In Fig. 4(a), the incorporation of 2.5 wt% SCFs clearly lowers the mean friction coefficients under normal loads of 2 and 6 N to about 0.31 and 0.56, respectively, compared with those of the neat epoxy. Increasing the SCF content further, from 5 to 20 wt%, decreases the mean friction coefficients of the composites from about 0.26 to 0.19 under the 2 N load and from about 0.3 to 0.25 under the 6 N load. Carbon fillers are known to serve as a solid lubricant for rubbing surfaces [21][22][23][24][25]; the increased SCF content therefore leads to the significantly decreased friction coefficient of the epoxy composites (Fig. 4(a)) by promoting the solid-lubricating effect of the SCFs.
Normally, a larger contact area between a polymer and its counterpart gives rise to a higher friction coefficient during sliding [36][37][38][39]. The presence of SCFs on the surface can therefore lower the friction coefficient of the composite by reducing direct contact between the inclined pin and the composite. During sliding, surface wear of the composite releases SCFs into the interface between the pin and the composite, and the released SCFs can freely roll or slide under a lateral force, further reducing the friction coefficient [40]. It is therefore supposed that the increased SCF content decreases the friction coefficient of the epoxy composites through the reduced pin-composite contact and the free-rolling effect of the SCFs, in addition to their solid-lubricating effect.
Generally, the mechanical strength of a polymer has a significant influence on its friction coefficient and wear [41,42]. Poor mechanical strength can raise the friction coefficient by promoting contact between the polymer and its counterpart, and by promoting wear through micro-plastic deformation and micro-cutting caused by the counterpart's surface asperities [41,42]. The increased hardness and elastic modulus of the epoxy composites with increasing SCF content (Fig. 3) can therefore be correlated with their decreased friction coefficient: the harder composites resist deformation, make less contact with the inclined pin, and resist wear.
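Such a hardness-friction correlation could be quantified with a Pearson coefficient across the composition series, as sketched below. The hardness values here are placeholders (the full per-composition series is not tabulated in this text), and the friction values only partly follow the figures quoted above, so both arrays are hypothetical and purely illustrative.

```python
# Purely hypothetical values chosen to illustrate the analysis; not the
# paper's measured per-sample hardness/friction pairs.
hardness = [0.20, 0.26, 0.30, 0.32, 0.33]   # GPa, assumed increasing trend
friction = [0.31, 0.26, 0.23, 0.21, 0.19]   # mean COF at 2 N, partly from text

def pearson(x, y):
    """Pearson correlation coefficient, plain Python (no SciPy needed)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"Pearson r = {pearson(hardness, friction):.2f}")  # strongly negative
```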
The effect of surface roughness on the friction coefficient of the epoxy composites should also be considered, since a rougher surface can give a higher friction coefficient via mechanical interlocking between mating asperities [36][37][38][39]. However, the lack of correlation between the increased surface roughness (Fig. 1) and the decreased friction coefficient (Fig. 4(a)) of the composites indicates that mechanical interlocking is not significant in this study. On the contrary, the increased surface roughness contributes to the decreased friction coefficient of the composites by reducing the real contact area between the inclined pin and the composite [36][37][38][39].
In Fig. 4(a), the epoxy composite with 2.5 wt% SCFs exhibits a significant increase in friction coefficient when the normal load is raised from 2 to 6 N. At this low SCF content, the continuous impact of the inclined pin against protruded SCFs can produce a high friction coefficient during sliding by enlarging the tangential (frictional) force [43]. An increase in normal load therefore significantly raises the friction coefficient of this composite via the pronounced impact of the pin against the protruded SCFs. Such increases are not found for the composites with SCF contents above 2.5 wt% (Fig. 4(a)), because their densely and uniformly distributed SCFs lessen the impact of the inclined pin against individual fibers.

Figure 4(b) shows the friction coefficients of the epoxy composites with different SCF contents, tested under a normal load of 2 N, as a function of the number of laps. The friction coefficient of the epoxy reaches about 0.58 after 3,000 laps and rises slightly with further laps owing to the progressive surface wear of the epoxy, reaching about 0.67 after 40,000 laps. The fluctuation in the friction coefficient of the epoxy becomes more pronounced with prolonged sliding, indicative of an intensifying stick-slip phenomenon as wear proceeds [44,45]. The friction coefficient of the epoxy composites first increases with laps during the running-in period, then decreases and stabilizes for the remainder of the test (Fig. 4(b)). The friction-versus-laps curves of the composites lie well below that of the epoxy, and increasing the SCF content depresses them further, reflecting the lower friction coefficient of the composites throughout the entire sliding test.

Figure 5 shows the wear widths and depths of the epoxy and the epoxy composites with different SCF contents measured after sliding against the inclined pin for about 40,000 laps under normal loads of 2 and 6 N. The wear widths and depths of the composites are much smaller than those of the epoxy and decrease significantly with increasing SCF content, indicating that the SCFs dramatically reduce wear through their solid-lubricating effect together with the improved hardness and elastic modulus of the composites [21-25, 40, 46-50]. Although the epoxy shows a decrease in friction coefficient with increased normal load (Fig. 4(a)), its wear width and depth increase (Fig. 5), confirming that the increased wear of the epoxy at higher load is responsible for its decreased friction coefficient through surface roughening and wear-debris production [28,29]. The composites, in contrast, show increases in both friction coefficient and wear with increased normal load (Figs. 4(a) and 5), implying that their frictional behavior is closely tied to their wear behavior through the increased contact between the inclined pin and the composite [36][37][38][39]. For the composite with 2.5 wt% SCFs, the increases in wear width and depth with increased normal load (Fig. 5) are not as significant as the increase in its friction coefficient (Fig. 4(a)), because the higher normal load intensifies the impact of the inclined pin against protruded SCFs without producing a large difference in the wear rates of the composite under the two loads.
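Wear-track widths and depths can be turned into a wear volume and a specific wear rate (Archard-type k = V/(F·s)). The sketch below assumes a triangular (V-groove) track cross-section and uses hypothetical track dimensions, since the measured values in Fig. 5 are not reproduced in this text; the load, lap count, and track diameter follow the test conditions.

```python
import math

# Hypothetical wear-track dimensions for illustration only.
width_m, depth_m = 300e-6, 20e-6    # assumed wear-track width and depth
track_diameter_m = 0.003            # 3 mm circular path (from the methods)
load_n = 2.0                        # normal load (from the methods)
sliding_m = 40_000 * math.pi * track_diameter_m  # total sliding distance

cross_section = 0.5 * width_m * depth_m            # triangular V-groove area
volume_m3 = cross_section * math.pi * track_diameter_m  # volume around the track
k = volume_m3 / (load_n * sliding_m)               # specific wear rate, m^3/(N*m)

print(f"V ~ {volume_m3 * 1e9:.3f} mm^3, k ~ {k:.2e} m^3/(N*m)")
```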
The surface morphologies of the epoxy and the epoxy composites with different SCF contents before and after the tribological tests were observed by SEM. Figures 6(a) and 6(b) show the surface morphologies of the mechanically polished epoxy and of the composite with 20 wt% SCFs, respectively. In Fig. 6(a), the polished epoxy exhibits pronounced abrasive lines on its surface. Figure 6(b) shows that mechanical polishing flattens the SCFs on the surface of the 20 wt% composite through their wear, while SCF debonding is also observed on the same surface. At lower magnification, the 20 wt% composite possesses a rougher surface morphology than the epoxy (insets in Fig. 6), since its surface is covered with SCF-debonded sites (inset in Fig. 6(b)); this is consistent with the SCF-debonded sites seen in the surface topography of the same composite and with its larger Rq value relative to the epoxy (Figs. 1 and 2(d)).

Figures 7(a) and 7(b) and the inset in Fig. 7(a) show the surface morphology and topography, respectively, of the worn epoxy slid against the inclined pin for about 40,000 laps at a sliding speed of 3 cm/s under a normal load of 2 N; a V-shaped wear track is evident. Comparison of Figs. 7(a) and 7(c) clearly shows that sliding under the higher normal load of 6 N generates a larger wear track on the epoxy surface, owing to its higher wear. Micro-wave features are apparent on the wear tracks of the epoxy tested under both normal loads (Figs. 7(b) and 7(d)), indicative of surface fatigue caused by the repeated sliding of the inclined pin. The cyclic stress concentration in front of the pin under cyclic loading induces surface fatigue, which initiates minute cracks perpendicular to the sliding direction and propagates them into the subsurface of the epoxy [51,52]. The resulting network of micro-cracks creates the micro-wave features seen on the wear tracks (Figs. 7(b) and 7(d)). The repeated sliding of the pin also induces surface fatigue on the walls of the V-shaped wear track, where micro-wave features likewise appear (Figs. 7(a) and 7(c)).

Figure 8 shows the surface morphologies of the worn epoxy composite with 20 wt% SCFs tested under the different normal loads. The V-shaped wear tracks of this composite under 2 and 6 N (Fig. 8) are much smaller than those of the epoxy (Fig. 7), indicating that the incorporation of 20 wt% SCFs dramatically reduces the scratch-induced wear of the composite. As shown in Figs. 8(b) and 8(d), worn SCFs are clearly present on the wear tracks, implying that the SCFs on the surface effectively suppress scratch-induced wear by acting as a solid lubricant on the rubbing surfaces, by reducing direct contact between the inclined pin and the composite, and by resisting easy removal of epoxy material thanks to their much higher wear resistance. In addition, the micro-wave features observed on the wear tracks of the epoxy (Figs. 7(b) and 7(d)) are absent from the wear tracks of the 20 wt% composite (Figs. 8(b) and 8(d)), indicating that the SCFs effectively mitigate surface fatigue by preventing direct pin-composite contact under cyclic loading.
However, micro-cracks can still be found along the interfaces between the SCFs and the epoxy matrix, because the cyclic stress concentration in front of the inclined pin produces interfacial cracks through debonding between the SCFs and the matrix. SCF pulled-out sites on the wear tracks of the 20 wt% composite (Figs. 8(b) and 8(d)) suggest that the removal of SCFs from the matrix during sliding contributes to the composite's wear. Abrasive lines on the wear tracks (Figs. 8(b) and 8(d)) imply that abrasive wear still operates even under the solid-lubricating effect of the SCFs. Nevertheless, the wear morphologies of the composite under the two normal loads clearly show that incorporating SCFs is an effective way to reduce the abrasive and fatigue wear of the epoxy composites during prolonged sliding contact with the steel pin.

Figure 9(a) shows the friction coefficients of the epoxy composites with different SCF contents slid against the Cr6 steel ball of 6 mm in diameter for about 40,000 laps at a sliding speed of 3 cm/s under normal loads of 2 and 6 N. The mean friction coefficients of the epoxy slid against the ball under 2 and 6 N (Fig. 9(a)) are about 0.71 and 0.68, respectively, slightly higher than those measured against the inclined pin under the same loads (Fig. 4(a)). This indicates that the counterpart geometry significantly influences the friction coefficient of the epoxy: the larger contact area between the ball and the epoxy, compared with that between the inclined pin and the epoxy, gives rise to the higher friction coefficient during sliding. In Fig. 9(a), increasing the SCF content from 2.5 to 20 wt% significantly decreases the mean friction coefficients against the ball, from about 0.24 to 0.18 under 2 N and from about 0.53 to 0.24 under 6 N, in agreement with the results in Fig. 4(a). While the epoxy exhibits higher friction coefficients against the ball than against the inclined pin under both loads, the composites show slightly lower friction coefficients against the ball at all SCF contents (Figs. 4(a) and 9(a)): the higher surface roughness of the composites, caused by protruded SCFs and SCF-debonded sites (Fig. 1), produces more interaction with the inclined pin than with the ball. These results confirm that the incorporation of SCFs effectively decreases the friction coefficients of the epoxy composites against both the inclined pin and the steel ball, with a stronger effect in sliding against the ball.

Figure 9(b) presents the friction coefficients of the epoxy composites with different SCF contents, slid against the steel ball under a normal load of 2 N, as a function of the number of laps. The friction coefficient of the epoxy against the ball reaches 0.59 after 3,000 laps and remains stable thereafter, reflecting stable wear throughout the sliding.
In contrast, the same epoxy slid against the inclined pin shows a roughly linear increase in friction coefficient with laps, owing to the progressive wear of the epoxy during prolonged sliding (Fig. 4(b)); this again reflects the geometric effect of the counterpart. In Fig. 9(b), the friction coefficient of the composites against the ball clearly decreases throughout the test with increasing SCF content. Normally, contact between a sharp tip and a sample during sliding induces a cutting state between them, resulting in substantial removal of material from the sample surface [53,54]. It is supposed that the V-shaped geometry of the inclined pin generates higher wear of the composite by inducing such a cutting state, in contrast to the spherical geometry of the steel ball, so that the composite slid against the inclined pin needs longer to reach a stable wear condition [29,30]. The composites slid against the ball (Fig. 9(b)) therefore mostly exhibit a shorter running-in period and a longer stable-wear period than those tested against the inclined pin (Fig. 4(b)).

Figure 10 illustrates the wear widths and depths of the epoxy composites with different SCF contents slid against the steel ball under the different normal loads (tested under the same conditions as described in Fig. 9). Consistently, the wear of the composites against the ball decreases with increasing SCF content, owing to the solid-lubricating effect of the SCFs together with the improved hardness and elastic modulus of the composites, while the wear is higher under the higher normal load [21-25, 40, 46-50]. The wear widths against the ball (Fig. 10) are clearly larger than those against the inclined pin (Fig. 5) because of the larger interacting area between the ball and the composite during sliding. The wear depths against the ball, however, are significantly smaller, because the inclined pin, unlike the ball, removes material from deeper regions via the cutting state between the pin and the composite [53,54].
The surface morphologies and topographies of the epoxy slid against the steel ball under normal loads of 2 and 6 N are presented in Fig. 11, from which it is clear that the sliding of the ball generates wear on the epoxy surface, with a larger wear track under the higher normal load. The micro-wave features on the wear tracks of the epoxy under both normal loads (Figs. 11(b) and 11(d)) indicate that the repeated sliding of the ball causes surface fatigue of the epoxy through the initiation and propagation of micro-cracks into the subsurface [51,52].
In Fig. 12, the wear tracks of the epoxy composite with 20 wt% SCFs slid against the steel ball under normal loads of 2 and 6 N are far less pronounced than those of the epoxy tested under the same loads (Fig. 11), indicating the greatly decreased wear of the composite with the incorporation of 20 wt% SCFs. The micro-wave features caused by surface fatigue are absent from the composite's wear tracks under both normal loads (Figs. 12(b) and 12(d)), again a consequence of the incorporated SCFs. However, micro-cracks formed at the SCF/matrix interfaces can still be found on the wear tracks, in agreement with the observations in Figs. 8(b) and 8(d). The SEM observations clearly confirm that the incorporation of SCFs greatly reduces the wear of the epoxy composites slid against the steel ball.
Conclusions
In this study, the tribological properties of epoxy composites with different SCF contents were systematically investigated. Increasing the SCF content significantly increased the hardness and Young's modulus of the composites as a result of the incorporation of rigid SCFs. The friction coefficient and wear of the composites slid against the steel pin inclined at 45° to the vertical axis and against the steel ball decreased dramatically with increasing SCF content, owing to the solid-lubricating effect of the SCFs together with the improved hardness and elastic modulus of the composites. SEM observation showed that the composites were less sensitive to surface fatigue than the epoxy, because the SCFs on the surface reduced direct contact between the counterpart and the composite under cyclic loading. It can be concluded that the incorporation of SCFs is an effective way to improve the mechanical and tribological properties of epoxy composites.
World Health Organization survey on the level of integration of traditional Chinese medicine in Chinese health system rehabilitation services
Background To meet the growing global demand for rehabilitation services, the World Health Organization (WHO) launched Rehabilitation 2030. This study was commissioned by the WHO to investigate the degree of integration of traditional Chinese medicine (TCM) in Chinese health system rehabilitation services and the demand for TCM rehabilitation in China. Methods Twenty TCM rehabilitation experts and relevant government administrators were invited to complete the questionnaire between September 2019 and January 2022. The development of traditional, complementary, and integrative medicine (TCI) rehabilitation in China was assessed primarily along six health system components. Results Relevant government departments have issued 26 policies, regulations, and national strategic plans related to TCI rehabilitation since 2002; notably, 14 policies related to TCI rehabilitation development were introduced between 2016 and 2021. These policies cover the three main areas of financing, infrastructure development, and service delivery. The National Administration of Traditional Chinese Medicine's investment in TCM clinical capacity infrastructure and scientific research in 2019–2021 increased by 66% compared with 2010–2012, and the average number of TCM hospitals with rehabilitation departments in 2020 increased by 6.5% compared with 2018. The proportion of community health service centers providing TCM services among primary medical and health institutions has increased by 30.8% over the past 10 years. Conclusion Long-term policy continuity, substantial financial investment, and expansion of the scope of TCI rehabilitation services in primary care institutions have effectively contributed to the rapid development of TCI rehabilitation. However, human resources and financing mechanisms for TCI rehabilitation need further improvement.
Introduction
With an aging population and a rising incidence of chronic noncommunicable diseases, the number of people experiencing disability or declines in functioning is rapidly increasing worldwide, and there is an urgent need for suitable rehabilitation services. 1,2 In many parts of the world, however, the capacity to provide rehabilitation is limited or non-existent and fails to adequately address population needs, particularly in low- and middle-income countries (LMICs). 3,4 Accessible and affordable rehabilitation plays a fundamental role in achieving Universal Health Coverage (UHC) and Sustainable Development Goal 3 (SDG3), which is to "ensure healthy lives and promote well-being for all at all ages." At the same time, there is evidence that traditional and complementary medicine (T&CM) can contribute significantly to the goal of UHC through its involvement in the provision of essential health services. 5 The WHO Global Report indicates that T&CM is an important and often underestimated health resource with many applications, especially for the prevention and management of lifestyle-related chronic diseases and for meeting the rehabilitation needs of aging populations. 6

As an important part of traditional, complementary, and integrative medicine (TCI), traditional Chinese medicine (TCM) has a long history. It has proven efficacy in the clinical treatment of stroke, cancer, musculoskeletal diseases, chronic respiratory diseases, diabetes, and other chronic conditions, improving the functional impairment and quality of life of a large number of patients. [7][8][9][10][11] TCM rehabilitation is guided by TCM theory and uses TCM-specific rehabilitation methods, such as Chinese medicine, acupuncture, Tuina massage, and traditional exercise, to reduce or eliminate the functional impairment caused by disease and disability and to reintegrate patients into society. 12 Vigorously developing the application of TCM in rehabilitation has therefore attracted the attention of an increasing number of countries.
China has an enormous and rapidly aging population, with a large number of people with disability and an urgent need for rehabilitation. However, rehabilitation medicine in China started late and is dominated by Western rehabilitation techniques, resulting in an imbalance between the supply of and demand for rehabilitation. 13,14 Rehabilitation for people with disabilities in China began in the 1950s. To meet the needs of social development, China has introduced modern rehabilitation medicine and combined it with traditional rehabilitation medicine since the early 1980s, and rehabilitation has grown rapidly. Over the past 20 years, the Chinese government has issued a series of policies encouraging the development of TCM rehabilitation. In the Development Plan for Health Services of Traditional Chinese Medicine (2015−2020), the Chinese government first proposed supporting the development of rehabilitation services with TCM characteristics. 15 In the Outline of the 14th Five-Year Plan for National Economic and Social Development of the People's Republic of China and the Vision 2035, China also established "paying equal attention to both Chinese and Western medicine" as one of the important tasks in implementing the Healthy China strategy, demonstrating the important role of TCM in meeting people's health service needs and the general trend of integrating TCM techniques with rehabilitation medicine. 16 This study aims to investigate the development of TCM rehabilitation services in China and to evaluate the level of integration of TCM in Chinese health system rehabilitation.
Methods
This study used a questionnaire survey combined with a qualitative document review to detail the current situation and the progress made by the Chinese government in promoting the integration of Chinese medicine and modern rehabilitation. Data were collected between September 2019 and January 2022.
Study design and procedures
We performed an online questionnaire and document review to investigate the integration of TCI rehabilitation services in China's health system, providing evidence on how such services are effectively delivered at the national level with a measurable impact.
First, we sent questionnaires by email to 20 TCI rehabilitation experts from Shanghai, Beijing, Heilongjiang Province, and other areas, as well as to TCI rehabilitation administrators from national and local health commissions. The WHO TCI Rehabilitation Services Questionnaire consisted of 81 statements, divided into eight categories: "leadership and governance" (11 statements), "financing for TCI rehabilitation" (9 statements), "human resources for TCI rehabilitation" (9 statements), "TCI rehabilitation service delivery" (26 statements), "assistive products (APs) for TCI rehabilitation" (8 statements), "TCI rehabilitation infrastructure" (3 statements), "TCI rehabilitation information" (10 statements), and "emergency preparedness" (5 statements). Among them, 41 priority questions were indicated in red. We requested further details from respondents whenever they answered "yes" to any portion of the questionnaire.
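As a quick consistency check on the instrument's structure, the category counts above do sum to the 81 statements reported; the short sketch below (with abbreviated category labels) verifies this.

```python
# Statement counts per category as reported for the WHO TCI
# rehabilitation questionnaire.
categories = {
    "leadership and governance": 11,
    "financing": 9,
    "human resources": 9,
    "service delivery": 26,
    "assistive products": 8,
    "infrastructure": 3,
    "information": 10,
    "emergency preparedness": 5,
}
total = sum(categories.values())
assert total == 81, total   # matches the 81 statements in the instrument
print(total, "statements, of which 41 were flagged as priority questions")
```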
After the questionnaire survey, we conducted a literature and web-resource search as a supplement, retrieving reports and publications on TCI rehabilitation from major websites such as those of the Chinese government, the National Administration of Traditional Chinese Medicine (NATCM), and the National Health Commission (NHC), using keywords such as "traditional Chinese medicine," "rehabilitation," "development," "health," "personnel," "medical insurance," and "assistive devices." We also reviewed Chinese TCI policies and regulations to understand the current development of TCI in the rehabilitation field.
Analysis
We included in our analysis the data collected through the questionnaires as well as the reports and publications on TCI rehabilitation obtained from the document review. In each section of the questionnaire, priority questions were shaded red; collectively, these questions constituted a rapid assessment that can be conducted when the information needed to complete all questions in the tool is unavailable.

The survey data were categorized and summarized. We organized the results into six main areas (rehabilitation governance, financing, human resources, service delivery, assistive products, and infrastructure and medications) and two secondary aspects (assistive technology and emergency preparedness). For the information and research section, where not all of the required information was obtained, we conducted a rapid assessment based on the priority questions. For rehabilitation governance, we not only reviewed the Chinese government's primary plans for TCI rehabilitation but also identified the primary governance structures and coordination mechanisms. TCI rehabilitation service delivery is described in terms of both the availability and the quality of services. Regarding financing, we focused on the NATCM budget for scientific research and the clinical capacity infrastructure of TCI rehabilitation, as well as the major health financing mechanisms for TCI rehabilitation. We identified the professionals providing TCI rehabilitation treatment at all levels of health facilities and summarized workforce initiatives to assess the current status of human resource development for TCI rehabilitation in China. Investment in the construction and management of rehabilitation departments in TCM hospitals and the availability of standardized guidance constituted the two primary aspects of evaluating infrastructure. Assistive technology and emergency preparedness were appraised by examining, respectively, the current use of assistive devices in TCI rehabilitation treatment and the application of TCI rehabilitation services in major emergencies that have occurred in China to date. Subsequently, the research team discussed all of the information and checked its consistency and quality; missing information was collected later where necessary (Fig. 1).
Leadership and governance
According to our survey, continuous policies and a coordinated management structure have provided the foundation and guarantee for implementing TCI rehabilitation development programs in provinces and municipalities over the past 20 years (Fig. 2). These policies cover the three main areas of financing, infrastructure development, and service delivery, and have greatly strengthened the capacity and level of TCM rehabilitation services. In 2002-2014, the State Council focused on strengthening the standardization of TCM and integrative medicine services. From 2015 to 2022, government agencies such as the State Council, the NATCM, and the NHC issued 17 policies setting out specific implementation plans to enhance infrastructure development, professional training, and service delivery in TCM rehabilitation. Every five years, the NATCM publishes a development report on TCM rehabilitation as a reference for promoting its development in China. Furthermore, every city and province has a TCM administration that provides funds to support the development of TCM rehabilitation.
Financing for TCI rehabilitation
According to the relevant government administrators, the NATCM does not keep specific statistics on the annual spending budget for TCI rehabilitation and its assistive products, but the spending budget for several relevant TCI items has shown an increasing trend over the past 10 years. For example, the public reports on NATCM departmental budgets from 2010 to 2021 show that investment in TCM scientific research and clinical capacity infrastructure reached 5616.99 million CNY in 2019-2021, an increase of 66% compared with 2010-2012 (Fig. 3).
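The report gives only the 2019-2021 figure and the relative growth; under the stated 66% increase, the implied 2010-2012 baseline follows directly, as the short check below shows.

```python
# Implied 2010-2012 baseline under the stated 66% increase; only the
# 2019-2021 total and the relative growth are reported.
spend_2019_2021 = 5616.99            # million CNY
growth = 0.66
baseline = spend_2019_2021 / (1 + growth)
print(f"implied 2010-2012 spend ~ {baseline:.0f} million CNY")  # ~3384
```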
The major health financing mechanism for TCI rehabilitation is the Urban and Rural Resident Basic Medical Insurance. In the utilization of TCM services, basic medical insurance has always adhered to the principle of paying equal attention to both Chinese and Western medicine. In the management of diagnosis and treatment items, apart from non-disease treatment and auxiliary treatment, therapeutic TCM diagnosis and treatment items that meet the regulations can be paid for by the medical insurance fund. To date, most regions have included massage, acupuncture, bone-setting, and other TCM rehabilitation services in the medical insurance payment scope for employees and for urban and rural residents. 17 In addition, 95% of TCM hospitals and 92% of combined Chinese and Western medicine hospitals fall within the reimbursement scope of medical insurance. 18
Human resources for TCI rehabilitation
The questionnaire results showed that TCM rehabilitation professionals working in primary, secondary, and tertiary hospitals can provide TCM rehabilitation treatments such as Tuina, moxibustion, scraping therapy, and cupping therapy. Existing TCM rehabilitation practitioners have varied educational backgrounds, mainly in acupuncture and massage, rehabilitation therapy technology, and TCM, with a few from TCM rehabilitation technology, human kinesiology, and other majors, as well as personnel with professional training. However, the survey found a shortage of TCM rehabilitation technical personnel and poor job stability, the main factors being low income, low social status, and a lack of preferential policies for recruiting such personnel.
TCI rehabilitation service delivery
Upon investigation, we found that TCI rehabilitation services widely cover secondary and tertiary hospitals, including general hospitals, rehabilitation hospitals, and Chinese medicine hospitals. By 2025, the coverage of TCI rehabilitation services in tertiary and secondary Chinese medicine hospitals is expected to reach 85% and 70%, respectively. The provision of TCM rehabilitation services is based on two main models. One combines traditional rehabilitation techniques with modern rehabilitation medicine to provide patients with comprehensive rehabilitation treatment. The other uses unique TCM rehabilitation methods to improve patients' dysfunction under the guidance of TCM rehabilitation concepts. Importantly, the implementation of both models is subject to strict quality control: every city has a quality control center for rehabilitation treatment, with expert groups regularly and randomly visiting TCM rehabilitation departments for quality inspection, and national government agencies have issued corresponding guidance documents, such as the Basic Standards and Management Norms for Rehabilitation Medical Centers and Care Centers (for Trial Implementation), to support effective evidence-based TCI rehabilitation services. 19

Based on the questionnaire results, we found that almost every district in China has primary medical and health institutions, among which the proportion of community health service centers providing TCM services has increased by 30.8% over the most recent 10 years (Fig. 4); these community health service centers also provide TCM rehabilitation services. In addition, TCI rehabilitation services such as the external application of Chinese herbs, acupuncture, Tuina, cupping, and moxibustion have been included in the defined package of services for primary health care in China. However, an effective referral process to support the timely delivery of a continuum of TCI rehabilitation care is still lacking.
TCI rehabilitation infrastructure
In recent years, the construction of TCI rehabilitation infrastructure has entered a period of rapid development, laying the foundation for the public to benefit from standardized, convenient, and effective rehabilitation services with TCM characteristics. From 2018 to 2020, the average number of TCM hospitals with rehabilitation departments increased from 1834 to 2390, a relative increase of 6.5% (Fig. 5). Meanwhile, TCM rehabilitation providers in both community and hospital settings have access to the equipment and consumables they require to deliver quality TCM rehabilitation in China, and guidelines such as the Guidelines for the Construction and Management of Rehabilitation Departments in TCM Hospitals (Trial) and the Guidelines for the Construction and Management of Rehabilitation Departments in General Hospitals provide standards for setting up traditional rehabilitation treatment rooms in rehabilitation hospitals and for building rehabilitation departments in TCM hospitals.
TCI rehabilitation information
Through the questionnaire, we learned that diverse research projects and information collection methods will provide baseline data for the future large-scale development of TCM rehabilitation services and the selection of appropriate TCM rehabilitation techniques in China. Many provinces and cities are conducting TCM rehabilitation research, including clinical research, basic research, and research on the cost-effectiveness of TCM rehabilitation. In addition, every hospital and health center provides feedback data on TCI rehabilitation availability and utilization to local TCM administrators at the end of each year. Moreover, data on population functioning are primarily collected in two ways: (1) disabled people can apply for a disability evaluation at local agencies of the disabled persons' federation and obtain subsidies; and (2) resident committees collect data on disabled persons and report them to the higher authorities.
Assistive products for TCI rehabilitation
The abundance of TCI rehabilitation APs further enhances the efficiency of TCI rehabilitation treatment, and the development of related APs is supported by policy. Traditional rehabilitation APs, such as cupping sets, electroacupuncture devices, Chinese herbal fumigation devices, infrared lamps, and acupuncture treatment beds, form the essential list of APs for TCM rehabilitation treatment in China. To enhance the capacity for independent innovation in the TCI rehabilitation AP industry, several diagnosis and treatment instruments that complement the clinical application of TCM have been developed in China, such as the TCM pulse diagnosis bracelet, the TCM facial diagnosis detection and analysis system, and the far-infrared massage physiotherapy bed, to meet the population's health needs. Moreover, encouraging the production of rehabilitation APs with definite curative effects and TCM characteristics has been included in the Several Opinions of the State Council on Accelerating the Development of the Rehabilitation Assistive Product Industry. 20
Emergency preparedness
China attached great importance to capacity infrastructure in TCM rehabilitation for responding to public health emergencies and preventing major diseases. TCM rehabilitation therapy, such as herbal medicine, acupuncture, and Qigong, plays an important role in the treatment of COVID-19, [21][22][23][24] and the TCM program has been included in the guidelines for the diagnosis and treatment of COVID-19. 25
Summary and analysis of the main results
This study aimed to estimate the need for rehabilitation in China and to analyze existing TCM rehabilitation services. To better understand the quality of TCM rehabilitation services in China, we investigated key health policies and services in the field by means of a document review and a questionnaire survey. As the results show, TCM is highly integrated, in terms of infrastructure and service delivery, in Chinese health system rehabilitation services. However, further improvements are needed in the provision of commercial insurance, the development of highly qualified personnel, and the standardization of referral processes for TCM rehabilitation.
The development of TCM rehabilitation in China has made remarkable achievements over the past two decades, consistent with the significant progress in T&CM health services at the global level. 6 There are two potential primary reasons for the strong support that the Chinese government has provided for the development of TCM rehabilitation, as observed in this investigation.
The first reason is associated with the development of China's social economy and the improvement in health services: the average life expectancy of residents is increasing. By the end of 2016, the number of people aged 60 and above in China had reached 230 million, accounting for 16.7% of the total population, 26 and the population with chronic diseases in China is also expanding. 27 Rehabilitation medicine in China began late and cannot meet the people's growing demand for rehabilitation. 28 In rehabilitation medicine, characteristic TCM therapies can produce better curative effects in the rehabilitation of disease. 29,30 Under these circumstances, it is imperative to develop a rehabilitation medicine system with Chinese characteristics. The second reason is that TCM rehabilitation, as a traditional rehabilitation approach unique to China, has a long history, precise efficacy, no obvious toxic side effects, low cost, and many other advantages, and plays a unique role in promoting people's health and reducing the burden on families and society. Therefore, over the past 10 years, Chinese government agencies have issued several policies to enhance the rehabilitation capacity of TCM specialties. For example, the Development Plan for Health Services of Traditional Chinese Medicine (2015-2020) 15 and the Opinions of the State Council on Promoting the Inheritance, Innovation, and Development of TCM, 31 and especially the Implementation Plan of the Rehabilitation Service Capability Improvement Project of TCM (2021-2025), 18 have clearly defined the future development direction and primary tasks of Chinese medicine rehabilitation.
Implications and significance for future TCI rehabilitation development
Further reflection on our results yields general insights that may help promote the further development of TCM rehabilitation in China and the integration of complementary and alternative medicine into rehabilitation in other countries around the world. Primarily, long-term policy continuity, together with a clear TCM rehabilitation management structure and coordination mechanism, is the basis for the development of TCM rehabilitation. According to our results, the Chinese government has issued nearly 30 policies over the past two decades to promote the development of integrated Chinese and Western medicine and TCM rehabilitation services. A previous study has shown that long-term, continuous policy making can provide direction and substantial support for the development of Chinese and Western medicine in China. 32 The NATCM is the primary management organization for TCM rehabilitation in China: it is responsible for TCI rehabilitation under the leadership of the State Council and in coordination with various other government agencies. After the State Council issues relevant policy instructions, the NATCM invites national TCM experts and related government departments to discuss and formulate specific implementation plans and schemes, which are then distributed to all provinces, cities, and autonomous regions to form specific local implementation rules for TCM rehabilitation.

In addition, a chain development strategy for TCI rehabilitation is crucial. To meet the growing social demand for TCI rehabilitation services, provinces, municipalities, and autonomous regions can use policy support from the central government to develop local programs, including strengthening investment in the construction of TCM rehabilitation medical institutions and improving the medical service guarantee system, so as to improve the quality, utilization efficiency, and income of TCI rehabilitation medical care. This, in turn, attracts more excellent TCI rehabilitation professionals and accumulates funds to promote the long-term, healthy development of TCI rehabilitation. According to the data, compared with 2018, the numbers of TCM licensed (assistant) physicians and of TCM hospitals with rehabilitation medicine departments increased by 0.7% and 6.5%, respectively, in 2020, improving the accessibility and quality of TCM rehabilitation services. 33,34

Most importantly, extensive investigation is required to understand the current application of complementary and alternative medicine in rehabilitation in China and the TCI rehabilitation needs of people with different functional disabilities, so as to lay a solid foundation for formulating clinical guidelines for TCM rehabilitation of related diseases in the future. The development of such guidelines can improve the professionalism and standardization of TCI rehabilitation medical services and further enhance their medical effects.
Limitations of the study
Our study has some limitations. First, we sent a questionnaire link to the NATCM and the heads of various institutions and organizations by email. However, due to patient privacy concerns and the confidentiality of some internal data, it was difficult to accurately collect certain types of information. Second, although there were many open sources of information regarding TCM and rehabilitation during the investigation process, some data on TCM rehabilitation were still insufficient.
Conclusion
In summary, TCI has been well integrated into Chinese health system rehabilitation services, relying mainly on coordinated leadership and governance, substantial financial investment, and extensive infrastructure. To maintain this progress, the supply of commercial insurance complementing the Urban and Rural Resident Basic Medical Insurance must be further improved, and TCI rehabilitation professionals must be cultivated. Furthermore, this survey may offer implications for other countries around the world seeking to incorporate complementary and alternative medicine into rehabilitation.
Acknowledgments
We are grateful to the World Health Organization for assistance with the design of the questionnaire. We thank the Chinese Association of Rehabilitation Medicine, the National Administration of Traditional Chinese Medicine, the National Health Commission, and TCM health commissions and administrations, as well as hospitals in Shanghai, Fujian, Heilongjiang, and other cities for the information provided during the interviews.
Author contributions
Lei Fang, Chunlei Shan, and Qi Zhang contributed to the study design. Chaoyang Guo and Zhenrui Li conducted the document review and data collection. Stéphane Alexandre Espinosa, Lin-yun Zheng, Yanwei Xiang, Zhen Sang, and Xiao-ting Xu accessed and verified the data. Lei Fang and Ran-ran Zhu analyzed the data and drafted the manuscript. All authors reviewed and approved the final manuscript. All authors had full access to all the data in the study and share final responsibility for the decision to submit for publication.
Funding
This study was supported by the Three-year Action Plan for the Development of TCM in the Shanghai-Highland Construction for International Standardization of TCM (No. ZY(2021-2023) −0212 ).
Ethical statement
Not applicable.
Data availability
The data will be available on request from the corresponding author.
Declaration of Competing Interest
The authors declare that they have no conflicts of interest.
Dried Saliva Spots: A Robust Method for Detecting Streptococcus pneumoniae Carriage by PCR
The earliest studies on Streptococcus pneumoniae (S. pneumoniae) carriage, in the late 19th century, used saliva as the primary specimen. However, interest in saliva declined after the sensitive mouse inoculation method was replaced by conventional culture, which made isolation of pneumococci from the highly polymicrobial oral cavity virtually impossible. Here, we tested the feasibility of using dried saliva spots (DSS) for studies on pneumococcal carriage. Saliva samples from children and pneumococcus-spiked saliva samples from healthy adults were applied to paper, dried, and stored, with and without desiccant, at temperatures ranging from −20 to 37 °C for up to 35 days. DNA extracted from DSS was tested with quantitative PCR (qPCR) specific for S. pneumoniae. When processed immediately after drying, the quantity of pneumococcal DNA detected in spiked DSS from adults matched the levels in freshly spiked raw saliva. Furthermore, pneumococcal DNA was stable in DSS stored with desiccant for up to one month over a broad range of temperatures. The results did not differ when saliva was spiked with different pneumococcal strains. The collection of saliva can be particularly useful in surveillance studies conducted in remote settings, as it does not require trained personnel, and DSS are resilient to various transportation conditions.
Introduction
Streptococcus pneumoniae (S. pneumoniae) is an inhabitant of the human respiratory tract and is frequently carried asymptomatically. On rare occasions, it breaches the host's immune barrier and becomes a pathogen causing a range of diseases, including otitis media, pneumonia, bacteremia, and meningitis [1,2]. In general, pneumococcal colonization is considered a necessary precursor to disease [3], with a direct link between strains circulating in carriage and in disease [4,5]. This link is commonly utilized by epidemiological surveillance studies targeting asymptomatic colonization in order to assess the effects of therapeutic and preventive strategies [6][7][8].
For the past several decades, the nasopharynx has been considered the optimal sampling niche for detection of S. pneumoniae colonization [6][7][8][9]. Yet a look into historical research reveals that saliva was the preferred diagnostic specimen [10]. Both Pasteur and Sternberg independently discovered S. pneumoniae in 1881 by infecting animals with human saliva, and, for more than half a century, testing saliva in mice was considered the optimal method for carriage detection [10,11]. Cross-sectional studies using this method in the pre-antibiotic era identified 45% to 60% of adults [10,11] and 50% to 80% of schoolchildren as asymptomatic carriers [12]. With the rise of antibiotic use, and the progress in culture-based diagnostic methods, interest in the use of saliva for epidemiological surveillance declined. Furthermore, the sensitivity of conventional cultures was low when used for recovering live pneumococci from highly polymicrobial saliva samples [10], contributing further to the downfall of oral fluids as diagnostic specimens in pneumococcal studies. However, recent progress in culture-independent molecular methods has led to a dramatic increase in sensitivity and specificity of pathogen detection [9,[13][14][15][16], and prompted our interest in re-visiting saliva as a specimen for epidemiological studies on S. pneumoniae carriage.
Saliva is an easily accessible, easy-to-collect body fluid secreted into the oral cavity at the crossover of the digestive and respiratory tracts. The general understanding is that preserving saliva samples requires storage and transport at low temperatures, preferably snap frozen and transported on dry ice [17,18]. The requirement for a cold chain would therefore complicate the use of saliva as a diagnostic specimen for pneumococcal detection. However, historical studies provide alternative solutions for the storage and preservation of saliva samples; Nissen was the first to report on drying pneumococci for later use in 1891 [19]. By the 1930s dehydration was considered "an excellent physical method for the preservation of cultures of Pneumococcus, particularly . . . (for) long period(s) of time (that) require that the characters of the strain be held uniform and constant" [11].
The dried spot method is widely used to collect and preserve blood and other body fluids for a range of diagnostic purposes [20][21][22][23][24]. We therefore evaluated dried saliva spots stored at various temperatures as a method for detecting pneumococcal presence in saliva of asymptomatic individuals. The presence of pneumococci was detected using quantitative PCR (qPCR).
Optimization of Dried Saliva Spots
To examine the efficiency of recovery of S. pneumoniae-specific DNA from saliva samples collected as dried saliva spots (DSS), we compared raw saliva with DSS. Individual samples from five donors were used to examine the effect of drying saliva on filter paper on the recovery of pneumococcal DNA. A comparison of DSS and fresh samples spiked with live cells of the serotype 19F S. pneumoniae strain ATCC6319 revealed no significant difference in the quantity of the S. pneumoniae-specific gene lytA detected by qPCR (p = 0.450), as was the case for un-spiked samples (p = 0.753) (Figure 1). The detection of lytA in un-spiked donor saliva was not altogether surprising, as our previous studies show that pneumococcal colonization is present in the general adult population of the Netherlands [13,16]. Additionally, testing un-spiked saliva allowed us to determine the baseline lytA signal before the sample was spiked.
We studied the effect of humidity on sample quality, since it had been reported that storing dried blood spots with desiccant improves DNA stability over time [25,26]. Indeed, we found a decrease in the S. pneumoniae-specific signal of 2 to 3 cycle thresholds (C T ) at day 7 in samples stored without desiccant (an increase in C T corresponds to a decline in signal strength). Given the clear benefit of lowering humidity levels for DSS storage, we continued all further experiments with desiccant. We also tested the survival of S. pneumoniae in DSS and found that no live pneumococci were recovered from freshly dried DSS, with few oral commensals surviving (~10 colony-forming units per spot (CFU/spot)). This killing effect was equally present when spots were inoculated with cells suspended in phosphate buffered saline (PBS), but not when cells were desiccated on plastic Petri dishes, indicating a strong bactericidal effect of the filter paper itself.
Considering that DSS is an unexplored method for pneumococcal detection, we assessed the lower limit of S. pneumoniae detection in DSS. We first tested the sensitivity of the molecular method by quantifying lytA presence in DNA extracted from a serial dilution of bacteria in PBS. In the absence of saliva components that could potentially interfere with pathogen detection, the qPCR method itself showed reproducible quantification in samples containing the equivalent of approximately 10 CFU per qPCR reaction with the lower limit of detection (LOD) of 6 CFU/reaction. Then, we tested the DSS method, spiking the saliva with serially diluted pneumococci. The results for DSS were also highly reproducible and precise for samples with ≥10 2 CFU per qPCR reaction (LOD = 25 CFU/reaction), equivalent to approximately 10 4 CFU per spot.
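As an illustration of how this kind of quantification works, the sketch below fits a qPCR standard curve (C T versus log10 CFU) from a ten-fold dilution series and back-calculates the bacterial load of an unknown sample. All C T values, the dilution range, and the helper function are hypothetical and are not the study's actual calibration data.

```python
# Hypothetical sketch: fitting a qPCR standard curve (CT vs. log10 CFU)
# and back-calculating bacterial load from a measured CT value.
import numpy as np

# Serial ten-fold dilution standards: CFU per reaction and measured CT (assumed values)
cfu_std = np.array([1e5, 1e4, 1e3, 1e2, 1e1])
ct_std = np.array([21.1, 24.5, 27.9, 31.2, 34.6])

# Linear fit: CT = slope * log10(CFU) + intercept
slope, intercept = np.polyfit(np.log10(cfu_std), ct_std, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0  # amplification efficiency from the slope

def cfu_from_ct(ct):
    """Back-calculate CFU per reaction from a measured CT value."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency:.1%}")
print(f"CT 30.0 -> ~{cfu_from_ct(30.0):.0f} CFU/reaction")
```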
Figure 1. Streptococcus pneumoniae (S. pneumoniae) detection in raw saliva versus dried saliva spots (DSS). Saliva samples from 5 donors were spiked with a serotype 19F strain of S. pneumoniae. Both spiked and un-spiked raw saliva and DSS specimens were processed for DNA isolation and quantitative-PCR (qPCR)-based pathogen detection. Each dot represents an individual sample (DSS were performed in duplicate). The dashed line marks the lower limit of detection of qPCR. Differences between un-spiked raw and un-spiked DSS specimens as well as spiked raw and spiked DSS specimens are not significant.
Temperature Stability
DSS specimens generated from saliva of five individual donors spiked with pneumococcal cells were stored for up to five weeks at five different temperature conditions, ranging from −20 to 37 °C, and tested weekly for pneumococcal-specific DNA by qPCR. Whenever possible, duplicates of DSS were tested throughout the course of time. Although we observed variation in the signal strength over time, there was no evidence of a significant decline of pneumococcal-specific signals in DSS specimens stored for up to one month at temperatures ≤30 °C (Figure 2A).
Short-Term DNA Stability of DSS of Different Strains of Streptococcus pneumoniae (S. pneumoniae)
A previous study showed that S. pneumoniae tolerance to desiccation is not strain-specific, although some clinical strains were more robust than others [27]. We examined if the same was true for pneumococcal DNA in DSS. To study this, we tested an additional seven clinical isolates of serotypes 1, 2, 3, 4, 6B, 11A, 19A, and an acapsular strain constructed in the lab (Table 1). Aliquots of saliva samples collected from three donors were individually spiked with cells of pneumococcal strains, spotted onto filter paper, and stored. Overall, pneumococcal DNA was stable for up to 7 days in all strains tested (Figure 3). In concordance with the aforementioned results for serotype 19F, the qPCR C T values were robust and showed only minimal variation over time. However, we observed inter-strain variation in the quantity of S. pneumoniae-specific signal detected at time zero. When comparing all strains, results for serotypes 1 and 6B were significantly different compared to the acapsular strain, with mean differences of 2.3 C T and 1.7 C T for DSS compared to raw saliva, respectively (Two-way ANOVA with Bonferroni post-test, p < 0.05), despite matching numbers of cells used to spike samples, suggesting a variation in strains' sensitivity to lysis in human saliva. Each sample was also run in the corresponding serotype-specific qPCRs and a qPCR targeting another S. pneumoniae-specific sequence within the pia islet [13]. These signals were equally robust as the signal for the lytA gene (data not shown), indicating that DSS will be suitable for molecular determination of the serotype composition of the sample as well.
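A hedged sketch of the strain-comparison statistics described above: a two-way ANOVA (strain × storage day) on C T values with a Bonferroni-corrected post-test against the acapsular reference. The study used GraphPad Prism; this approximate Python equivalent with invented C T values is only meant to make the analysis steps concrete.

```python
# Illustrative two-way ANOVA on CT values with Bonferroni post-tests;
# all CT values below are synthetic, not the study's measurements.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

rng = np.random.default_rng(0)
strains = ["19F", "1", "6B", "acapsular"]
base_ct = {"19F": 28.0, "1": 30.3, "6B": 29.7, "acapsular": 28.0}  # assumed baselines

rows = []
for strain in strains:
    for day in (0, 3, 7):
        for donor in range(3):  # three donors, as in the study design
            rows.append({"strain": strain, "day": day,
                         "ct": base_ct[strain] + 0.05 * day + rng.normal(0, 0.4)})
data = pd.DataFrame(rows)

# Two-way ANOVA with strain and storage day as factors
model = ols("ct ~ C(strain) + C(day)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))

# Bonferroni post-test: each strain versus the acapsular reference
others = [s for s in strains if s != "acapsular"]
for s in others:
    t, p = stats.ttest_ind(data.loc[data.strain == s, "ct"],
                           data.loc[data.strain == "acapsular", "ct"])
    print(s, "adjusted p =", min(1.0, p * len(others)))
```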
Detection of Pneumococcal Presence in DSS Specimens from Clinical Saliva Samples
In order to test the DSS method in a clinical setting, we tested saliva samples collected from 12 school children [15]. DNA was extracted from raw saliva and matching DSS, which were stored at RT for 7 days. When we used our strict criteria to assign positivity to a sample (presence of signals <45 C T for both lytA and pia in qPCR) [15], samples from 8/12 children were positive both in raw saliva and in DSS samples at day zero (Figure 4) and from 7/12 children at day 7. In general, we observed a minimal decline in the signal detected in DSS compared to raw saliva samples (mean decline 0.88 C T , range 0.22-2.9 C T ). Additionally, we tested the DSS and raw clinical samples for the presence of pneumococcal serotypes using serotype-specific qPCRs (data not shown). Differences in the C T values in serotype-specific qPCR for raw saliva compared to DSS were not significant.
Discussion
Historical records of sensitive mouse inoculation methods detecting high carriage rates in saliva of asymptomatic individuals triggered our interest in saliva as a diagnostic specimen to study pneumococcal carriage. It was further strengthened by the outcome of our recent study on pneumococcal carriage in the elderly [16], where we found saliva to be superior to nasopharyngeal and oropharyngeal sampling for S. pneumoniae. Additionally, two studies used dried spots for molecular detection of pneumococci in cerebrospinal fluid [24,34], and two recent studies have shown that dried saliva spots can be used to detect chemicals such as lidocaine [35] or lactic acid in saliva [36]. We hypothesized that the dried saliva spot (DSS) method could also be utilized for molecular detection of S. pneumoniae in highly polymicrobial saliva samples, simplifying sampling methodologies for studies on pneumococcal carriage. To our knowledge, this is the first attempt to explore the possibility of using DSS as a diagnostic tool for S. pneumoniae carriage detection.
We found that, despite processing through filter paper and desiccation, pneumococcal DNA was stable in DSS stored for up to one month and over a broad range of temperatures. This is in line with results reported by Peltola et al., who detected pneumococcal DNA in cerebrospinal fluid after it had been applied to dried spots and stored at room temperature for up to 8 months [24]. The robustness of the DSS method provides the opportunity for unassisted sample collection. Saliva can be applied to DSS by individuals at home or in remote study centers, and sent via regular mail to the laboratory facility for processing. This would be of particular advantage for surveillance studies on pneumococcal carriage conducted in resource-poor countries or remote areas.
Interestingly, at day zero, we already saw limited but significant inter-individual variation in the quantity of S. pneumoniae-specific signal detected in saliva samples spiked with an equal number of pneumococcal cells. Given that the C T value obtained for raw spiked saliva and for fresh or stored DSS was nearly constant for any one individual tested, it suggests that the variation is not the effect of drying, handling, or storing the saliva, but the intrinsic variability in saliva composition between individual donors. This may be due to differences in composition of saliva itself (e.g., bactericidal molecules and enzymes produced by humans), the competition among microorganisms present in the oral cavity, and/or effects mediated by bacterial and fungal products and bacteriophages.
We also observed minor variation in quantity of S. pneumoniae-specific signal detected by qPCR in saliva from single donors spiked with different pneumococcal strains. This suggests the presence of inter-strain variation in sensitivity to bactericidal effects of saliva. We believe these limits can be easily addressed within the method. Since each qPCR reaction used only a fraction of the isolated DNA, increasing template volume, or concentrating the template, could further increase the sensitivity of the DSS method or any molecular-based method. Since saliva can be easily collected from a donor, sample volume should not be an important limiting factor.
A limitation of this study is the relatively small number of saliva samples tested in the clinical part of the study. Another limitation is the inability to culture live pneumococci from the filter paper, which failed due to the apparent bactericidal effect of the paper used. Therefore, the DSS should be considered for studies not requiring isolation of live S. pneumoniae as the primary endpoint. However, the sterilizing effect of the DSS could also be considered beneficial, as it would reduce the biohazard associated with collecting and transporting the samples.
Further studies are needed to determine the diagnostic value of DSS compared to the gold standards, i.e., conventional culture or molecular detection of pneumococci in nasopharyngeal swabs. Since this is the first description of this methodology, further validation on larger data sets, plus side-by-side comparisons with detection of pneumococcal carriage in other types of respiratory samples (nasal, nasopharyngeal or oropharyngeal swabs, nasal washes), will be necessary. Future research should aim to gain insight into whether DSS has the potential to be used as a molecular quantitative method. For this, testing DSS samples from patients with pneumococcal disease or at risk of pneumococcal infection would be particularly informative. For use of this method in young or disabled individuals, we suggest using diagnostic kits designed for saliva collection before applying saliva on the filter paper. Further studies are needed to validate this additional step to the protocol.
In conclusion, DSS is an easy, novel, and robust saliva sampling method that shows promise as a tool for pneumococcal surveillance in remote areas and resource-poor countries. Pneumococcal DNA is stable in DSS stored with desiccant in a wide range of temperatures for up to one week. Furthermore, long-term storage of DSS is possible at −70 °C and consequently has great potential for diagnostic purposes. When molecular, culture-free methods are used for S. pneumoniae detection, DSS may be considered as an attractive alternative to nasopharyngeal or oropharyngeal swab samples in surveillance studies on pneumococcal carriage.
Bacterial Strains
The S. pneumoniae strains used in this study are listed in Table 1. Pneumococci were grown to mid-log phase in brain-heart infusion broth (BHI, Oxoid, Wesen, Germany), and aliquots were frozen in 10% glycerol at −80 °C. Prior to use, bacterial cells were thawed, washed twice with PBS, titered by culturing tenfold dilutions on blood agar supplemented with gentamicin (SB7-Gent, Oxoid), and re-suspended in PBS to reach a particular CFU concentration. Experiments were performed with serotype 19F strain ATCC6319, unless specified otherwise.
Mock Saliva Experiments
Saliva samples were collected from nine healthy volunteer donors aged 21-49 years, 8 female and 1 male. Individuals were asked to expectorate saliva into a 50-mL polypropylene tube (Sarstedt, Nümbrecht, Germany), which was kept on ice. The saliva samples were considered whole mouth unstimulated saliva. Samples were vortexed vigorously for 10 s, and 100 µL of un-spiked saliva was stored for determination of baseline presence of S. pneumoniae. The remaining saliva was spiked with S. pneumoniae to a final concentration of approximately 10 6 CFU/mL, and vortexed again.
Dried Saliva Spots
Immediately after spiking, 100 µL of saliva was used to inoculate the spot on a diagnostic filter paper card (Whatman 903 Protein Saver Card, VWR International, Amsterdam, The Netherlands) and dried at room temperature (RT) for 2 h; day zero samples were processed for S. pneumoniae detection when spots were fully dry. Unless specified otherwise, all DSS were stored in a sealed plastic zipper bag with a Minipax absorbent packet (Sigma-Aldrich, Zwijndrecht, The Netherlands) as desiccant, and placed in darkness to limit light exposure. The DSS were processed in duplicate whenever possible. Samples were stored at the following temperatures: −20, 4, ~19 (RT), 30, and 37 °C.
Clinical Study
A small selection of samples was used from a study already conducted [15]. The study was conducted on a single day in June 2012, at a rural school of 190 students in the Utrecht province. Fifty students (aged 5 to 10 years, median 8 years) attending two different classes took part in the study. Except for the age of each student, no demographic data were collected. The study was conducted in line with the Ethical Committee Guidelines. Written informed consent was obtained from the parents of each child. Children were asked to spit saliva into a 15 mL polypropylene tube (Sarstedt), and 12 random samples were in parallel snap frozen and processed through DSS as described above. The saliva samples were considered whole mouth unstimulated saliva. One DSS sample at day zero was lost due to technical difficulties.
Isolation of Bacterial DNA
DNA was extracted with DNeasy Blood & Tissue Kit (Qiagen, Venlo, The Netherlands). DSS were cut out of the card and supplemented with 180 µL of 20 mM Tris-Cl, 2 mM EDTA; incubated for 15 min at 95 °C to inactivate DNases; supplemented with an equal volume (180 µL) of 2.4% Triton X-100, 80 mg/mL lysozyme in 20 mM Tris-Cl, 2 mM EDTA, vortexed and incubated at 37 °C for 30 min. The liquid phase was separated from the filter paper by pipetting, mixed with ethanol and processed according to the kit's original protocol. DNA was eluted with 200 µL of an elution buffer and stored at 4 °C.
Real-Time Quantitative PCR Targeting S. pneumoniae
Detection of S. pneumoniae-specific DNA was conducted by real-time qPCR using primers and probes specific for the gene coding for the major S. pneumoniae autolysin lytA [37] and for the pia islet of the iron uptake ATP-binding cassette (ABC) transporter [13]. Genomic DNA of S. pneumoniae 19F was used as a positive control in qPCR, and, unless stated otherwise, 2.5 µL of DNA template was tested in a 25-µL PCR volume. It corresponded to approximately 1.25 × 10 3 CFU of the artificially spiked saliva samples. Clinical samples were classified as positive for S. pneumoniae when C T values for both targeted genes were below 45 [38]. Serotype-specific genes were quantified using the qPCR protocol published by Azzari et al. [14].
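A minimal sketch of the dual-target positivity rule defined above (C T below 45 for both lytA and pia); the sample names and C T values are hypothetical.

```python
# Minimal sketch: a sample is called S. pneumoniae-positive only when
# both the lytA and pia qPCR targets give CT values below 45.
CT_CUTOFF = 45.0

def is_pneumo_positive(ct_lyta, ct_pia, cutoff=CT_CUTOFF):
    """Apply the dual-target positivity criterion; None means no signal."""
    return (ct_lyta is not None and ct_lyta < cutoff
            and ct_pia is not None and ct_pia < cutoff)

samples = {
    "child_01": (32.4, 33.1),   # both targets detected -> positive
    "child_02": (41.8, None),   # pia not detected -> negative
    "child_03": (46.2, 44.0),   # lytA above cutoff -> negative
}
for name, (lyta, pia) in samples.items():
    print(name, "positive" if is_pneumo_positive(lyta, pia) else "negative")
```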
Lower Limit of S. pneumoniae Detection
We spiked saliva with a range of ten-fold PBS dilutions of pneumococcal cells from 10 5 to 100 CFUs per 100-µL volume of saliva, and quantified S. pneumoniae presence in DNA extracted from DSS inoculated with these samples. Cell suspensions in PBS containing corresponding numbers of S. pneumoniae CFUs were processed as a reference curve.
Statistical Analysis
Results were analyzed using GraphPad Prism version 5.0 for Windows (GraphPad Software, San Diego, CA, USA). Due to small sample sizes, we assumed the data was normally distributed and used a Student's t-test. For multiple comparisons, either one- or two-way ANOVA with Bonferroni post-tests were used as indicated. | 2016-03-14T22:51:50.573Z | 2016-03-01T00:00:00.000 | {
"year": 2016,
"sha1": "ab28beb11aa258f24bdf7ee9d1f73a29e7d1ef74",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/17/3/343/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab28beb11aa258f24bdf7ee9d1f73a29e7d1ef74",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
10518484 | pes2o/s2orc | v3-fos-license | Ceftazidime-avibactam has potent sterilizing activity against highly drug-resistant tuberculosis
Ceftazidime-avibactam is highly efficacious against extensive- and multidrug-resistant strains of Mycobacterium tuberculosis.
INTRODUCTION
Highly drug-resistant tuberculosis (TB) has left a large number of patients therapeutically destitute and functionally incurable (1,2). In South Africa, we found that 60% of such difficult-to-treat patients have unfavorable outcomes and are dead within 10 months of discharge (3). These highly drug-resistant cases spread the infection before death, and 50% of the secondary cases were dead by the end of the study. Newer antibiotics such as bedaquiline and delamanid improve outcomes but do not eliminate treatment failure. In patients with extensively drug-resistant TB receiving bedaquiline, the 24-month treatment failure rate was 38%, and patients with isolates resistant to both bedaquiline and delamanid are being documented with increasing frequency (4)(5)(6). The field urgently needs new treatments that can be deployed immediately without waiting for drug and clinical development programs that can take almost a decade.
One solution is reuse of antibiotics and other therapies with proven clinical safety records that are already known to penetrate lung lesions. We disregarded the standard dogma that only certain special antibiotic classes would be effective against Mycobacterium tuberculosis (Mtb) and created a program to deliberately examine all antibacterial agents in current clinical use.
We screened several antibiotic compounds in test tubes and then quickly moved to identify optimal doses for possible clinical use on the basis of the hollow fiber system model of TB (HFS-TB) (7). The HFS-TB allows for quick examination of drug efficacy based on human lung pharmacokinetics and for immediate translation of results from the laboratory to the clinic. The HFS-TB has a 94% predictive accuracy for clinical therapeutic events, such as dose, concentrations/exposures optimal in patients, and expected rates of clinical efficacy, and has been formally qualified as a drug development tool by the European Medicines Agency (EMA) and endorsed by the U.S. Food and Drug Administration (FDA) for this purpose (8)(9)(10).
Recent chemical screens have shown several older cephalosporins, in combination with the β-lactamase inhibitor clavulanate, as potential anti-TB agents (11). However, there are several drawbacks. First, clavulanate inhibition of Mtb's broad-spectrum β-lactamase, BlaC, is slow and relatively inefficient. Second, clavulanate is currently only available for immediate use in combination with amoxicillin and not with cephalosporins. Third, amoxicillin-clavulanate administered for several months for TB is likely to be associated with high rates of side effects, such as diarrhea. We screened several other cephalosporins for anti-TB effect when in combination with the non-β-lactam β-lactamase inhibitor avibactam, which potently inhibits BlaC. One of these, ceftazidime, was first marketed almost 40 years ago and has no activity against Mtb and other Gram-positive bacteria (12). It was recently coformulated with avibactam, and this combination is already in clinical use for Gram-negative bacterial infections. Ceftazidime-avibactam (CAV) has a serum to lung epithelial lining fluid penetration of 32%, which makes it a good drug for pneumonias (13). We tested for CAV efficacy in the HFS-TB using clinically achievable intrapulmonary pharmacokinetics. We examined three Mtb metabolic subpopulations encountered in cavitary TB: logarithmic phase growth bacteria (log-phase growth) that are ordinarily killed by isoniazid in what is defined as bactericidal activity, intracellular Mtb, and semidormant bacteria under acidic conditions that are ordinarily killed by the combination of pyrazinamide and rifampin as part of the sterilizing effect (14)(15)(16)(17). Other more potent cephalosporin-avibactam combinations were identified. However, we focused on CAV because this coformulation is available for immediate use as salvage therapy.
RESULTS
In step 1, we identified the minimum inhibitory concentration (MIC) distribution in multidrug-resistant (MDR) and extensively drug-resistant (XDR) TB clinical isolates. In step 2, we used the CAV concentration-time profiles encountered in patients to identify the bactericidal and sterilizing effect rates in the HFS-TB. In step 3, the HFS-TB results were used together with the known between-patient pharmacokinetic variability of CAV to translate results to TB programs and for clinical trials. These steps allowed for delivery of a dosing regimen for clinical use in less than 9 months, making the drug immediately available for clinical studies.
Screening to identify effect of CAV against Mtb
The efficacy of first-line drugs in the HFS-TB, as well as in the clinic, is well known, which makes them ideal benchmarks. Figure 1 shows that our first step was to perform concentration-effect studies of the first-line drugs along with CAV against log-phase growth Mtb in Middlebrook 7H9 broth (hereinafter "broth") in test tubes and in phorbol myristate ester-activated human-derived THP-1 monocytes infected with Mtb in 24-well plates, using Mtb H37Ra, as described previously (18)(19)(20). We used commercially available CAV (ceftazidime/avibactam ratio of 4:1), which was purchased from our hospital pharmacy. Maximal Mtb kill, denoted by the symbol E max, was 0 to 1.43 log 10 colony-forming units (CFU)/ml for pyrazinamide and 3.32 to 3.56 log 10 CFU/ml for isoniazid, whereas that for rifampin was 5.30 to 5.68 log 10 CFU/ml, after 7 days of coincubation (Fig. 2, A to C). Neither ceftazidime nor avibactam alone killed Mtb, even a little (Fig. 2, D and E). However, the combined CAV killed Mtb with an E max of 4.19 to 7.05 log 10 CFU/ml, exceeding isoniazid and pyrazinamide and equaling rifampin, at concentrations that are clinically achievable (Fig. 2, F and G). A repeat study of the ceftazidime concentration-effect relationship, with avibactam at a concentration of either 0, 1, 5, or 15 mg/liter, revealed that a minimum avibactam concentration of 1 mg/liter was needed to confer the ceftazidime microbial kill. Because neither ceftazidime alone nor avibactam alone killed Mtb, but CAV did (Fig. 2, F and G), the reason for the poor activity of ceftazidime against Mtb is not the lack of a penicillin-binding protein target but β-lactamase activity.
Fig. 1. Overview of the four-step program. In the first step of the program, we examined the effect of CAV in comparison to standard first-line agents in intracellular and extracellular assays in a biosafety level 2 (BSL2) laboratory using avirulent Mtb. After demonstrating potential effectiveness, we then identified the MIC distribution in X/MDR-TB clinical strains from South Africa, in a BSL3 laboratory. In this first step, static concentrations of CAV were used. In step 2, we examined the efficacy of intrapulmonary concentration-time profiles of the CAV in the HFS-TB in several strains, for both bactericidal and sterilizing effect. These studies with dynamic concentrations of CAV against different Mtb metabolic subpopulations identified the concentrations and exposures associated with optimal kill and resistance suppression. They also generated CAV-resistant isolates, which then underwent whole-genome sequencing (WGS) to explore for mechanisms of effect. Step 3 takes place in silico, and uses output of step 2 as well as population-level pharmacokinetic parameters and measures of between-patient pharmacokinetic variability, plus MIC distributions from step 1, in Monte Carlo experiments to identify optimal clinical doses for use in patients with drug-resistant TB and for susceptibility breakpoints for decision-making of whom should be treated with the drug. Step 4 involves handing over of the clinical dose for immediate clinical trial studies and salvage therapy.
Drug-resistant Mtb clinical isolates are susceptible to CAV
Next, we determined how widespread the CAV susceptibility is among Mtb isolates, using two MIC assays. Five laboratory isolates had identical MICs in microbroth dilution and Mycobacteria Growth Indicator Tube BACTEC (MGIT) assays, at MICs shown in Fig. 2H. Next, we used the MGIT assay to identify MICs for 25 clinical strains from South Africa, which included 80% X/MDR-TB strains representing all known Mtb phylogenetic lineages. We identified the MIC distribution shown in Fig. 2H. This shows that the MICs of 24 of the 25 (96%) clinical strains were below the CAV peak concentrations of 90 to 100 mg/liter, achieved with therapeutically achievable concentrations at standard doses. Thus, most clinical strains from X/MDR-TB patients were susceptible to CAV.
CAV human-like intrapulmonary pharmacokinetics have high bactericidal effects
The HFS-TB allows us to perform dose-response studies using concentration-time profiles of antibiotics at the site of infection. We used the HFS-TB of log-phase Mtb H37Ra to recapitulate the intrapulmonary concentration-time profiles of seven CAV doses using the same commercially available CAV formulation (ceftazidime/avibactam ratio of 4:1) administered every 8 hours for 27 days; each infusion was administered over 2 hours, on the basis of pharmacokinetics reported to the FDA and EMA for licensing purposes, and had a half-life of 3.3 hours (13,14,21). We measured the CAV concentrations in each HFS-TB, and pharmacokinetic modeling confirmed the half-life of 3.3 hours. Figure 3A shows that CAV achieved marked microbial kill of the log-phase growth Mtb at bactericidal effect kill rates higher than standard dose isoniazid, pyrazinamide, and rifampin monotherapy with drug-susceptible Mtb in the HFS-TB in the past (18,22,23). Figure 3A shows an unprecedented effect in the HFS-TB, which is 6.0 log 10 CFU/ml kill in just 7 days; the most effective first-line drugs of isoniazid and rifampin have a kill of <2.0 log 10 CFU/ml over the same time period in the same HFS-TB model and in sputum of patients (15,22,23). Thus, at a minimum, CAV administered with human-like pharmacokinetics demonstrated a bactericidal effect exceeding that of first-line drugs as monotherapy and in combination.
CAV therapeutic exposure targets are time-dependent
Next, we wanted to identify the dosing schedule and optimal concentrations or exposures (defined as concentration/MIC) that are associated with maximal microbial kill and resistance suppression. We used a dose-fractionation design and a slightly faster half-life, which allowed us to examine the effects of different dosing schedules and exposure patterns by breaking colinearity that would accompany dose changes, as shown in the drug concentration-time profiles we measured in each HFS-TB in Fig. 3 (B to D). We achieved a half-life of 2.6 ± 0.3 hours (r 2 = 0.99). Figure 3 (B to D) shows that the percentage of time (24 hours) that CAV concentration persisted above MIC (%T MIC ) was lowest with the once-a-day dosing schedule, followed by twice a day, although the peak concentrations were the same despite dosing schedule. Thus, we achieved the experimental design objective. The effect on Mtb burden was assessed using two methods: determining CFU per milliliter (CFU/ml), which is commonly used in the research laboratory, and determining time to positivity (TTP) in the MGIT, which is more commonly used by clinicians and correlates with long-term treatment outcomes (24,25). On the basis of the Akaike information criteria scores for exposure versus bacterial burden model fits, the %T MIC was the best driver of microbial kill by both CFU/ml and TTP, better than peak-to-MIC and area under the concentration-time curve to MIC (AUC/MIC). This means that the CAV effect against Mtb will be optimized by the three-times-a-day dosing schedule, and the once-a-day dosing schedule would kill less. Figure 3E and fig. S1 show these exposure-effect relationships between %T MIC and Mtb burden for each sampling day. We used these exposure-effect relationships to calculate the CAV exposure that would achieve the same kill rates in the HFS-TB and in the sputum of patients as those of the first-line drugs, which are 1.95 log 10 CFU/ml (or 0.28 log 10 CFU/ml per day) for rifampin, 1.8 log 10 CFU/ml (or 0.6 log 10 CFU/ml per day) for isoniazid during the first 7 days, and 0.10 log 10 CFU/ml per day after the first 4 days for pyrazinamide (15,18,22,23,26,27). The CAV exposure that achieved the same kill rates as those of the most active of the first-line drugs was a %T MIC of ≥47%. We also calculated the CAV exposure associated with maximal kill, which was a %T MIC of ≥63%. Therefore, CAV has to be dosed at exposures exceeding a %T MIC of 63% (that is, 63 to 100%) for optimal efficacy. Exposure target values are known to be the same between the HFS-TB and TB patients (8,27,28). Thus, the %T MIC values of 47 and 63% are the exposure targets that must be achieved or exceeded by CAV doses in patients to achieve the same amounts of microbial kill in patients, as was observed in the HFS-TB (8,27,28).
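To make the %T MIC exposure index concrete, the sketch below computes the steady-state fraction of the day above a given MIC for a 2-hour infusion given every 8 hours with the reported 3.3-hour half-life, using a one-compartment model; the dose and volume of distribution are placeholder assumptions, not fitted study parameters.

```python
# Exploratory sketch of the %T>MIC calculation for a 2-h infusion q8h,
# one-compartment kinetics with a 3.3-h half-life. Dose and volume of
# distribution are assumed values for illustration.
import numpy as np

HALF_LIFE = 3.3            # hours (reported CAV half-life)
K = np.log(2) / HALF_LIFE  # first-order elimination rate constant, 1/h
T_INF = 2.0                # infusion duration, hours
TAU = 8.0                  # dosing interval, hours
DOSE = 2000.0              # mg ceftazidime per dose (standard 2-g dose)
V = 17.0                   # liters, assumed volume of distribution

def conc_single_dose(t):
    """Concentration from one infusion starting at t=0 (mg/liter)."""
    t = np.asarray(t, dtype=float)
    rate = DOSE / T_INF
    during = (rate / (K * V)) * (1 - np.exp(-K * np.clip(t, 0, T_INF)))
    after = np.exp(-K * np.clip(t - T_INF, 0, None))
    return np.where(t < 0, 0.0, during * after)

# Superpose 15 doses and evaluate the final (steady-state) 24 hours
t = np.linspace(0, 15 * TAU, 20000)
c = sum(conc_single_dose(t - i * TAU) for i in range(15))
last_day = t >= (t[-1] - 24.0)

for mic in (8, 16, 32, 64):
    pct = 100.0 * np.mean(c[last_day] > mic)
    print(f"MIC {mic:>3} mg/liter -> %T>MIC ~ {pct:.0f}%")
```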
We also modeled the size of the drug-resistant subpopulation (CFU/ml), captured using a CAV concentration of six times the MIC on agar (18,28). The size of the drug-resistant subpopulation was driven by the AUC/MIC ratio achieved, as shown in Fig. 3F, with suppression of acquired drug resistance at an AUC/MIC ratio of 250. We collected 12 of the CAV-resistant isolates, confirmed that they grew at a CAV concentration of 96 mg/liter in broth, and then extracted DNA for WGS. We identified 149 single-nucleotide variants (SNVs) common to the 12 CAV-resistant isolates not present in the wild type, shown in table S1. The nonsynonymous SNVs are shown in Fig. 4. Excluding the mutations in the highly polymorphic genes encoding proteins carrying proline-glutamic acid (PE) or proline-proline-glutamic acid (PPE) motif, there were a total of 53 genes with at least one SNV, mostly in genes encoding cell wall and membrane components and processes. The most notable SNVs included genes encoding the bifunctional penicillin-binding protein ponA1 (average coverage of 192 times) transpeptidase domain, rpfE (average coverage of 200 times) encoding Mtb's lytic transglycosylase, and secretion system genes. In several resistant isolates (but not all), there was also a mutation in rpfA, which encodes the lytic transglycosylase. Notably, there were no mutations in blaC, Ldt Mt1, and Ldt Mt2 (average coverage of 125 times). We confirmed the lack of mutations in blaC, Ldt Mt1, and Ldt Mt2 in a second DNA extraction of the strains using Sanger sequencing.
Both ceftazidime and avibactam are highly concentrated inside Mtb-infected cells
To determine whether CAV would be effective against intracellular Mtb, we infected THP-1 monocytes with Mtb, as described above, and then inoculated them into HFS-TB conditioned with RPMI and 2% fetal bovine serum (FBS) (19,20). HFS-TB replicates were then treated with CAV daily for 26 days, which we confirmed by repetitive sampling of both the central compartment (systemic concentrations) and Mtb-infected monocytes. Figure 5A shows the concentrations of ceftazidime and avibactam achieved inside the Mtb-infected monocytes: The concentrations achieved by both drugs at all time points were much higher inside the monocytes than extracellularly. Thus, all CAV-treated systems achieved a %T MIC of 100% both extracellularly and intracellularly. Microbial kill results based on TTP are shown in Fig. 5B, showing that the CAV monotherapy increased TTP >2-fold from day 0. Figure 5C shows that Mtb in untreated control HFS-TB replicates grew at an average rate of 0.119 log 10 CFU/ml [95% confidence interval (CI), 0.051 to 0.186], with maximal growth rate of 0.293 log 10 CFU/ml (95% CI, 0.156 to 0.429) in the first 10 days. Figure 5C further demonstrates that in the CAV-treated HFS-TB replicates, microbial kill was 3.0 log 10 CFU/ml below that of day 0, and with a difference of >5 log 10 CFU/ml with untreated controls. This kill rate magnitude is in the same range as that of the three first-line drugs combined in the same HFS-TB model in the past (29). These results mean that CAV kills the intracellular Mtb subpopulation and has the advantage of high intracellular penetration.
CAV sterilizing effect is similar to that of the three first-line drugs in combination
Next, we wanted to compare the sterilizing effect of CAV monotherapy to the first-line drug combination (isoniazid, rifampin, and pyrazinamide). We used the HFS-TB model of extracellular semidormant Mtb H37Rv, in broth acidified to pH 5.8 (18), treated three times daily for 6 weeks. The drug concentrations we measured and the exposures achieved in each replicate HFS-TB are shown in Fig. 6A. Because a %T MIC of 100% and the exposures of first-line drugs achieved are those associated with optimal microbial kill for each, we compared the best possible sterilizing effect of CAV to the best of the first-line drug combination. Figure 6B shows that untreated controls grew very slowly in this HFS-TB model, at a rate of 0.026 log 10 CFU/ml (95% CI, 0.009 to 0.04), which is 11.27 times slower than the intracellular population and is in the expected range for semidormant bacilli (30). There was a biphasic decline in CFU/ml with CAV monotherapy treatment, a 2.3 log 10 CFU/ml decline during the first week, followed by a slower decline, which was nevertheless sustained for up to 6 weeks. The microbial kill by the standard three-drug combination therapy, although better than that of the CAV monotherapy, was only marginally so. By the end of the experiment on day 42, the CAV monotherapy had killed exactly the same as the first-line drug combination. Thus, CAV monotherapy may have a sterilizing effect that is only slightly less than that of the three-drug first-line drug combination.
Treatment of X/MDR-TB and incurable TB would be achieved with doses tolerated by patients
The main drivers of therapeutic outcomes in TB patients are pharmacokinetic variability and Mtb isolate MICs (31)(32)(33)(34)(35)(36)(37). The translational pathway that starts with HFS-TB findings, followed by Monte Carlo simulations that take into account the pharmacokinetic and MIC variability, has a quantitative forecasting accuracy of 94% for identified optimal doses and susceptibility breakpoints (8,28). Exposure target values are known to be the same between the HFS-TB and TB patients; thus, the same exposure targets of a %T MIC of either 47 or 63% were used to translate CAV efficacy from the laboratory to the clinic (8,27,28).
To identify the CAV clinical dose that would achieve or exceed the exposure target in >90% of 10,000 TB patients, we entered the population pharmacokinetic parameter estimates of CAV and their interindividual variability in children and in adult TB patients, as well as the lung penetration ratios (13, 38-40), in subroutine PRIOR of ADAPT software (table S2). Each of the computational steps for the simulations was outlined in detail in Materials and Methods. We validated the clinical trial simulations using steps we previously described in detail (28). Our simulations identified the ceftazidime and avibactam concentrations for the standard CAV dose of 50 mg/kg in 10,000 young children (Fig. 7A), which were similar to those identified in the clinic with that dose (38). This means that the simulation accurately reproduced the drug concentrations achieved by specific doses in the children. For the exposure target %T MIC of 47%, which gives similar microbial kill as the most active standard first-line drugs, CAV doses of 50, 100, 150, and 200 mg/kg infused over 2 hours every 8 hours achieved or exceeded the target at each MIC in proportions of children shown in Fig. 7B. More than 90% of the children treated with a CAV dose of 50 mg/kg achieved this exposure target in Mtb isolates with an MIC of up to 32 mg/liter, 90% of those treated with a CAV dose of 100 mg/kg achieved the target with an MIC of up to 64 mg/liter, whereas 90% of the children treated with CAV doses of 150 to 200 mg/kg achieved or exceeded the target with an MIC of up to 128 mg/liter. In the HFS-TB, a CAV %T MIC of 63% actually kills 11 times more than isoniazid monotherapy, which could shorten therapy duration. Figure 7C shows the target attainment for the %T MIC of 63% in children treated with the same four doses: The MICs, above which target attainment falls to less than 90%, were one tube dilution lower than the %T MIC of 47% target. Figure 7D is a summation over the entire MIC range for the clinical isolates from South Africa and is the proportion of 10,000 children who achieved each of the two exposure targets. Figure 7D shows that the dose of 100 mg/kg, which has been shown to be tolerated by children (40), would achieve exposures that kill as well as the first-line anti-TB drugs in 90% of the children and would also achieve kill rates faster than those for the first-line drugs in 60% of the children. Simulations of 10,000 children for the avibactam component revealed that all tested doses achieved the target of 1 mg/liter of avibactam over both 47 and 63% of the dosing interval, when administered as the commercially available CAV combination that has a ceftazidime/avibactam ratio of 4:1.
We performed similar Monte Carlo experiments for adult cavitary TB, in which 80% of Mtb is extracellular (16); patients have tolerated seven to eight times the standard 2-g dose even as a continuous infusion in the past (39,40). Figure 8A shows that concentrations achieved in our 10,000 simulated adult patients were similar to those reported in the FDA docket in patients treated with a 2-g dose three times a day. Therefore, our simulations accurately recapitulated concentrations achieved by the specific doses in the clinic. Figure 8B shows that for the exposure target of %T MIC of 47%, the target attainment was achieved in >90% of the patients with Mtb isolates that had CAV MICs of ≤16 mg/liter for the standard 2-g dose and MICs of up to 64 mg/liter for a 12-g daily dose. Figure 8C shows that the more stringent exposure target of %T MIC of 63% was more difficult to achieve, at each MIC, in adults than in children. Figure 8D shows the proportion of 10,000 adult TB patients who would achieve each of the two exposure targets over the entire MIC range; the graph flattens out after 12 g for the %T MIC of 47% target. Therefore, 12 g is the dose to be used as salvage therapy in adult TB. With regard to avibactam, all doses tested achieved the target of 1 mg/liter of avibactam over 47% of the dosing interval. In summary, we identified that a CAV dose of 100 mg/kg in children and a 12-g daily dose in adults would achieve kill rates similar to rifampin; 60% of children would actually exceed that kill rate.
DISCUSSION
We found that CAV has a remarkable sterilizing effect at clinically achievable concentrations. Neither ceftazidime nor avibactam on its own killed Mtb, but the combined formulation did. This suggests that the natural resistance of Mtb to ceftazidime is via degradation of the drug and not because Mtb does not have the cephalosporin's target. blaC is the lone gene that encodes a β-lactamase in Mtb (41). Mtb BlaC shows broad activity against cephalosporins such as ceftazidime; however, clavulanate is a relatively poor inhibitor of this enzyme (42). In contrast, avibactam is an effective BlaC inhibitor. However, none of our 12 resistant isolates harbored a BlaC mutation. Instead, all 12 CAV-resistant isolates harbored both a PonA1 Pro631Ser and an RpfE Arg126Gln nonconservative mutation. ponA1 encodes penicillin-binding protein 1 (PBP1), a bifunctional enzyme with a C-terminal transpeptidase in residues 561 to 820, catalyzing (D,D) 4→3 linkages in peptidoglycan synthesis and with a hydrolytic N-terminal domain transglycosylase (43). The mutations in these genes in CAV-resistant isolates suggest that PBP1 could be a putative site for CAV binding. CAV-resistant isolates also had rpfA and rpfE mutations, encoding resuscitation protein factors. Mtb's five resuscitation protein factors share a lytic transglycosylase domain and hydrolyze glycan chains in peptidoglycan. One of these, rpfB, forms a complex with the endopeptidase RipA, which synergistically hydrolyses peptidoglycan; this hydrolysis action by the complex is inhibited by PBP1 (43,44). RipA also interacts with rpfE, which shares 66% identity with rpfB (44). The rpf mutations further suggest CAV interference with peptidoglycan synthesis. However, our findings are not definitive and only give a plausible hypothesis of the ceftazidime Mtb target; thus, more detailed biochemical work will be needed to confirm our hypothesis.
In the global fight against TB, we have run out of options for the treatment of many patients. Despite the recent introduction of bedaquiline and delamanid in high-TB burden countries, there continues to be a large number of patients with incurable TB. We found that CAV, which is already commercially available as a combined formulation, is likely to be useful as salvage therapy for patients with difficult-to-treat TB: It can also be added to bedaquiline and delamanid. Use of CAV circumvents the need to use amoxicillin-clavulanate in conjunction with injectable β-lactams and the attendant common side effects. In addition, the 9-month MDR-TB treatment regimen recently announced by the World Health Organization includes aminoglycosides as an important part of the regimen. These injectables are associated with hearing loss in up to 70% of adult patients and in up to 25% of children (45,46). CAV could be a less toxic alternative and arguably more effective. In addition, CAV could be safer for use in neglected TB populations, such as pregnant women, for whom anti-TB drugs with minimal teratogenicity are needed. Finally, CAV has no known interactions with antiretroviral agents and can thus be used in HIV/TB co-infection.
We identified a CAV dose of 100 mg/kg three times a day for use in the treatment of TB in children, and up to 12 g in adults, as optimal. Patients have tolerated avibactam doses of up to 2000 mg and ceftazidime doses of up to 200 mg/kg as a continuous infusion in the past (38)(39)(40). However, given that long-term administration would be required to treat TB, the full side effect profile over longer treatment durations is unknown and will require careful documentation. Nevertheless, given the urgent need, the dose and dosing schedule should be tried as salvage therapy in patients who have no other treatment options. We also identified a CAV susceptibility breakpoint of 128 mg/liter for Mtb. This remains to be validated in the future; however, the approach we used, HFS-TB findings followed by Monte Carlo simulations, has identified susceptibility breakpoints for rifampin, isoniazid, and pyrazinamide, which were later confirmed in clinical studies (34)(35)(36)(37). This is also likely in the case of CAV.
Finally, we introduce a paradigm for rapid screening of different antibiotics for effect against Mtb, examining susceptibility in X/MDR-TB strains and then quickly taking promising candidates through steps to identify optimal doses. This approach takes months rather than years and could alleviate the current crises while new anti-TB molecules are being developed. These medications, such as CAV, could be rapidly advanced to clinical testing and to use as salvage therapy.
Bacterial strains and cell lines
The following laboratory Mtb strains were used in the experiments: H37Ra [American Type Culture Collection (ATCC) #25177], H37Rv (ATCC #27294), CDC 1551, HN878, and Mtb 18b (donated by S. Cole). The 25 clinical strains from TB patients were collected by the Medical Research Council of South Africa. Human-derived THP-1 cells (ATCC TIB-202), grown in RPMI 1640/10% FBS, were infected with H37Ra, as described previously (19,23,29). All studies with virulent Mtb strains were performed in a BSL3 laboratory.
Materials and drugs
Ceftazidime and CAV were purchased from the Baylor University Medical Center pharmacy. Avibactam was purchased from BOC Sciences. Hollow fiber cartridges were purchased from FiberCell.
Human and animal subjects
No human or animal studies or experiments were performed.
Determination of MICs
CAV (4:1 ratio) MICs were examined using the final ceftazidime concentrations of 0, 1, 2, 4, 8, 16, 32, 64, 128, and 256 mg/liter, in triplicate. In the MGIT, the lowest concentration of CAV that prevented the drug-containing tube from fluorescing within 2 days of the drug-free tube or control was defined as the MIC.
The subsequent HFS-TB studies examined a single CAV dosing scheme, designed to achieve a %T MIC of 100% in an intracellular Mtb HFS-TB model of H37Ra for 26 days and a sterilizing effect model of Mtb H37Rv at pH 5.8. Central and peripheral compartments were sampled for bacterial burden, as described above and in Results.
WGS of CAV-resistant isolates
DNA was extracted from CAV-resistant isolates using the methods described previously (47). Sequencing libraries were prepared using the KAPA Biosystems Hyper kit (KK8504). Six-base-long unique barcodes were added to each sample by ligating Illumina-compatible adapters, and after size selection, libraries were amplified using four polymerase chain reaction cycles, followed by cleaning using XP beads. About 9 pM of each library was used for sequencing on HiSeq 2500 PE100 (paired-end 100 base pairs). After sequencing, all the reads were sorted on the basis of the attached barcodes using SAMtools (http://samtools.sourceforge.net/). Raw reads were processed to remove adapter artifacts and to deconvolute the set of reads into their constituent isolates. Reads with no identifiable barcode or with a barcode containing one or more ambiguous base calls were excluded. CLC Genomics Workbench (v9.5.2) was used to perform quality control (read quality, nucleotide content, and sequence redundancy) as well as to align sequencing reads to the reference Mtb genome NC_000962 and to make the variant calls for SNV detection.
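Conceptually, the downstream SNV analysis reduces to keeping variants shared by all 12 resistant isolates and absent from the wild-type parent, with the highly polymorphic PE/PPE loci masked. The sketch below shows that filtering logic with invented variant calls; the actual calling was done in CLC Genomics Workbench.

```python
# Conceptual sketch of the variant-filtering step: retain SNVs present
# in every resistant isolate, absent from the wild type, and outside
# PE/PPE genes. Gene names and positions here are placeholders.
wild_type = {("Rv0050", 123), ("Rv1009", 88)}
isolate_calls = [
    {("ponA1", 1891), ("rpfE", 377), ("Rv0050", 123), ("PE_PGRS3", 55)},
    {("ponA1", 1891), ("rpfE", 377), ("PE_PGRS3", 55)},
    {("ponA1", 1891), ("rpfE", 377), ("rpfA", 210)},
    # ... the remaining resistant isolates would be listed here
]

common = set.intersection(*isolate_calls) - wild_type
common = {(gene, pos) for gene, pos in common
          if not gene.startswith(("PE", "PPE"))}  # mask PE/PPE loci
print(sorted(common))  # -> [('ponA1', 1891), ('rpfE', 377)]
```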
CAV drug assay
Ceftazidime concentrations in the samples collected from the central compartment of HFS-TB were analyzed by liquid chromatography-tandem mass spectrometry in positive ion mode. Ceftazidime and ceftazidime-D5 (internal standard) were purchased from Toronto Research Chemicals and Sigma, respectively. Calibrator, controls, and internal standard were included in each analytical run for quantitation. Stock solutions of ceftazidime and internal standard were prepared in 80:20 methanol/water at a concentration of 1 mg/ml and stored at −20°C. A seven-point calibration curve was prepared by diluting ceftazidime stock solution in drug-free media (0.25, 1, 5, 10, 25, 50, and 100 mg/ml). Quality control samples were prepared by spiking media with stock standards for two levels of controls of 0.4 and 8 mg/ml. Samples were prepared in 96-well microliter plates by adding 10 µl of calibrator, quality controls, or sample to 190 µl of 0.1% formic acid in water containing internal standard (1 mg/ml) followed by vortex. Chromatographic separation was achieved on an Acquity UPLC HSS T3 analytical column (1.8 µm, 50 × 2.1 mm) (Waters) maintained at 30°C at a flow rate of 0.2 ml/min with a binary gradient with a total run time of 6 min. The observed ion mass/charge ratio (m/z) values of the fragment ions were 547.11→468.11 for ceftazidime and 552.15→468.11 for ceftazidime-D5. Sample injection and separation were performed using Acquity UPLC interfaced with a Xevo TQ mass spectrometer (Waters). All data were collected using MassLynx version 4.1 SCN810. The limit of quantitation for this assay was 0.25 mg/ml. The between-day and within-day percentage coefficients of variation were 14 and 21%, respectively.
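The quantitation step of such an assay is a linear calibration: the sketch below fits the seven-point curve and reads back the two QC levels. The analyte/internal-standard peak-area ratios are made-up numbers, not the assay's real responses.

```python
# Hedged sketch of linear calibration for the drug assay; peak-area
# ratios are invented for illustration only.
import numpy as np

cal_conc = np.array([0.25, 1, 5, 10, 25, 50, 100.0])                # calibrators, mg/ml
cal_ratio = np.array([0.021, 0.08, 0.41, 0.80, 2.02, 4.05, 8.1])    # assumed responses

slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

def quantify(ratio):
    """Concentration from analyte/IS peak-area ratio via the curve."""
    return (ratio - intercept) / slope

for level, ratio in {"QC low (0.4)": 0.033, "QC high (8)": 0.65}.items():
    print(level, "->", round(quantify(ratio), 2), "mg/ml")
```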
Pharmacokinetic and pharmacodynamic modeling
Ceftazidime, avibactam, rifampin, pyrazinamide, and isoniazid concentrations were analyzed using ADAPT software, as described previously (29). Pharmacokinetic parameter estimates identified in the models were then used to calculate the AUC 0-24, AUC 0-24/MIC, peak/MIC, and %T MIC in each HFS-TB. For monotherapy experiments, microbial kill versus exposure was examined using the inhibitory sigmoid E max model.
Monte Carlo experiments
Monte Carlo experiments allow for examination of different drug doses to determine whether they will achieve the exposure targets associated with specific rates of microbial kill in patients. We examined this target attainment in at least 10,000 patients; this number is required to stabilize the tail of the variance. Population pharmacokinetic parameter estimates for CAV, and their covariance, for either children or adults, shown in table S2, were based on studies published previously (13, 38-40) and were entered as the domain of input into subroutine PRIOR of ADAPT software. Both normal and lognormal distributions were examined. We used the option for population simulation without noise. The following doses were examined for children with TB: 50, 100, 150, and 200 mg/kg. The following doses were examined for adults with TB: 2, 4, 6, 8, 12, 14, and 16 g. We used two levels of validation. For the first level, we compared the simulated data to determine whether the pharmacokinetic parameter estimates and variances in the simulated subjects were similar to the data sets entered into the domain of input. For the second level of validation, we examined a separate database, such as the one reported to the FDA, to determine whether the drug concentrations achieved by the standard doses were similar to those we identified in our Monte Carlo experiments.
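A simplified sketch of the Monte Carlo target-attainment logic follows: pharmacokinetic parameters are drawn with between-patient variability, %T MIC is computed per simulated patient (here with a bolus approximation rather than the 2-hour infusion, for brevity), and the probability of target attainment (PTA) at each MIC is summed over an assumed MIC distribution to give the cumulative fractional response (CFR). The variability magnitudes and MIC frequencies are illustrative, not the study's inputs.

```python
# Simplified Monte Carlo PTA/CFR sketch. Bolus kinetics replace the
# 2-h infusion; variability CVs and the MIC distribution are assumed.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000
TAU, DOSE = 8.0, 2000.0                                # hours, mg
V = rng.lognormal(np.log(17.0), 0.25, N)               # liters (assumed CV)
k = rng.lognormal(np.log(np.log(2) / 3.3), 0.25, N)    # 1/h, around the 3.3-h half-life

cmax = DOSE / V  # bolus peak concentration per simulated patient

def pct_t_above_mic(mic):
    """% of the dosing interval with concentration above the MIC."""
    t_above = np.where(cmax > mic, np.log(cmax / mic) / k, 0.0)
    return 100.0 * np.minimum(t_above, TAU) / TAU

mics = np.array([2, 4, 8, 16, 32, 64, 128])
freq = np.array([0.04, 0.08, 0.20, 0.28, 0.24, 0.12, 0.04])  # assumed MIC distribution

pta = np.array([np.mean(pct_t_above_mic(m) >= 47.0) for m in mics])
cfr = np.sum(pta * freq)   # CFR = sum_i PTA_i * F_i
for m, p in zip(mics, pta):
    print(f"MIC {m:>3}: PTA = {p:.2f}")
print(f"CFR over MIC distribution = {cfr:.2f}")
```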
Each of the doses was examined for the ability to achieve, or exceed, the target CAV exposure %T MIC of either 47 or 63% derived in the HFS-TB studies, at each MIC. We examined each of these for MICs ranging from 2 to 128 mg/liter, based on a twofold dilution. The MIC range was from our results in Fig. 2H. Probability of target attainment (PTA) at each MIC was used to calculate the cumulative proportion of patients [also termed cumulative fractional response (CFR)] who would achieve or exceed the ceftazidime exposure target %T MIC of either 47 or 63%, on summation over the MIC range, based on the formula CFR = Σ_i (PTA_i × F_i), where PTA_i is the probability of target attainment at each MIC and F_i is the proportion of isolates at each MIC (i). For avibactam, the target concentration was 1 mg/liter. The percentage of time that avibactam concentration persisted above 1 mg/liter was set to be equal to the %T MIC for ceftazidime (47 or 63%), assuming that ceftazidime only works when sufficient concentrations of avibactam are achieved. We assumed a ceftazidime/avibactam concentration ratio of 4:1 in all simulations, as in the current commercial preparation. | 2017-09-02T11:13:57.004Z | 2017-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "859fe06ab1d93848f9ee7ffcb66ad6f5fa05f74a",
"oa_license": "CCBYNC",
"oa_url": "https://advances.sciencemag.org/content/advances/3/8/e1701102.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "859fe06ab1d93848f9ee7ffcb66ad6f5fa05f74a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
245196924 | pes2o/s2orc | v3-fos-license | MACROECONOMIC STABILITY AND TRANSPORT COMPANIES’ SUSTAINABLE DEVELOPMENT IN THE EASTERN EUROPEAN UNION
The paper’s primary aim is to evaluate the influence of macroeconomic stability on transport companies’ sustainable development in the eastern EU from 2008 to 2019. The first part discusses the theoretical problems. The empirical part includes the methodology, results of the research and conclusions. To determine the relationship between variables, we use Pearson’s R and the Ordinary Least Square Method. The contribution to knowledge is using the pentagon of macroeconomic stability to evaluate macroeconomic stabilisation’s influence on transport companies’ sustainable development. The results indicate that macroeconomic stability is one of the essential determinants of the transport companies’ sustainable development. According to Pearson’s R, the highest level of dependence is in Slovenia (0.96), Bulgaria (0.9), and Slovenia (0.83). The lowest is in Latvia (0.69). The OLS regression results indicate that the highest significance is in Slovakia (α 1 = 1.994), and the lowest is in Lithuania (α 1 = 0.691). The states’ economic policies should favour the freedom to conduct business, create appropriate legal regulations, and support ecological investments. It is necessary to work towards a stable and fair tax system and to ensure access to finance. The issue is contemporary and requires further analysis.
Introduction
The relationship between macroeconomic stability (M SP ) and transport companies' sustainable development (SD TC ) is a current and important issue in the context of climate degradation. The literature on companies' sustainable development is gaining importance and requires more in-depth and broader analysis (Evers, 2018;Chang, 2020).
Researchers undertake theoretical analyses of sustainable development, focusing on its evaluation and development determinants (Bordon & Schmitz, 2015). Many of them focus on the situation of individual economic entities (Mao et al., 2018), analyze reports on the sustainable development of companies (Harymawan et al., 2020), and attempt to evaluate and measure companies' sustainable development and determine its determinants (Misztal, 2019;Matinaro et al., 2019;Comporek et al., 2021). Some researchers analyze transport companies in terms of their impact on the natural environment (Brussel et al., 2019;Pieloch-Babiarz et al., 2021); analyses focus on green supply chains and ecological innovations (Andersson & Forslund, 2018) or attempt to identify the determinants influencing the sustainable development of transport companies (Brussel et al., 2019).
Although macroeconomic stability as a factor in the development of companies is the subject of analyses and scientific consideration, there is a certain insufficiency, as there are no analyses of the influence of M SP on SD TC . Researchers indicate that the macroeconomic situation, including the level of GDP, inflation, unemployment, and the trade balance, affects the transport sector (Misztal & Kowalska, 2020;Comporek et al., 2021). Investigating the nature and direction of these links will help increase the dynamics of companies' sustainable development and support the implementation of a more effective economic and environmental policy.
The paper's primary aim is to evaluate the influence of M SP on SD TC in the eastern EU from 2008 to 2019. The research supplements the literature on the subject and is important from the point of view of implementing states' economic policy. To evaluate the statistical relationship between the variables, the Authors use the Ordinary Least Square Method, which is commonly used for similar analyses (Oberhofer & Dieplinger, 2014). The estimated model is linear and fulfils the conditions necessary for the application of this method.
The research sample includes transport companies from the countries of the eastern European Union. The research sample covers the years from 2008 to 2019. Transport companies were selected for the research sample due to their role in developing other economic sectors. Moreover, this sector has one of the largest negative impacts on the natural environment.
The structure of the paper is as follows: an introduction, a literature review, a research methodology, research results, conclusions, and references.
The Authors discuss selected theoretical issues connected with the sustainable development of transport companies in the context of macroeconomic stability. The empirical part of the paper presents the research results and conclusions. We build a single-equation model and use Pearson's R and the Ordinary Least Square Method (OLS) to verify the research hypothesis. The research's significant limitation is that it does not consider the situation before the economic crisis and its impact on companies' sustainable development. Also, only one explanatory variable was included in the model. Therefore, further research should be carried out to identify the key determinants of companies' sustainable development. Moreover, the model considers only quantitative data, which is also a significant limitation.
The literature review
Sustainable development means achieving the best economic performance while respecting the environment and social development (Evers, 2018;Cohen et al., 2021). Over the years, the concept of sustainable development evolved significantly, becoming a key reference area in many global programs and initiatives for the common good (Mao et al., 2018;Pieloch-Babiarz et al., 2021). Business activities are fundamental for stable economic growth. Unfortunately, they very often have a negative influence on the natural environment (Škare & Golja, 2013;Słupik & Lorek, 2019). Companies should implement the assumptions of sustainable development into their business processes (Salari & Bhuiyan, 2018;Powe, 2020). This requires achieving the best possible financial results, multidimensional management, testing various business models and scenarios, implementing continuous learning processes, and identifying and mitigating threats to achieving sustainable development goals (Misztal, 2019;Saygili et al., 2021). The implementation of sustainable development tasks provides a competitive advantage (Suprayoga et al., 2020).
Numerous empirical studies focus on the environmental activities of transport companies (Valjevac et al., 2018;Banik & Lin, 2019). It is necessary to minimize the negative impact of transport entities, create balanced transport systems, and implement eco-innovation (Zikic, 2018). Ecological activities should reduce emissions of harmful substances and waste, minimize the use of non-renewable resources, reduce noise, etc. (Misztal, 2019;Cohen et al., 2021).
The factors of transport companies' sustainable development are internal (financial situation, environmental awareness, etc.) and external (micro- and macroeconomic factors) (Bordon & Schmitz, 2015;Andersson & Forslund, 2018;Brussel et al., 2019). One crucial factor for sustainable development is macroeconomic stabilization, which means a lasting economic balance (internal and external) in both the real and monetary aspects (establishing a macroeconomic system characterized by an equilibrium of flows and stocks alike). It eliminates uncertainty in business and boosts future economic activity growth (Kołodko, 1993;Sokolov Mladenović et al., 2019;Chang, 2020).
The company's sustainable development is strongly associated with the level of macroeconomic growth (Škare & Hasić, 2016;Comporek et al., 2021). A higher economic level means higher expenditure on research and development, greater availability of knowledge and greater environmental awareness of customers. Thus, stable economic growth leads to rationalization of decisions in environmental protection (Cek & Eyupoglu, 2020).
Macroeconomic stability, understood as stable conditions for economic growth, is of key importance for sustainable economic development. The improvement of stability is related to improving business conditions and stable legal regulations (Misztal & Kowalska, 2020;Lisiński et al., 2020). Most researchers emphasize that high GDP, low inflation, and a low unemployment rate increase companies' confidence and improve their sustainable development (Krajnakova et al., 2018;Misztal, 2019). Companies' sustainable development also depends on interest rates, foreign investments, and government expenditure (Barkauskas et al., 2015).
Macroeconomic stability ensures full and productive employment and decent work for all people. Hence, from the perspective of the sustainable development of companies, a decrease in the unemployment rate has a positive effect on companies' sustainable development (Fedulova et al., 2019). As for interest rates, they largely influence the investment decisions of companies. Higher interest rates mean a higher price of credit and fewer ecological innovations (Wu et al., 2021).
Macroeconomic stability affects the sentiments and expectations of entrepreneurs about the future. A good economic situation is conducive to undertaking ecological investments (Kekre, 2016;Raczkowski, 2015;Harting, 2019). There is also a positive correlation between macroeconomic conditions and consumer expectations. There is pressure on companies in developed countries to take care of the environmental and social aspects (Pieloch-Babiarz et al., 2021).
The methodology of the research
The paper's primary aim is to evaluate the influence of macroeconomic stability on transport companies' sustainable development in the eastern EU from 2008 to 2019. The research period and the sample selection result from the adopted purpose and the availability of data. The study's significant limitation is that it does not consider the situation before the economic crisis and its impact on companies' sustainable development. Moreover, the model considers only quantitative data, which is also a significant limitation.
We focus on eleven eastern European Union countries, which have several common characteristics, including geolocation, history, the transformation of their economic systems, and changes in business operations.
The study refers to transport companies, which can contribute to the development of the region (the sample was selected to ensure the results' statistical significance). It is also relevant that transport companies emit several pollutants, which harm the natural environment and human health and life.
The central research hypothesis is "Macroeconomic stability has a statistically significant influence (p < 0.05) on the transport companies' sustainable development in the eastern European Union in the period 2008-2019". To evaluate the significance of the variable M SP 's influence on the variable SD TC , we verify the null hypothesis H0: αj = 0 against the alternative hypothesis H1: αj ≠ 0 (p-value < 0.05).
Assumption: macroeconomic stability is one of the decisive determinants affecting green business investments.
The following sub-hypotheses are also highlighted: -H1: "The transport companies' sustainable development in the eastern part of the EU has a positive trend from 2008 to 2019". The following equation describes the dynamics: SD TC = α1t + α0; we verify the hypothesis H0: α1 > 0 against the alternative hypothesis H1: α1 < 0. Justification for the H1 hypothesis: actions taken by state and EU authorities to initiate environmental and social investments, including the introduction of standards and legal principles in environmental protection. The positive trend is also the result of the increased environmental awareness of entrepreneurs and customers.
-H2: "The macroeconomic stability in the eastern EU has a positive trend from 2008 to 2019".
We verify the hypothesis H0: α1 > 0 against the alternative hypothesis H1: α1 < 0. Justification for the H2 hypothesis: the research period covers the time of recovery from the economic slowdown and of slow growth in corporate investment.
-H3: "The highest average value of the transport companies' sustainable development (SD TC ) is in countries with the highest mean value of macroeconomic stability (M SP )". We verify the hypothesis by comparing the countries' mean values. Justification for the H3 hypothesis: M SP means stimulating economic growth, increasing employment, ensuring internal balance (by reducing the inflation rate), and providing external balance (by striving to achieve the balance of payments). Thus, attaining M SP has a positive effect on the level of investment in the companies' sector.
The variables are stimulants (which positively affect the synthetic indicators) and destimulants (analytical variables whose increase decreases the sustainable development indicator).
We use the following variables to assess the indicators: ΔGDP – the change in gross domestic product, HICP – the Harmonised Index of Consumer Prices, U – the unemployment rate, G – government debt, and CA – the current account balance relative to gross domestic product. We use Pearson's R to measure the correlation between M SP and SD TC and create two types of regression model (the models meet the conditions for the application of the least square method): a single-factor model, SD TC = α0 + α1M SP , and a two-factor model, SD TC = α0 + α1M SP1 + α2M SP2 .
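A minimal sketch of this statistical procedure on made-up annual data is shown below. The pentagon_area helper is our illustrative rendering of a pentagon-of-stabilization aggregation of the five normalized variables (the area spanned by five adjacent vertices at 72° apart), not the Authors' exact formula; the series, the regression, and the trend fit use only these toy inputs.

```python
import numpy as np

# Illustrative annual series for one country (2008-2019); all values are made up.
years = np.arange(2008, 2020)
msp = np.array([0.20, 0.18, 0.21, 0.23, 0.22, 0.24, 0.26, 0.27, 0.29, 0.30, 0.31, 0.33])
sdtc = np.array([0.25, 0.24, 0.27, 0.30, 0.29, 0.33, 0.36, 0.38, 0.41, 0.44, 0.46, 0.49])

def pentagon_area(gdp, hicp, u, debt, ca):
    """Illustrative pentagon-of-stabilization aggregate: the sum of the areas of
    the five triangles spanned by adjacent normalized vertices (inputs in [0, 1])."""
    v = np.array([gdp, hicp, u, debt, ca])
    return 0.5 * np.sin(2 * np.pi / 5) * float(np.sum(v * np.roll(v, -1)))

# Example: one year's M_SP from normalized indicator scores.
print(f"M_SP (one year, toy inputs) = {pentagon_area(0.8, 0.7, 0.6, 0.5, 0.7):.3f}")

# Pearson's R between macroeconomic stability and sustainable development.
r = np.corrcoef(msp, sdtc)[0, 1]
print(f"Pearson's R = {r:.3f}")

# OLS fit of the single-factor model SD_TC = a0 + a1 * M_SP.
X = np.column_stack([np.ones_like(msp), msp])
(a0, a1), *_ = np.linalg.lstsq(X, sdtc, rcond=None)
print(f"OLS: SD_TC = {a0:.3f} + {a1:.3f} * M_SP")

# Trend model SD_TC = b0 + b1 * t used for sub-hypothesis H1 (positive slope).
t = (years - years[0]).astype(float)
Xt = np.column_stack([np.ones_like(t), t])
(b0, b1), *_ = np.linalg.lstsq(Xt, sdtc, rcond=None)
print(f"Trend: SD_TC = {b0:.3f} + {b1:.3f} * t")
```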
Results of the research
The research sample consists of 44% Polish (146 039), 12% Czech and Romanian (39 424, 39 646), 9% Hungarian (28 926), 6% Bulgarian (20 625), 5% Slovak (15 266), 3% Slovenian, Lithuanian and Croatian (8 580, 11 286, 9 460), 2% Latvian (6 672) and 1% Estonian (4 806) transport companies (Figure 1).

Figure 2 presents SD TC from 2008 to 2019. All countries show a positive trend in SD TC over the analyzed period, which should be assessed as a favourable situation: activities in the transport sector undertaken for economic, social, and environmental development are effective and efficient. The highest dynamics is in Hungary (SD TC = 0.0523t + 0.2; R² = 0.9585) and in Estonia (SD TC = 0.052t + 0.2485). The SD TC fell during the economic crisis of 2008, and after 2012, it began to rise rapidly in all countries.

Figure 3 presents M SP in east EU countries. There is a positive trend in M SP in the analyzed countries. In most countries, its values slightly decreased during the crisis and then recovered.

Figure 4 presents the result of the correlations between M SP and SD TC . Pearson's R between SD TC and M SP is significant at p < 0.05. The highest correlation is in Slovenia (0.96), the lowest in Latvia (0.69). The correlations between the variables are either strong or very strong, which proves a high degree of relations between the variables.

Table 1 presents the OLS regression. All factors have a positive influence on transport companies. The highest impact of M SP1 is in Estonia (4.868), the lowest is in Romania (0.495). The highest impact of M SP2 is in Slovakia (2.392) and the lowest is in Estonia (0.087). In most countries, M SP1 and M SP2 are statistically significant (except M SP1 in Slovakia). The coefficient of determination (R²) is from 0.573 (M SP1 , M SP2 and SD TC in Czechia) to 0.981 (M SP1 , M SP2 and SD TC in Romania). M SP has a positive influence on the transport companies' sustainable development. The highest impact is in Slovakia (1.994), while the lowest is in Lithuania (0.691) (Table 1).

The results of the research allow confirming the research hypothesis (H). The study gathered evidence that macroeconomic stabilization has a statistically significant impact on transport companies' sustainable development from 2008 to 2019. According to Pearson's R, the highest level of dependence occurred in Slovenia (0.96), Bulgaria (0.9), and Slovenia (0.83). The lowest was in Latvia (0.69). The OLS regression results indicate that the highest impact of M SP on SD TC is in Slovakia (α1 = 1.994), while the lowest is in Lithuania (α1 = 0.691).
In the analyzed period in the eastern part of the European Union, there are positive phenomena that go hand in hand, as there are balanced economic growth and sustainable transport companies' development. Moreover, a lasting economic balance leads to an increase in social well-being and changes the conditions for doing business. The sub-hypothesis H1 is correct because, in all countries, the trend of SD TC is positive from 2009 to 2019. It means that entrepreneurs take actions for economic, social, and environmental development. The programs implemented by the European Union and the countries work well.
The sub-hypothesis H2 is true. In all analyzed countries, the dynamics of M SP are positive. This is the result of an improvement in the economic situation, an increase in investments, and improved consumer sentiment.
The sub-hypothesis H3 is wrong because only in Estonia is the highest mean value of the sustainable development of transport companies (SD TC = 0.59) accompanied by the highest average value of the macroeconomic stabilization indicator (M SP = 0.31).
The model with two explanatory variables, M SP1 and M SP2 , does not indicate which group of factors, internal (M SP1 ) or external (M SP2 ), is crucial for the sustainable development of transport companies. The highest impact of the internal factor is in Estonia (α1 = 4.868), while the lowest is in Romania (α1 = 0.495). The highest impact of the external factor is in Slovakia (α2 = 2.392), and the lowest is in Estonia (α2 = 0.087).
The sustainable development of transport companies is a very important research issue. This research focuses only on macroeconomic stability, which is a severe limitation. The most important conclusion is that the more advanced a country is, the more meaningful the demand for companies to comply with the SDGs.
Therefore, it is vital to create favorable circumstances for doing green business. From this perspective, the state authorities' role is necessary and essential for the countries' stable development in harmony with nature. Transparent legal regulations and substantive and financial support are also crucial for companies undertaking ecological investments.
Conclusions
The sustainable development of companies is conditioned by several factors, both internal and external. Internal factors include assets and financial possibilities, the adopted business model, the strategy, and the environmental management approach. External factors include the industry's competitiveness and ecological harmfulness, the socio-economic growth of the country and its future prospects, and legal regulations in environmental protection.
The research results indicate that macroeconomic stability (stable economic growth) is one of the factors determining the transport companies' sustainable development in east EU countries. Pearson's R and the OLS regression indicate a high correlation between macroeconomic stabilization and transport companies' sustainable development. From 2008 to 2019, the dynamics of SD TC and M SP are positive.
The research's significant limitation is that it does not consider the situation before the economic crisis and its impact on companies' sustainable development. Also, only one explanatory variable was included in the model. Therefore, further research should be carried out to identify the key determinants of companies' sustainable development. Moreover, the model considers only quantitative data, which is also a significant limitation.
The research results are useful for setting the direction of governments' economic and environmental policies and for managing companies. The directions of the states' economic policies should favour the freedom to conduct business, create appropriate legal regulations, and support the development of ecological investments. It is necessary to act for a stable and fair tax system and ensure access to finance.
Authorities should use regulatory mechanisms and market control, from corporate governance to verifying the public finances sector (only to create appropriate self-regulating mechanisms). Achieving macroeconomic stability is a challenging task, especially for developing economies. In countries where economic transformation has also taken place, it is crucial to conduct macroeconomic policy to support ecological and pro-social companies' initiatives. Macroeconomic stability strengthens the economy's position and is the starting point for ecological development and reducing the negative influence of economic activities on the natural environment. It affects the credit policy, which is essential for making new environmental investments.
From business managers' perspective, information about macroeconomic stabilization is vital in defining development strategies and building business models. Maintaining appropriate economic relations affects the moods and expectations of companies and customers. Persistent macroeconomic stabilization leads to an increase in society's welfare and changes the consumption model. Not only economic but also social and environmental issues are gaining in importance.
SD TC and M SP have a growing trend, which indicates that the actions taken so far in the analysed countries are right, although a more comprehensive approach to the development of economies is required. It seems that these countries, apart from taking care of economic development, need to implement environmental protection and community support policies more actively and effectively.
The sustainable development of transport companies is significant as this sector is responsible for some of the highest emissions of harmful substances into the environment. Moreover, the development of the transport sector influences other sectors of the economy.
The research shows the relationship between sustainable development and macroeconomic stabilization, which supports incorporating current and forecasted macroeconomic information into strategies and business models in business practice. The obtained results also indicate the tasks faced by state authorities, whose role in creating conditions for companies' stable and sustainable development is undeniable.
Macroeconomic stability is only one of the factors influencing the sustainable development of economic entities. It is necessary to conduct further analyses devoted to isolating the determinants of economic, social, and environmental decision-making by companies. Further research will focus on assessing the influence of determinants on the transport companies' sustainable development in the EU. It is also essential to identify the determinants of sustainable development in other companies and conduct a comparative analysis. | 2021-12-16T16:39:32.475Z | 2021-12-14T00:00:00.000 | {
"year": 2021,
"sha1": "e5698f5b1b95019d356307fb391a51d55583bdf6",
"oa_license": "CCBY",
"oa_url": "https://journals.vgtu.lt/index.php/JBEM/article/download/15913/10863",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "325e72590b96f267b6c111cc63b0b2bc9cb986a5",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
257377676 | pes2o/s2orc | v3-fos-license | Effect of the Tetravalent Dengue Vaccine TAK-003 on Sequential Episodes of Symptomatic Dengue
ABSTRACT. In the pivotal phase 3 efficacy trial (NCT02747927) of the TAK-003 dengue vaccine, 5 of 13,380 TAK-003 recipients and 13 of 6,687 placebo recipients experienced two episodes of symptomatic dengue between the first dose and the end of the study, ∼57 months later (patients received the second dose 3 months after the first dose). Two of these participants experienced repeat infection with the same serotype (i.e., homotypic reinfection). In comparison with placebo, the relative risk of a subsequent episode of symptomatic dengue was 0.19 (95% CI, 0.07–0.54) in TAK-003 recipients. Based on the small number of subsequent episodes, these data suggest a potential incremental effect of TAK-003 beyond prevention of the first episode of symptomatic dengue after vaccination.
INTRODUCTION
Dengue is a mosquito-borne viral disease that is endemic in more than 100 countries worldwide, predominantly in tropical and subtropical regions. 1 Infection by one of the four dengue virus (DENV) serotypes is thought to confer lifelong homotypic immunity, but may also confer temporary cross-protection against the heterotypic serotypes. 2,3 Hence, the study of sequential serotype-confirmed dengue infections in the same individual requires meticulous data collection in long-term or cohort studies. The clinical outcome of subsequent infections is believed to be influenced by many factors, such as the number of reinfections, the sequence of serotypes, and the time interval between the infections, in addition to the well-known risk of severe disease manifestation upon secondary infection. [3][4][5][6] The tetravalent, recombinant, live-attenuated dengue vaccine (TAK-003), which is based on a DENV serotype 2 (DENV-2) backbone, is currently being assessed in a large-scale, long-term, phase 3 efficacy trial in healthy children and adolescents living in dengue-endemic areas (NCT02747927). [7][8][9] The surveillance of febrile illnesses in this trial was designed to detect all symptomatic cases (both nonhospitalized and hospitalized dengue) throughout the trial, thus providing a unique opportunity for identifying symptomatic sequential infections. We have reported previously 10 that TAK-003 is efficacious against symptomatic dengue in both baseline seronegative and seropositive participants, with a profile of variable performance against individual serotypes. Efficacy was demonstrated against all four serotypes in the baseline seropositive subpopulation, and against DENV-1 and -2 in the baseline seronegative subpopulation. In the latter subpopulation, the available data did not suggest efficacy against DENV-3, and the case counts were too small to assess efficacy against DENV-4.
During the trial, vaccine efficacy was estimated using a Cox proportional hazards model, and only the first episodes of virologically confirmed dengue (VCD) were considered in the efficacy estimation. For serotype-specific efficacy, again only the first episodes caused by that specific serotype were considered. Because of the long duration of active febrile surveillance, we were able to document participants who experienced multiple VCD episodes from the first dose until 4.5 years after the two doses of trial vaccination. Herein, we describe these multiple episodes of symptomatic dengue, together with an exploratory estimate of the effect of vaccination on subsequent dengue episodes.
METHODS
The dengue episodes reported in this article were observed during a phase 3 randomized, double-blind, placebo-controlled trial of TAK-003 in eight countries in Latin America and Asia considered to be endemic for dengue (NCT02747927). During the trial, healthy children and adolescents age 4 to 16 years were randomized 2:1 to receive either two doses of TAK-003 (months 0 and 3) or placebo. Full details of enrollment criteria and trial procedures have been published previously. [7][8][9] The trial was conducted in accordance with the Declaration of Helsinki and the principles of Good Clinical Practice, and informed assent/consent was obtained from participants and their parents or legal guardians prior to enrollment.
In brief, the multipart trial had up to 4.5 years of follow-up for individual participants after administration of the two doses of TAK-003 or placebo, [7][8][9] and has another ongoing 25 months for those age 4 to 11 years at randomization who enrolled to participate in a follow-up booster evaluation phase. Febrile surveillance during all parts of the trial includes at least weekly contact with participants or their legal guardians for robust detection of all symptomatic dengue cases using serotype-specific reverse transcription-polymerase chain reaction (RT-PCR) testing of the acute samples. Virologically confirmed dengue was defined as a febrile illness (body temperature ≥ 38°C for 2 of any 3 consecutive days) or illness clinically suspected to be dengue by the investigator in association with a positive serotype-specific RT-PCR result. Severity of hospitalized dengue cases was assessed by an adjudication committee (severity criteria reported previously [7][8][9] ) and by a program that analyzed data to identify VCD cases meeting WHO 1997 dengue hemorrhagic fever criteria. 11 Dengue RNA was detected and quantified with a validated serotype-specific RT-PCR assay. The upper limit of quantification (ULoQ) was determined to be 85,714,286 genome copy equivalents per milliliter (log10[ULoQ] = 7.9) for all four DENV serotypes. Serostatus, based on the presence or absence of DENV neutralizing antibodies determined using a microneutralization test, was assessed at baseline for all participants. Microneutralization test results are expressed as the reciprocal of the dilutions of test serum that show a 50% reduction in plaque counts compared to the virus controls. Seropositivity was defined as a neutralizing antibody titer of ≥ 10 to at least one DENV serotype.
Descriptive details of VCD in participants who experienced multiple episodes between the first dose and 4.5 years after the second dose are presented. Relative risk was calculated as the number of events divided by the number of participants in the TAK-003 group, over the number of events divided by the number of participants in the placebo group. All the analyses presented in this manuscript are exploratory and post hoc in nature.
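As a sketch of this calculation, the snippet below reproduces the relative-risk point estimates from the case counts reported in the Results, with a log-scale Wald 95% confidence interval; the interval method is our assumption, since the manuscript does not state which CI procedure was used, although it reproduces the reported intervals.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A vs. group B with a log-scale Wald 95% CI."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)  # SE of log(RR)
    lo, hi = rr * math.exp(-z * se), rr * math.exp(z * se)
    return rr, lo, hi

# Safety set: 5/13,380 TAK-003 vs. 13/6,687 placebo recipients with two episodes.
print("Safety set: RR = %.2f (95%% CI, %.2f-%.2f)" % relative_risk(5, 13380, 13, 6687))

# Participants with at least one VCD episode: 5/442 vs. 13/547 (counts from Results).
print("First-episode subset: RR = %.2f (95%% CI, %.2f-%.2f)" % relative_risk(5, 442, 13, 547))
```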
RESULTS
In total, 13,380 participants in the TAK-003 group and 6,687 in the placebo group received at least one dose of TAK-003 or placebo between September 2016 and March 2017, and were included in the safety population by vaccination group. An additional four participants received both TAK-003 and placebo as a result of an administrative error. Of the 20,063 participants who were tested at baseline, 5,547 (27.6%) were seronegative to all four serotypes. 8 In the 57 months after the first dose, 442 participants (3.3%) in the TAK-003 group and 547 (8.2%) in the placebo group experienced at least one VCD episode (Table 1). No coinfections with multiple serotypes were noted. Five participants (0.04%) in the TAK-003 group and 13 (0.19%) in the placebo group experienced two VCD episodes during this time frame, resulting in a relative risk of a subsequent VCD episode versus placebo of 0.19 (95% CI, 0.07-0.54) based on the safety set, and 0.48 (95% CI, 0.17-1.32) based on the population of participants who had experienced at least one VCD episode. Although the latter analysis involved a postrandomization subset population and a statistically inconclusive estimate, both estimates are directionally similar and suggest a favorable effect.
Ten of the sequential episodes of VCD occurred at the sites in the Philippines, four in Colombia, three in Sri Lanka, and one in Thailand ( Table 2). Ten of these 18 participants were seropositive at baseline. All but two of the participants who experienced two episodes during the trial were children age 4 to 8 years at the time of randomization. Subsequent episodes occurred 46 to 1,181 days after the first episode recorded in the trial (mean, 500 days). Of the subsequent episodes, five were DENV-1, seven were DENV-2, three were DENV-3, and three were DENV-4. Details of the treatment group are not provided to prevent unblinding in the ongoing study. There were two instances of homotypic VCD: one DENV-3 case occurring 763 days after the first episode in a male participant from the Philippines who was 12 years old at enrollment and seropositive at baseline, and one DENV-1 case occurring 207 days after the first episode in a 7-year-old seronegative female participant from Colombia.
In the 18 participants who experienced two VCD episodes, five of the first episodes after trial vaccination were diagnosed clinically as dengue by the investigators, and three participants were hospitalized (one episode each of DENV-1, -2, and -3). Two of the subsequent episodes (DENV-1 homotypic reinfection and DENV-4 following DENV-2) were diagnosed clinically as dengue and both required hospitalization. One of the first VCD episodes (DENV-3 in a baseline seronegative participant) was classified as severe by the adjudication committee; none of the other episodes were classified as severe or dengue hemorrhagic fever.
DISCUSSION
Active febrile surveillance over 57 months in this phase 3 efficacy trial enabled the identification of 18 multiple, symptomatic dengue infections. Although this is a small number of cases, the distribution by vaccination group (five in the TAK-003 group versus 13 in the placebo group; 2:1 randomization ratio) provides some evidence of the lower risk of a subsequent symptomatic dengue episode in people who have postvaccination breakthrough cases. This incremental effect of TAK-003 is relevant because people living in dengue-endemic countries are at risk of multiple, sequential dengue infections during their lifetime.
In earlier reports from this ongoing trial, we noted a trend of a lower proportion of breakthrough cases presenting with dengue-relevant clinical characteristics, such as signs of plasma leakage, thrombocytopenia, or signs of bleeding, in vaccinees compared with the placebo group. 9 These data, together with the data reported herein, suggest that the immune response to TAK-003 may have an attenuating effect on the clinical manifestation of dengue infections. It is plausible to hypothesize that TAK-003 vaccination and subsequent breakthrough infections might help in transitioning to a postsecondary-like state so that subsequent infections are less symptomatic. In this context, the breakthrough infections (both symptomatic and asymptomatic) in vaccinated individuals can potentially serve as natural boosters. Younger participants (4-11 years at enrollment), who made up the majority of these sequential cases, are now being evaluated further after the administration of a booster dose in the ongoing trial. Among the 18 sequential episodes, the majority of the cases (i.e., 13 of 18 first episodes and 12 of 18 subsequent episodes) were caused by DENV-1 or -2. This reflects the data in the placebo group over 57 months in eight dengue-endemic countries, in which these two serotypes accounted for the majority of dengue cases (423 of 560). This observed serotype distribution pattern also aligns generally with decades of dengue epidemiology globally. 12 Notably, we observed no clear patterns in the causative serotype of sequential episodes, but we did record two cases of homotypic reinfection. The first case was a DENV-3 infection in a baseline seropositive participant who most likely had at least one DENV-2 infection prior to enrollment in the trial, based on the baseline neutralizing titers (DENV-1, 105; DENV-2, 4,374; DENV-3, 146; and DENV-4, 330), although this was not confirmed. This observation is particularly interesting because TAK-003 vaccination did not show efficacy against DENV-3 in baseline seronegative participants. 7,9 Most DENV-3 cases in the trial, including the homotypic VCD reinfection, were reported at the sites in the Philippines. The second case was a DENV-1 reinfection in a baseline seronegative participant who was hospitalized for the subsequent episode but not the first. Although it is believed that dengue infections provide complete and lifelong protection against the same serotype, recent findings have questioned this existing dogma. 13 This may be, in part, because such cases are difficult to detect outside settings with a long duration of follow-up and laboratory testing of all febrile illnesses with serotype-specific PCR. The potential for homotypic reinfection poses additional complexities in dengue vaccine development.
The majority of dengue infections tend to be asymptomatic 14 ; hence, it is likely there were many more subsequent asymptomatic infections than the 18 symptomatic sequential cases identified in the trial. These asymptomatic cases might have also altered the immunological profile of the trial population to some extent. However, we believe the placebo control minimizes any potential bias on our conclusions. In addition, the few symptomatic sequential cases in the TAK-003 group did not allow for robust comparison of symptoms between the earlier and the later episodes. These are some of the limitations in this exploratory analysis besides the overall small number of cases.
CONCLUSION
In conclusion, the available data suggest that TAK-003 vaccination resulted in a reduced risk of experiencing sequential episodes of symptomatic dengue in children and adolescents age 4 to 16 years in dengue-endemic areas. These data indicate some potential benefit even in vaccine recipients who might experience breakthrough symptomatic dengue. | 2023-03-08T06:18:24.775Z | 2023-03-06T00:00:00.000 | {
"year": 2023,
"sha1": "95953c5e1957b3bbe6e4906e3163400d5ada4832",
"oa_license": "CCBY",
"oa_url": "https://www.ajtmh.org/downloadpdf/journals/tpmd/aop/article-10.4269-ajtmh.22-0673/article-10.4269-ajtmh.22-0673.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8d06fc65f6b9058e745b1ebedc2f5b35daebea98",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8443097 | pes2o/s2orc | v3-fos-license | Clinical implications of SPRR1A expression in diffuse large B-cell lymphomas: a prospective, observational study
Background Certain markers have been identified over the last 10 years that facilitate the prediction of a patient’s prognosis; these markers have been proposed to be useful for risk stratification of lymphoma patients and for the development of specific therapeutic strategies. In the present study, we assessed the potential prognostic value of SPRR1A expression in 967 patients with diffuse large B-cell lymphomas. Methods All patients were enrolled between 2001 and 2007 (median follow-up, 53.3 months) in the Second Hospital of Dalian Medical University, First Hospital of China Medical University, and Liaoning Cancer Hospital. Immunohistochemical analysis was used to evaluate the expression of SPRR1A. Survival was analyzed using the Kaplan–Meier method. Multivariate analysis was conducted to adjust the effect of SPRR1A expression for potential, well-known, independent prognostic factors. Results Of the 967 patients examined, SPRR1A expression was detected in 305 (31.54%) patients on immunohistochemical analysis. The 5-year survival rate was significantly lower in patients with SPRR1A expression than in those without (26.9% vs. 53.2%, P < 0.001). Multivariate analysis identified SPRR1A expression as an independent predictor of survival in addition to lactate dehydrogenase level, clinical stage, and histologic subtype. Conclusions SPRR1A expression may be useful as a prognostic factor for diffuse large B-cell lymphoma.
Background
Diffuse large B-cell lymphomas (DLBCL) constitute a heterogeneous category of aggressive lymphomas [1] that are diagnosed based on the morphology and immunophenotype [2,3], and represent 30-40% of cases of adult non-Hodgkin's lymphoma [4]. Although the use of combination chemotherapy has improved the outcomes of DLBCL, many patients do not achieve complete remission (CR) and ultimately relapse. Therefore, it is important to determine factors that can assist with the identification of patients at a high risk of recurrent disease [5].
Immunohistochemical tests are routine procedures for the diagnosis of several malignancies and are considered to be essential in cases of lymphoma. Certain markers have been identified over the last 10 years that facilitate the prediction of a patient's prognosis. These markers have been proposed to be useful for risk stratification of lymphoma patients and for the development of specific therapeutic strategies. Molecular abnormalities of the cell death-cell viability balance, as reflected in bcl-2 overexpression [6][7][8][9] or p53 mutation [10,11], have emerged as important prognostic indicators of DLBCL.
Small proline-rich (SPRR) proteins are characterized by an unusually high content of proline residues and were originally identified in cultured keratinocytes as ultraviolet-inducible genes [12,13]. Several studies have suggested that SPRRs are related to increased epithelial proliferation and malignant processes and are markers for terminal squamous cell differentiation, although they also function in nonsquamous tissues [14]. Moreover, primary basal cell carcinomas, squamous cell carcinomas, and thin melanomas have been reported to exhibit a considerably higher level of SPRR1A gene expression [15].
In the present study, we examined 967 specimens obtained from patients with DLBCL to investigate SPRR1A expression and its prognostic value.
Patient selection
The present study included patients (n = 2456) with a pathologically confirmed DLBCL diagnosis who were treated in the First Hospital of China Medical University, Fourth Hospital of China Medical University, or Liaoning Province Cancer Hospital between January 1, 2001, and December 31, 2007. At the time of the analysis, 39% of the slides were available for pathologic review, and 967 patients were considered to have DLBCL (centroblastic, immunoblastic, or anaplastic).
Disease dissemination was evaluated before treatment by physical examination, bone marrow (BM) biopsy, and computed tomography of the chest and abdomen. Patients were staged according to the Ann Arbor system. The number of extranodal sites and larger tumor mass diameters were also determined. Performance status was assessed according to the Eastern Cooperative Oncology Group scale: 0, patient had no symptoms; 1, patient had symptoms but was ambulatory; 2, patient was bedridden for less than half of the day; 3, patient was bedridden for half of the day or longer; and 4, patient was chronically bedridden and required assistance with activities of daily living. Performance status was then classified as 0-1 (the patient was ambulatory) or 2-4 (the patient was not ambulatory).
All the patients provided written informed consent, and the study protocol and the sample collection were approved by the Ethics Committee of China Medical University.
Assessment of response
The primary endpoint was overall survival. Response to therapy was evaluated after the initiation of treatment. CR was defined as the disappearance of all clinical evidence of disease and normalization of all laboratory values, radiographs, computed tomography scans, and BM biopsy findings.
Histologic and immunophenotypic study
The histologic diagnosis of DLBCL was independently determined by 3 pathologists. The diagnosis was based on morphologic examination of slides from routinely processed paraffin-embedded samples stained with hematoxylin-eosin, Giemsa, and Gordon-Sweet stains and on immunophenotyping results. The immunohistochemistry panel consisted of antibodies against CD20, CD10, CD3, CD5, BCL2, BCL6, IRF4/MUM1, human leukocyte antigen (HLA)-DR, and Ki-67.
Staining for CD20, CD3, CD10, and HLA-DR was scored as positive or negative. Each individual case was unanimously scored as negative or positive by the 3 independent investigators, scored using the 2 matching scores when the third investigator did not agree, or recorded as "not evaluable" for a given antigen when there was no agreement between the investigators.
Patient selection for SPRR1A expression
SPRR1A expression in DLBCL was analyzed when there were 4 unstained slides available for that case. Only formalin-fixed specimens were selected, while specimens fixed in Bouin's fluid were excluded. To avoid bias related to treatment, only patients treated with CHOP were included in the study. Overall, 967 cases were studied for SPRR1A expression by immunohistochemical analysis.
Immunohistochemical analysis
Thin slices of tumor tissue for all cases were fixed in 4% formaldehyde solution (pH 7.0) for a duration that did not exceed 24 hours. The tissues were processed in a routine manner for paraffin embedding, and 4-μm thick sections were cut and placed on glass slides coated with 3-aminopropyl triethoxysilane for immunohistochemical analysis. The sections were mounted on microscope slides, air dried, and then fixed in a mixture of 50% acetone and 50% methanol. The sections were then de-waxed with xylene, gradually hydrated with gradient alcohol, and washed with phosphate-buffered saline (PBS). Sections were then incubated for 60 minutes with the primary antibody. After repeated washing with PBS, the sections were incubated for 30 minutes with the secondary biotinylated antibody (Multilink Swine anti-goat/mouse/rabbit immunoglobulin; Dako, Inc.). Thereafter, the avidin-biotin complex (1:1000 dilution; Vector Laboratories, Ltd.) was applied to the sections for 30-60 minutes at room temperature. The immunoreactive products were visualized by catalysis of 3,3′-diaminobenzidine with horseradish peroxidase in the presence of H2O2, after extensive washing. Sections were then counterstained in Gill's Hematoxylin and dehydrated in ascending grades of methanol, prior to clearing with xylene and mounting under a coverslip.
Statistical analysis
Patient characteristics were compared using the Chi-square test. Overall survival was analyzed using the Kaplan-Meier method. The log-rank test was used to analyze survival differences. Multivariate analysis was conducted to adjust the effect of SPRR1A expression for potential independent prognostic factors (age, sex, extranodal sites, performance status, clinical stage, bulky disease [>10 cm], evolution, lactate dehydrogenase level, and SPRR1A expression) using the Cox proportional hazards model with forward stepwise selection. A P value of <0.05 was considered statistically significant. All data were analyzed using SPSS (Version 17.0; SPSS Inc., Chicago, IL, USA).
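A minimal sketch of this survival workflow using the Python lifelines package (an assumption for illustration; the study itself used SPSS 17.0), with a tiny hypothetical data frame of follow-up times, death indicators, SPRR1A status, and two of the adjustment covariates:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical patient-level data: follow-up in months, death event flag,
# SPRR1A status (1 = positive), and two illustrative adjustment covariates.
df = pd.DataFrame({
    "months":  [12, 53, 60, 8, 47, 60, 22, 60, 31, 55],
    "death":   [1, 0, 0, 1, 1, 0, 0, 0, 1, 1],
    "sprr1a":  [1, 0, 0, 1, 1, 0, 1, 0, 1, 0],
    "stage34": [1, 0, 0, 1, 1, 0, 1, 0, 0, 1],   # advanced clinical stage
    "ldh_hi":  [1, 0, 1, 1, 0, 0, 0, 0, 1, 0],   # elevated lactate dehydrogenase
})

# Kaplan-Meier curve for the SPRR1A+ group and a log-rank comparison.
pos, neg = df[df.sprr1a == 1], df[df.sprr1a == 0]
km = KaplanMeierFitter()
km.fit(pos["months"], pos["death"], label="SPRR1A+")
print(km.survival_function_)
result = logrank_test(pos["months"], neg["months"], pos["death"], neg["death"])
print(f"log-rank p = {result.p_value:.4f}")

# Cox proportional hazards model adjusting SPRR1A for the other covariates
# (this toy sample is far too small for a stable fit; it only shows the calls).
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()   # hazard ratios with 95% CIs
```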
SPRR1A expression in diffuse large B-cell lymphomas
The median age of the population was 56 years. Except for histologic subtype, the clinical characteristics were well balanced between the 2 groups (SPRR1A − and SPRR1A + groups) ( Table 1). We found that the lymphoma tissue was positive for anti-SPRR1A staining in 305 (31.54%) cases ( Figure 1).
Prognostic value of SPRR1A expression
The hazard ratio for death was 1.792 (95% CI, 1.364-3.778; P < 0.001) in the SPRR1A + group ( Table 2, univariate analysis). After adjustment for the 11 baseline variables with the use of Cox regression analysis, the hazard ratio remained similar ( Table 2, multivariate analysis).
Expectedly, the multivariate analyses showed that clinical stage, lactate dehydrogenase level, and SPRR1A expression were independent prognostic factors ( Table 2, multivariate analysis).
Predictive value of the expression of SPRR1A for germinal center B-cell-like/non-germinal center B-cell-like diagnosis
Considering histologic diagnosis as the gold standard method for diagnosis, the sensitivity and specificity of SPRR1A − for the diagnosis of the germinal center B-cell-like (GCB) subtype were assessed. In the present study, we determined that SPRR1A expression is a predictive factor for overall survival in DLBCL. The classic prognostic factors for aggressive B-cell lymphomas (i.e., advanced clinical stage, unfavorable performance status, elevated lactate dehydrogenase values) and, consequently, the International Prognostic Index score were significantly associated with a poor outcome in these patients. Importantly, the multivariate analyses demonstrated that the influence of SPRR1A expression on overall survival was independent of these well-established prognostic factors. Moreover, SPRR1A can be used to determine the GCB and non-GCB subtypes of DLBCL. | 2017-06-20T21:40:25.220Z | 2014-05-14T00:00:00.000 | {
"year": 2014,
"sha1": "745cea0eed1f2bb0d70bdc15503038440e5b5fed",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/1471-2407-14-333",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "745cea0eed1f2bb0d70bdc15503038440e5b5fed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248093586 | pes2o/s2orc | v3-fos-license | Meshing Stiffness Calculation of Disposable Harmonic Drive under Full Load
Mechanical equipment in the field of aerospace that is used only once is called disposable machinery. As a piece of typical disposable machinery, the disposable harmonic gear exhibits stiffness failure under large loads. This manuscript distinguishes the disposable harmonic gear from the conventional harmonic gear in terms of the application environment and structure. This paper then determines the single-tooth stiffness of the disposable harmonic gear under full load by using the non-uniform beam model and the improved energy method. In addition, the multi-tooth meshing in the disposable harmonic drive is considered, and the improved energy method is modified accordingly. The normal contact force and comprehensive elastic displacement at each meshing position are also calculated according to the finite element model, and curves of the single-tooth stiffness and the comprehensive meshing stiffness are obtained. The theoretical results of the modified analytical method and the FEM are compared to verify the correctness of the proposed method for calculating the meshing stiffness of the disposable harmonic drive. Finally, the FEM is used to obtain the failure form of the disposable harmonic gear under overload.
Introduction
Different from conventional long-running, reusable machinery, machinery that is not reused is termed disposable machinery. Such machines are designed to have a very short service life (measured in "minutes") and operate under short-term high loads as their normal working condition. The harmonic reducer in the disposable electromechanical actuator is a key component that determines the performance of the system. Harmonic gears transmit motion and power through deformation waves caused by flexible parts with controllable deformation. Harmonic drives (HDs) have the advantages of a large transmission ratio, high transmission accuracy, and small volume and weight, which are of great significance for applications requiring a high power-to-weight ratio [1]. The short-term extreme load limit and the dynamic properties of the disposable harmonic gear transmission have gradually developed into a focus of research in this area [2]. Due to the extremely short service life, the flexible wheel's high-cycle fatigue failure, which is the main concern in research on conventional harmonic gears, will not appear in the disposable harmonic gear. Therefore, determining the meshing stiffness of harmonic gears is an important direction of research on full-load harmonic drives.
Many papers have discussed the meshing stiffness associated with gear transmission. The harmonic drive is characterized by a straight tooth profile, an extremely thin rim, and the simultaneous meshing of multiple teeth. Research findings on the meshing stiffness of cylindrical gears with a thin rim and a high contact ratio can therefore be used for reference. Most research on the stiffness of cylindrical gear transmission to date has used the analytical method (AM) or the finite element method (FEM). As early as 1987, Yang et al. [3] proposed decomposing the total energy of gear meshing into Hertz contact energy, bending energy, and axial compression energy. Subsequent research on the meshing stiffness of gear transmission built on Yang's conclusions. Considering the shear energy generated by the component of the contact force on the gear teeth, Tian [4] introduced shear stiffness into Yang's model. Based on the models proposed by Weber [5], Attia [6], and Cornell [7], Sainsot et al. [8] calculated the offset caused by the action of the gear teeth when the gear foundation is subjected to a force. Fakher [9] introduced the offset energy and the offset stiffness to calculate the gear meshing stiffness based on Sainsot et al.'s research. Sun et al. [10] divided the spur gear into thin slices along the tooth width and modified the meshing stiffness model by considering the influences of lead crowning relief and tip relief. Wang et al. [11] equated contact among the teeth with the elastic contact of a spring to study the influence of the web width, the web hole radius, and the crack length on the time-varying mesh stiffness (TVMS) of a spur gear with webbing according to the potential energy method. This method helps to analyze the meshing stiffness when the gear foundation is shared by multiple meshing teeth. Considering the periodically varying load distribution in tooth surface wear, Chen et al. [12] established a new model for calculating the TVMS of external spur gears. The model also discussed the effect of surface wear on stiffness. Sánchez et al. [13] studied the load distribution and meshing stiffness of standard and high-contact-ratio spur gears after profile modifications. The finite element method is also commonly used to solve the problems of tooth deformation and meshing stiffness of gear drives by employing finite element analysis software. Ma et al. [14] used ANSYS to establish a FEM of a cracked spur gear transmission to analyze the influence of extended tooth contact on the meshing stiffness owing to flexible teeth. This model could analyze the meshing stiffness of multi-tooth meshing under high torque, which aided the establishment of the FEM in this paper. Based on the Quasi-static Algorithm (QSA), Zhan et al. [15] proposed an integrated CAD-FEM-QSA system to analyze the TVMS of gears. Compared with traditional methods, this technique had higher precision and efficiency. Chen et al. [16] proposed a FEM of the meshing stiffness of a spur gear by considering complex paths for gear and crack propagation based on finite element theory and the contact analysis of loaded teeth. The model proved that the meshing stiffness is affected by the rim thickness. The relationship between the meshing stiffness of the gear and the thickness of its web and fracture mode was also studied.
Considering that the FEM is less efficient at solving the TVMS of gears than the AM, and that its results are affected by such factors as the division of the finite element mesh, the FEM is generally used as a supplement to the AM.
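To make the potential energy method concrete, the sketch below combines the Hertzian, bending, shear, and axial-compression compliances of a meshing tooth pair in series and sums the stiffnesses of simultaneously meshing pairs in parallel; all numerical values are placeholders, not parameters of the gear studied here, and the final k = F_n/δ identification mirrors the FEM extraction used later in the paper.

```python
def pair_stiffness(kh, kb1, ks1, ka1, kb2, ks2, ka2):
    """Stiffness of one meshing tooth pair: the Hertzian contact stiffness and
    the bending, shear, and axial-compression stiffnesses of the two mating
    teeth act as springs in series."""
    compliance = 1/kh + 1/kb1 + 1/ks1 + 1/ka1 + 1/kb2 + 1/ks2 + 1/ka2
    return 1.0 / compliance

# Placeholder component stiffnesses (N/m) for two simultaneously meshing pairs.
pairs = [
    dict(kh=4.0e8, kb1=6.0e8, ks1=9.0e8, ka1=3.0e9, kb2=5.5e8, ks2=8.5e8, ka2=2.8e9),
    dict(kh=4.0e8, kb1=7.0e8, ks1=1.0e9, ka1=3.2e9, kb2=6.5e8, ks2=9.5e8, ka2=3.0e9),
]

# Comprehensive meshing stiffness: pairs in simultaneous contact act in parallel.
k_pairs = [pair_stiffness(**p) for p in pairs]
k_total = sum(k_pairs)
for i, k in enumerate(k_pairs, start=1):
    print(f"tooth pair {i}: k = {k:.3e} N/m")
print(f"comprehensive meshing stiffness: {k_total:.3e} N/m")

# A stiffness can also be identified from a finite element solution as
# k = F_n / delta (normal contact force over comprehensive elastic displacement).
Fn, delta = 1.2e3, 5.0e-6           # placeholder FEM outputs, N and m
print(f"FEM-identified stiffness: {Fn / delta:.3e} N/m")
```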
Most of the analytical methods mentioned above focus on the meshing stiffness of spur cylindrical gears. The above methods and models confirmed that the meshing stiffness is affected by the thickness of the rim and the number of meshing teeth, which established the foundation for the research on the mesh stiffness of harmonic drive in this paper. In general, the flexible wheel in HD is prone to fatigue failure, because of which less research has been conducted on the stiffness of large-load flexible wheels. Considering the application environment and the extremely thin rim structure, it is necessary to study the stiffness of the transmission of a disposable harmonic gear. Gear deformation is the research basis for the meshing stiffness of the harmonic drive, and flexible wheel deformation occupies an important part of the total deformation. Ma et al. [17] discussed the flexible wheel deformation characteristics in HD under different driving speeds. Dong simplified the flexible wheel and obtained the strain and stress on the front, middle, and rear sections of the flexible rim [18]. Based on the finite element model, Kayabasi [19] analyzed the maximum stress and position of the flexible wheel during transmission so that the flexible tooth profile could be optimized. On the basis of the involute tooth profile optimization method proposed by Dong [20], Chen et al. [21] combined the variable section beam element and shell element. Then, ANSYS was used to analyze the deformation of the flexible teeth and neutral layer after assembly and transmission. In addition, they also studied the influence of wave generators with different shapes on the flexible middle plane's deformation [22,23]. The variable-section beam and shell models mentioned in the above studies are common methods for the calculation of the deformation and stiffness of the flexible wheel. By analyzing the global sensitivity of the harmonic drive, Hrcek et al. [24] discussed the effects of geometric parameters, such as the module, tooth height, and rim thickness, on lost motion and torsional stiffness. Hu et al. [25] considered the ring flexibility of thin-walled gears, and divided the inner ring gear into multiple curved beams to establish the meshing stiffness model of a thin-walled flexible ring gear. In addition, he also analyzed the influence of the ring gear thickness and cross-sectional shape on meshing stiffness. Dong et al. [26] analyzed the elastic motion behavior of the flexible middle plane under no load and a small load. Subsequently, Gravagno et al. [27] proposed a new method to calculate the tension of the flexible neutral layer in HD and studied the relationship between the bending stress and circumferential strain of the flexible wheel and rollers of the wave generator. Tjahjowidodo [28] established a harmonic drive torsional compliance model to accurately capture the hysteresis in the torsional stiffness. Then, Zhang et al. [29] established a model for the compliance behavior of the flexible wheel to analyze the torsional compliance and stiffness of the harmonic drive system. Rheaume et al. [30,31] used finite element software to establish a numerical model of a harmonic gear to obtain the torsional stiffness and discussed the influence of geometric parameters on stiffness. Timofeev et al. [32] considered the deformation, processing error, and meshing characteristics of the flexible wheel to establish a mathematical model of harmonic gear transmission, and studied the torsional stiffness of HD. 
Studies on high-torque harmonic drives were also beneficial to this paper. Ma et al. [33] established a FEM to study the meshing stiffness and torsional stiffness of HD. In addition, they also analyzed the influence of torque on the meshing teeth and meshing length. The study showed that the number of meshing teeth increases with increasing load. Ma et al. [34] established an integrated system to analyze the meshing characteristics of a harmonic drive with multiple teeth in contact at the same time. Using a FEM, Wei et al. [35] combined the static and dynamic contact characteristics of the harmonic gear to obtain the meshing stiffness of HD. Their work provided a basis for the dynamic analysis of HD. At present, research on the meshing stiffness of HD mostly adopts the finite element method, and the stiffness of the disposable harmonic gear is rarely addressed. Therefore, it is necessary to propose a theoretical method to calculate the meshing stiffness of the disposable harmonic gear.
Disposable machinery is a burgeoning development field. Different from the traditional HD, the disposable HD applied to high loads has a higher load capacity, a smaller volume and weight, and a shorter service life. In a previous study, we discussed the contact characteristics of the disposable HD from the standpoint of strength and analyzed the no-load backlash, load distribution, and contact stress of the HD [36]. However, research on the stiffness of the disposable HD has not yet been carried out. Combining the research results on strength and stiffness can help to establish a design theory for the disposable harmonic drive. Therefore, the comprehensive meshing stiffness of the disposable HD under full-load operation is studied in this work. Taking into account the cost of a disposable HD, the involute curve, which is more convenient to process, was selected as the flexible tooth profile. In Section 2, the structure of the disposable harmonic gear is discussed. In Section 3, the non-uniform beam model and the improved energy method are used to calculate the single-tooth stiffness of a disposable harmonic drive under full load; moreover, considering the influence of multi-tooth meshing, the improved energy method is modified to obtain the comprehensive meshing stiffness, and the stiffness curves obtained by the two methods are compared. Finally, a loaded harmonic gear is represented by a three-dimensional (3D) FEM, and the normal force and comprehensive elastic displacement during engagement are extracted in Section 4. The failure mode of the disposable flexible wheel under overload is also discussed by FEM. The conclusions of this work are generalized in Section 5.
Design of the Disposable Harmonic Flexible Wheel
The disposable HD consists of a flexible wheel, a rigid wheel, and a wave generator. Figure 1 shows a conventional cup harmonic gear. A long cup flexible wheel is used in the conventional harmonic reducer to reduce the stress concentration at the bottom and extend the service life. However, this structure is contrary to the requirement of a high power-to-weight ratio for a disposable harmonic gear. Considering that the disposable harmonic reducer is used for high loads and has a very short service life, and in order to improve the ultimate bearing capacity of a disposable harmonic drive of limited size, a straight flexible wheel with complex wave transmission was selected (see Figure 2). This type of flexible wheel can be treated as equivalent to a thin-walled external gear. It increases the bearing torque of the disposable harmonic drive while compressing the axial dimension. In addition, it decreases the torsional hysteresis caused by the deformation of the cylinder, improving the transmission accuracy and meeting the requirements of the disposable harmonic reducer. Therefore, the structure of a complex wave harmonic drive is more suitable for a disposable harmonic gear under full-load operation. High-cycle fatigue failure is usually not a concern for a disposable HD, but low-cycle failure should be emphasized. Medium-carbon alloy steels, such as 40Cr, 40CrNiMo, and 30CrMnSiA, are the first choice for flexible wheels; Cr, Ni, Mo, Mn, and Si refine the metal grains and thus improve the stiffness and toughness of the steel. In this paper, 40CrNiMoA under a quenching-and-tempering process was selected as the material of the disposable flexible wheel. The stress-strain curve obtained by the tension and compression test of 40CrNiMoA is shown in Figure 3, and the yield strength σ_s was determined to be 960 MPa.
Tooth Profile Design of the Disposable Harmonic Rigid Wheel
According to the working characteristics of disposable harmonic drives, several simplifying assumptions are made in this paper. Based on these assumptions, and on the premise that the flexible tooth profile and the shape of the wave generator have been determined, the envelope method is used to calculate the rigid tooth profile (see Figure 4). There are three coordinate systems in Figure 4: the wave generator coordinate system C(O; x, y, z), the flexible wheel coordinate system C_r(O_r; x_r, y_r, z_r), and the rigid wheel coordinate system C_g(O_g; x_g, y_g, z_g). O and O_g coincide with the rotation center of the harmonic drive, while O_r is the intersection of the symmetry line of the deformed flexible tooth and the neutral layer of the flexible wheel rim. y_r is the symmetry axis of the flexible tooth, and y_g is the symmetry axis of the rigid tooth cogging. The motion trajectory in C_g of a point on the original (pre-assembly) curve was recorded; the envelope of the curve family swept out as the flexible tooth profile moved along this trajectory in C_g is the rigid wheel tooth profile. The elliptical cam wave generator is a wave generator with an ellipse as the basic profile of the cam. Compared with other wave generators, the elliptical cam wave generator ensures the excellent performance of the harmonic drive and is easy to process. The function of the original curve after assembly can be expressed in terms of r_m, the neutral layer radius of the flexible wheel rim, ω*_0, the radial deformation coefficient of the flexible wheel, and m, the gear module. The radial deformation of point H at the neutral layer of the flexible wheel rim after assembly then follows, and, according to Equation (1), so does the included angle between the vector radius and the curvature radius through point H. In the flexible coordinate system, the flexible tooth profile curve can then be calculated.
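For illustration, the deformed neutral-layer curve can be evaluated numerically. The sketch below assumes the standard elliptical-cam form ρ(φ) = r_m + ω*_0 m cos(2φ), consistent with the parameters named above; the numeric values are placeholders, not taken from this paper.

import numpy as np

def neutral_layer_radius(phi, r_m, w0_star, m):
    # Deformed neutral-layer radius after assembly, assuming the standard
    # elliptical-cam form rho(phi) = r_m + w0* * m * cos(2*phi).
    return r_m + w0_star * np.cos(2.0 * phi) * m

# Placeholder parameters (illustration only).
r_m, w0_star, m = 30.0, 1.0, 0.5            # mm, dimensionless, mm
phi = np.linspace(0.0, np.pi / 2.0, 181)
rho = neutral_layer_radius(phi, r_m, w0_star, m)
w = rho - r_m                                # radial deformation of a rim point
print(f"radial deformation ranges from {w.min():.3f} to {w.max():.3f} mm")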
The flexible tooth profile and the wave generator profile curve are shown in Figures 5 and 6, respectively. According to the envelope theory of the harmonic drive, the rigid tooth profile conjugated with the disposable flexible wheel can be obtained. For the disposable harmonic drive, the angles φ_r and φ_g are related through the tooth numbers Z_r and Z_g of the flexible and rigid wheels, respectively. Then, the transformation matrix from the flexible wheel coordinate system to the rigid wheel coordinate system can be written. The discretized flexible tooth profile curve was substituted into Equation (8), and a series of curve clusters over 0 ≤ φ ≤ π/2 was obtained. The envelope curve of the curve cluster, obtained through program calculation, was defined as the rigid tooth profile.
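The envelope calculation lends itself to a simple numerical procedure: sweep the discretized flexible tooth profile through the family of coordinate transformations and keep, in each angular bin, the extremal boundary reached by any family member. The sketch below is a generic outline only; the transform stub stands in for the paper's flexible-to-rigid transformation of Equation (8) and is an assumption, not the actual matrix.

import numpy as np

def transform(points, phi, ratio):
    # Planar rigid-body transform of profile points for envelope parameter phi.
    # Stub: a pure rotation by (1 - ratio) * phi stands in for the paper's
    # flexible-to-rigid coordinate transformation (Equation (8)).
    a = (1.0 - ratio) * phi
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return points @ R.T

def envelope(profile, phis, ratio, n_bins=400):
    # Envelope of the curve family: for each polar-angle bin keep the
    # smallest radius reached by any family member (inner boundary).
    theta_edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    r_min = np.full(n_bins, np.inf)
    for phi in phis:
        pts = transform(profile, phi, ratio)
        th = np.arctan2(pts[:, 1], pts[:, 0])
        r = np.hypot(pts[:, 0], pts[:, 1])
        idx = np.clip(np.digitize(th, theta_edges) - 1, 0, n_bins - 1)
        np.minimum.at(r_min, idx, r)     # unbuffered elementwise minimum
    return theta_edges[:-1], r_min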
The rigid tooth profile obtained by the envelope method is shown in Figure 7. The thin curve family in this figure is the motion trajectory of the flexible tooth, and the solid blue line is the tooth profile of the rigid wheel after enveloping. After data-fitting, the rigid tooth profile curve of the disposable harmonic drive in this paper can be expressed as the fitted polynomial of Equation (9).
Analytical Model to Compute the Meshing Stiffness of the Disposable Harmonic Drive under Full Load
The flexible wheel in the harmonic drive is a thin-walled component whose rim thickness is similar to the tooth height. According to Refs. [16,24], the thin rim has an effect on the meshing stiffness. In addition, the simultaneous meshing of multiple teeth during disposable harmonic drive transmission also affects the stiffness. Thus, prevalent methods cannot accurately calculate the stiffness of the flexible wheel. Because the rim of the rigid wheel is much thicker than its teeth, the stiffness of the rigid wheel can be calculated by the potential energy method for a cylindrical gear.
Stiffness of the Flexible Wheel Tooth
The single-tooth stiffness of a gear transmission can be expressed through the elastic deformation of a single gear tooth in the meshing process. For the disposable harmonic gear, the elastic deformation mainly includes tooth root bending deformation, shear deformation, and tooth surface contact deformation. The general expression for the single-tooth stiffness is k = F/δ (Equation (10)), where F is the transmission force acting on the tooth and δ denotes the comprehensive displacement along the direction of the force. The equivalent model of the flexible wheel is shown in Figure 8; in this figure, AM and BN are the curves of the involute profile. The single-tooth model of the involute profile is shown in Figure 9. Along the line of action, F can be decomposed into an axial component F_a1 and a radial component F_b1. Compression energy is generated under the action of F_a1, shear energy is generated under the action of F_b1, and bending energy is generated by the combination of F_b1 and the additional bending moment M_x1, as given by Equations (12)-(14). According to beam theory and Castigliano's theorem, the energy stored in a single flexible tooth can be expressed by Equation (15), where E and G represent Young's modulus and the shear modulus, respectively; h_1 denotes the tooth height; h_δ describes the rim thickness of the flexible wheel; A_1x and I_1x are the area and the area moment of inertia of the section at distance x from the bottom of the rim, respectively; and A_1p and I_1p describe the area and the area moment of inertia of the rim section of the flexible wheel, respectively (see Figure 8).
To simplify the calculation, A_1x, I_1x, A_1p, and I_1p are expressed by Equations (16)-(19), where S_x denotes half of the thickness of the flexible tooth at distance x from the bottom of the rim, S_F is half of the tooth thickness at the action position K, P is the pitch of a flexible tooth, and L represents its width. Substituting Equations (12)-(14) and (16)-(19) into Equation (15), the comprehensive displacement of a single flexible tooth is obtained. According to the characteristics of the involute curve, the angular position variable θ is introduced, and S_x and S_F can then be expressed in terms of θ and the half tooth angle θ_b on the base circle, where α_0 denotes the pressure angle and Z_1 is the tooth number of the flexible wheel. By substituting Equations (19) and (20) into Equation (10), the stiffness of a single flexible tooth of the involute profile is obtained, where C_i1, C_i2, and C_i3 denote the relevant parameters of the tooth.
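Numerically, the energy method reduces to compliance integrals along the tooth height. A minimal sketch for a variable-section cantilever tooth is given below; the section functions A(x) and I(x) stand in for Equations (16)-(19), the shear factor 1.2 for a rectangular section is an assumption, and the result corresponds to k = F/δ for a unit tip force.

import numpy as np

def _integrate(y, x):
    # Trapezoidal rule, written out for compatibility across NumPy versions.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def single_tooth_stiffness(h, A, I, E, G, alpha, n=500):
    # Stiffness k = F/delta of a variable-section cantilever tooth loaded at
    # the tip by a unit force inclined at angle alpha to the tooth axis;
    # bending, shear, and axial compliances are integrated over the height
    # (Castigliano's theorem). A(x) and I(x) give section area and inertia.
    x = np.linspace(1e-6, h, n)
    F_b, F_a = np.cos(alpha), np.sin(alpha)          # transverse / axial parts
    c_bend = _integrate((F_b * (h - x)) ** 2 / (E * I(x)), x)
    c_shear = _integrate(1.2 * F_b ** 2 / (G * A(x)), x)  # 1.2: rectangular section
    c_axial = _integrate(F_a ** 2 / (E * A(x)), x)
    return 1.0 / (c_bend + c_shear + c_axial)

# Placeholder prismatic section as a quick check (steel, SI units).
k = single_tooth_stiffness(h=2.0e-3, A=lambda x: 1.0e-5 + 0.0 * x,
                           I=lambda x: 1.0e-12 + 0.0 * x,
                           E=206e9, G=79e9, alpha=np.radians(20.0))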
Stiffness of the Rigid Wheel Tooth
The single-tooth model of the rigid wheel is shown in Figure 10. Different from the involute tooth profile of the flexible wheel, the tooth profile of the rigid wheel is obtained by the envelope method (see Equation (9)). By applying beam theory, the bending, axial compressive, and shear energies stored in a rigid tooth can be obtained, where A_2x and I_2x are the area and the area moment of inertia of the section at distance x from the top of the rigid tooth, respectively, and h_x denotes half of the thickness of the rigid tooth at distance x from the tooth top. From these, the bending stiffness k_b2, axial compressive stiffness k_a2, and shear stiffness k_s2 of the rigid tooth are obtained, with the orthogonal components of the action force F and the equivalent bending moment expressed accordingly, where h_F denotes half of the thickness of the rigid tooth at distance l from the tooth top. According to the fitted curve of the rigid wheel obtained from Equation (9) in Section 2.2, h_x and h_F follow directly. The fillet foundation displacement in the direction of the tooth load can be obtained following Sainsot et al. [8], where U_f and S_f are as shown in Figure 11, and L*, M*, P*, and Q* are constants that differ slightly depending on the assumptions shown in Table 1. The stiffness considering the gear fillet foundation deflection can then be expressed accordingly.
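For reference, the fillet-foundation compliance of Sainsot et al. [8] has a closed form, and a direct transcription is sketched below. The constants L*, M*, P*, and Q* are those of Table 1, and the symbol conventions (for example, the load angle α_m) follow the cited paper rather than anything shown explicitly here; treat this as a sketch, not the paper's exact implementation.

import numpy as np

def foundation_stiffness(E, L, u_f, s_f, alpha_m, Lc, Mc, Pc, Qc):
    # Fillet-foundation stiffness after Sainsot et al. [8]:
    # compliance = cos(alpha_m)^2 / (E*L) * (Lc*(u_f/s_f)^2 + Mc*(u_f/s_f)
    #              + Pc*(1 + Qc*tan(alpha_m)^2)); stiffness is the inverse.
    r = u_f / s_f
    compliance = (np.cos(alpha_m) ** 2 / (E * L)
                  * (Lc * r ** 2 + Mc * r
                     + Pc * (1.0 + Qc * np.tan(alpha_m) ** 2)))
    return 1.0 / compliance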
The single-tooth stiffness of the rigid wheel can then be obtained. The parameters in Table 2 were used to model the harmonic gears; among them are a tooth width L of 10 mm for both gears, a pressure angle α_0 of 20°, and a transmission ratio of 100. The equivalent meshing stiffness of a tooth pair in transmission is the series combination of the single-tooth stiffnesses of the two gears and the Hertz contact stiffness k_h, which is given by Yang et al. [3] as k_h = πEL/(4(1 − ν²)), where ν describes the Poisson's ratio of the material of the rigid wheel. The above procedure can be repeated when multiple pairs of teeth are in contact: the comprehensive stiffness is the sum over the n tooth pairs in contact in the meshing region.
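Putting the pieces together, a pair's meshing stiffness is the series combination of the two tooth stiffnesses and the Hertz contact stiffness, while simultaneously engaged pairs add in parallel. A minimal sketch follows; the numeric inputs are placeholders, not values from Table 2.

import math

def hertz_contact_stiffness(E, L, nu):
    # Line-contact Hertz stiffness after Yang et al. [3]: pi*E*L/(4*(1-nu^2)).
    return math.pi * E * L / (4.0 * (1.0 - nu ** 2))

def pair_stiffness(k_flex, k_rigid, k_h):
    # Series combination of flexible tooth, rigid tooth, and tooth contact.
    return 1.0 / (1.0 / k_flex + 1.0 / k_rigid + 1.0 / k_h)

def comprehensive_stiffness(pairs):
    # Parallel superposition over the n simultaneously meshing pairs (Eq. (48)).
    return sum(pairs)

k_h = hertz_contact_stiffness(E=206e9, L=0.010, nu=0.3)   # steel, 10 mm width
k_pair = pair_stiffness(k_flex=2.0e8, k_rigid=5.0e8, k_h=k_h)
k_total = comprehensive_stiffness([k_pair] * 20)          # e.g., 20 pairs engaged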
The single-tooth stiffness of the disposable harmonic drive with the involute profile, as obtained by the improved energy method, is shown in Figure 12. The stiffness obtained by the improved energy method proposed in this article was compared with the stiffness calculated by the analytical method of Ref. [10], which does not consider the thin rim, as shown in Figure 13. Compared with conventional spur gears, the extremely thin rim structure of disposable harmonic gears has a great influence on their stiffness and must be considered. In addition, the proportion of the teeth involved in the meshing of the disposable HD can approach 30%. Therefore, in order to obtain the comprehensive stiffness of the disposable HD, the influence of the remaining teeth meshing simultaneously within a meshing cycle should also be considered.
Stiffness of Multi-Tooth Meshing
Under the action of the elliptical cam wave generator, the rim of the flexible wheel is stretched from a circle into an ellipse. The flexible wheel is therefore divided into a contact area and a non-contact area, with γ representing the range of the contact area (see Figure 14a). The micro-unit of the flexible rim at position φ in the contact area is acted on by the radial force q_r generated by the wave generator; the schematic diagram of the internal force calculation is shown in Figure 14b. When the disposable harmonic gear is loaded, the transmission torque T acting on the flexible wheel can be expressed in terms of q_t, the circumferentially distributed load per unit width of the flexible wheel rim, and d_1, the diameter of the flexible wheel index circle. According to the equilibrium equation, q_r = q_tmax · q*_r, where q*_r indicates a dimensionless coefficient that can be expressed as q*_r = 0.375[1 − sin(φ/2)]. The tension of the flexible rim in the contact area and the corresponding tensile stress then follow, and at the i-th pair of meshing teeth the circumferential displacement of the flexible rim caused by tension can be expressed accordingly (Equation (54)). According to the relationship between the bending moment and the curvature variable, the bending moment of the rim unit at position φ of the contact area can be written in terms of EI_z, the circumferential bending stiffness of the flexible rim, and the bending equation of the flexible rim in the contact area follows. Substituting Equation (55) into Equation (56) and applying the boundary conditions, the parameters A and B are determined. According to the non-elongation condition of the neutral layer, and combining Equation (54) with Equation (60), the additional tangential displacement at the i-th pair of meshing teeth of the flexible wheel in the contact area caused by multi-tooth meshing under load is obtained (see Figure 15).
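The radial load exerted by the wave generator over the contact arc can be tabulated directly from the dimensionless coefficient quoted above; a short sketch follows, with the peak circumferential load q_tmax and the contact range taken as placeholders.

import numpy as np

def radial_load(phi, q_tmax):
    # Radial load per unit width at angular position phi in the contact area,
    # using q_r = q_tmax * q_r* with q_r* = 0.375 * (1 - sin(phi/2)), as
    # quoted in the text.
    return q_tmax * 0.375 * (1.0 - np.sin(phi / 2.0))

phi = np.linspace(0.0, np.pi / 3.0, 61)   # placeholder contact range gamma
q = radial_load(phi, q_tmax=1.0e4)        # N/m, placeholder magnitude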
After considering this additional displacement, the modified meshing stiffness of the disposable harmonic gear is shown in Figure 16. As can be seen from Figure 16, the two curves were similar in amplitude. However, the modified meshing stiffness curve was not symmetrical, and the tooth position with the largest stiffness in the meshing region was on the left side of the center line. This was due to the additional deformation of the flexible wheel teeth caused by the multi-tooth meshing of the disposable harmonic gear: with the continuous meshing of subsequent gear teeth, the deformation of the flexible teeth in the meshing region decreases slowly and then increases gradually.
Meshing Stiffness Using Finite Element Model
The harmonic gear pair considered here contained two types of gears: a flexible wheel and a rigid wheel. In addition, the wave generator that causes the periodic deformation of the flexible wheel was also included in the disposable HD. The harmonic gears were modeled in 3D simulation software, and the corresponding performance parameters of the three parts are shown in Table 3. The simplified finite element model of the disposable HD is shown in Figure 17. The finite element analysis included two stages: assembly and loading. In order to apply boundary conditions and loads, reference points were set up at the center positions of the three components, and coupling constraints were established between the corresponding reference points and the internal surface of the flexible wheel, the external surface of the rigid wheel, and the wave generator. The wave generator was treated as completely rigid during the simulation. The FEM contained two types of contact: contact between the internal surface of the flexible wheel and the external surface of the wave generator, and contact between the tooth surfaces of the two gears. The internal surface and the tooth surface of the flexible wheel were set as slave surfaces, and the external surface of the wave generator and the rigid tooth surface were set as master surfaces. In the assembly step, the flexible wheel was fixed, and the other two components were moved at a uniform translation speed to the position matched with the flexible wheel. In the loading step, the external surface of the rigid wheel was fixed, a constant rotation speed was applied to the other two components, and a full load of 80 N·m was applied to the flexible wheel.
The FEM after the assembly of the disposable HD is shown in Figure 18. Figure 18a shows the position of the three components after assembly, and Figure 18b shows the magnification of several meshing tooth pairs in the contact area. The surfaces of the rigid and flexible teeth are defined as the master surface and slave surfaces, respectively. The equivalent stress and deformation of the flexible wheel are shown in Figure 19.
It can be seen that the maximum stress and deformation of the flexible wheel under no load after assembly occur at the ends of the long and short axes of the wave generator. The equivalent stress and deformation of the two gears under full load are shown in Figures 20 and 21, respectively. According to Figure 20, the maximum stress of the disposable flexible wheel did not reach the yield strength; therefore, the disposable HD can meet the requirements of short-term operation under full load. The maximum stresses of both gears appeared in the middle of the contact area. The deformation of the flexible wheel in the contact area slightly decreased and then gradually increased, which is consistent with the trend of the theoretical results in Figure 15. The maximum deformation of the rigid wheel, like the stress, occurred in the middle position. Additionally, the maximum stress and deformation of the flexible wheel were higher than those of the rigid wheel. The load and comprehensive displacement curves of each contact tooth pair were then extracted, as shown in Figure 22. As shown in Figure 22, during one engaging-in and engaging-out cycle of a tooth pair in the meshing region of the disposable HD, the load on the gear teeth gradually increased to a peak value and then decreased, with the load peak carried by the middle gear teeth of the meshing region. As subsequent teeth engaged, the previously meshing teeth did not withdraw; the superposition of the elastic displacements of the gear teeth thus led to a gradual increase in the comprehensive displacement. The results of the modified analytical method described in Section 3 and the FEM were then compared (see Figure 23).
According to Figure 23, at the beginning of meshing, the stiffness of the gear teeth increased rapidly and then decreased gradually. Considering the influence of multi-tooth meshing on the meshing stiffness of the disposable harmonic drive, the peak value and trend of the stiffness curves obtained by the modified analytical method and the FEM were very close, which confirms the feasibility of the modified analytical method in calculating the meshing stiffness of the disposable harmonic gear under full load.
According to Equation (48), the comprehensive meshing stiffness of the disposable harmonic drive was approximately a straight line (see Figure 24). The curve in the ordinate range 0-5 in Figure 24 is the superposition of the meshing stiffness curves in Figure 23. The simultaneous contact of multiple pairs of gear teeth during transmission ensures the stability of the disposable harmonic drive. In addition, the comprehensive meshing stiffness of the disposable harmonic drive was higher than that of a conventional gear, which also makes short-term full-load or overload transmission possible.
To simulate the failure mode of the disposable harmonic gear in the case of overload, a torque of 100 N·m was applied, and the equivalent stress and plastic deformation of the disposable flexible wheel were obtained according to the material properties in Figure 3. Figure 25a shows the stress of the flexible wheel under overload: the root stresses of several flexible teeth were higher than the allowable value of the material, and the inner wall of the flexible wheel was overstressed. Figure 25b shows that plastic deformation occurs at the root of the flexible teeth under overload. The load and comprehensive deformation of the meshing tooth pairs under overload were then extracted. It can be seen from Figure 26a that the stress of the eleventh pair of meshing teeth exceeded the limit value of the material, and Figure 26b shows that the subsequent tooth pairs exhibited obvious distortion. According to Figure 26, the stiffness curve before damage to the flexible wheel is shown in Figure 27.
Conclusions
To study the meshing stiffness of a disposable harmonic gear under full load, a modified improved energy method and a FEM were proposed in this study. Compared with a conventional HD, a disposable HD differs significantly in its application environment and flexible wheel structure. The stiffness of the flexible gear was calculated using the improved energy method, considering the influence of multi-tooth meshing on the deformation. The stiffness of the rigid wheel was decomposed into bending stiffness, shear stiffness, compression stiffness, and gear foundation stiffness. A comprehensive stiffness model of the multi-tooth meshing of the disposable HD was then established. Finally, the FEM was built to verify the accuracy of the analytical model and to analyze the failure mode of the disposable HD under overload. The conclusions of this work can be summarized as follows: (1) Different from other gear transmissions, the calculation for disposable harmonic gears needs to be conducted separately, distinguishing the structural characteristics of the two gears. The tooth model that considers the thin rim of the flexible wheel can accurately describe the amplitude of the meshing stiffness of the disposable harmonic gear under full load; (2) The modified improved energy method considers the influence of multi-tooth meshing on the stiffness of the flexible gear and can accurately reflect the comprehensive stiffness of the disposable harmonic gear in the meshing region under full load; (3) The comprehensive stiffness of the disposable harmonic drive is higher than that of a conventional gear drive, and the disposable harmonic gear can operate under full load for a short time.
| 2022-04-12T15:05:34.616Z | 2022-04-10T00:00:00.000 | {
"year": 2022,
"sha1": "1a948665fc3dd737f725943271efa55017c73391",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1702/10/4/271/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a01b14868c088a3bd7c8fdce771b210ac8836bde",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
115806764 | pes2o/s2orc | v3-fos-license | Measurement of the e+e- -->b anti-b cross section between sqrt(S)=10.54 and 11.20 GeV
We report e+e- -->b anti-b cross section measurements by the BABAR experiment performed during an energy scan in the range of 10.54 to 11.20 GeV at the PEP-II e+e- collider. A total relative error of about 5% is reached in more than three hundred center-of-mass energy steps, separated by about 5 MeV. These measurements can be used to derive precise information on the parameters of the Y(10860) and Y(11020) resonances. In particular we show that their widths may be smaller than previously measured.
PACS numbers: 13.25.Hw, 14.40.Nd
Recent discoveries of non-baryonic charmonium states that do not behave as two-quark states [1] call for a search for other resonances belonging to this possible new spectroscopy. Given the charmonium content of these new states, one could infer the presence of similar resonances containing b quark pairs. The observed J^PC = 1^-- exotic states (Y(4260), Y(4350), and Y(4660) [2]), scaled up by the mass difference between the J/ψ and the Υ(1S) (ΔM ∼ 6360 MeV/c²), would be exotic bottomonium states with masses above the Υ(4S) and below 11.2 GeV. Moreover, the Υ(10860) and the Υ(11020) states, which are candidate Υ(5S) and Υ(6S) respectively, were observed in the same region [3,4].
Between March 28 and April 7, 2008, the PEP-II e+e− collider [5] delivered colliding beams at a center-of-mass energy (√s) in the range of 10.54 to 11.20 GeV. First, an energy scan over the whole range in 5 MeV steps, collecting approximately 25 pb⁻¹ per step for a total of about 3.3 fb⁻¹, was performed. It was then followed by a 600 pb⁻¹ scan in the range of √s = 10.96 to 11.10 GeV, in 8 steps with non-regular energy spacing, performed in order to investigate the Υ(6S) region. This data set outclasses the previous scans [3,4] by a factor > 30 in luminosity and ∼4 in the size of the energy steps. Across the scan, the energy of the positron beam was kept fixed at 3.12 GeV, while the electron beam energy was varied accordingly to set the required √s. This produced a variation of the boost of the center-of-mass frame during the scan.
The particles produced in the collisions are detected by the BABAR detector, described elsewhere [6]. Charged-particle tracking is provided by a five-layer silicon vertex tracker (SVT) and a 40-layer drift chamber (DCH). In addition to providing precise position information for tracking, the SVT and DCH also measure the specific ionization (dE/dx), which is used for particle identification of low-momentum charged particles. At higher momenta (p > 0.7 GeV/c) pions and kaons are identified by Cherenkov radiation detected in a ring-imaging device (DIRC). The position and energy of neutral clusters (photons) are measured with an electromagnetic calorimeter (EMC) consisting of 6580 thallium-doped CsI crystals. These systems are mounted inside a 1.5-T solenoidal super-conducting magnet. Muon identification is provided by the magnetic flux return system instrumented with Resistive Plate Chambers and Limited Streamer Tubes. The full detector is simulated, for background and efficiency studies, with a Monte Carlo program (MC) based on GEANT4 [7].
To measure R_b, we count the number of events passing a selection that enriches the sample in events containing B mesons (N_h) and those passing an independent di-muon selection (N_µ), at each energy point and at a reference energy below the open-beauty production threshold. Indicating with a prime the quantities at the reference energy, we write the expressions for N_h and N'_h (Eqs. 1 and 2), where ε_B is the efficiency for open-b production to satisfy the hadronic selection, X represents the different background components described later, σ_i represents the cross-section for process i, ε_i the corresponding efficiency, and L is the integrated luminosity collected at a given value of √s. Measurements of N_µ and N'_µ are needed in order to normalize the hadronic rates to the collected luminosities. As reference we choose the sample collected at √s = 10.54 GeV, about 40 MeV below the Υ(4S) mass, taken during 2006-2007. Special mention is made of the ISR sample, the production of Υ(nS) (n = 1, 2, 3) mesons via initial-state radiation: albeit part of the signal, this process can occur at the reference energy and has an efficiency and an energy dependence of the cross-section different from those of open-beauty production.
Solving the system of equations, one obtains R_b (Eq. 5), written in terms of the double ratio κ_σε defined from the cross-sections and efficiencies, the ratios R_i = σ_i/σ⁰_µµ for each process, and ξ_µ = σ_µµ/σ⁰_µµ, assumed independent of √s. It should be noted that these equations assume that the background scales with the integrated luminosity, i.e., that the machine background is negligible, and that the di-muon selection leaves a negligible level of background.
We select the b-enriched sample by requiring at least three tracks in the event, a total visible energy in the event greater than 4.5 GeV, and a vertex reconstructed from the observed charged tracks within 5 mm of the beam crossing point in the plane transverse to the beam axis and 6 cm along the beam axis. These quantities are computed using exclusively tracks in the fiducial volume of the DCH (i.e., forming an angle with the beam axis 0.41 < θ < 2.54 rad). A further rejection of the main backgrounds, e+e− → qq (q = u, d, s, c) events ("continuum" events) and e+e− → ℓ+ℓ− (ℓ = e, µ, τ) events, is obtained by means of a cut on the ratio of the second and zeroth Fox-Wolfram moments [8], R_2, calculated using only the charged tracks. After optimization of the statistical sensitivity, we require R_2 < 0.2. Events that pass this selection at the reference energy comprise 91% continuum, 2% two-photon (e+e− → e+e−γ*γ* → e+e−X_h), and 7% ISR (e+e− → Υ(nS)γ_ISR) events.
To select di-muon events, we require that two tracks have an invariant mass greater than 7.5 GeV/c²; their angle with the beam axis in the center-of-mass frame, θ_cms, must satisfy cos θ_cms < 0.7485, and the two muons must be collinear to within 10°. To exploit the fact that muons are minimum-ionizing particles, we require that at least one of them leaves a signal in the EMC, and that neither deposits more than 1 GeV.
In the following we describe the method used to derive the inputs to Eq. 5 and the corresponding errors, separating correlated and uncorrelated errors. The covariance matrix for the measurements of R_b at different energies is V_ij = [σ²_stat(s_i) + σ²_unc(s_i)]δ_ij + σ_corr(s_i)σ_corr(s_j), where σ_stat(s_i), σ_corr(s_i), and σ_unc(s_i) are the statistical, correlated, and uncorrelated systematic errors, respectively, and δ_ij is the Kronecker delta.
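The quoted covariance structure separates the point-by-point errors from the fully correlated one and is straightforward to build explicitly; a minimal sketch with hypothetical per-point errors:

import numpy as np

def rb_covariance(stat, unc, corr):
    # V_ij = (stat_i^2 + unc_i^2) * delta_ij + corr_i * corr_j
    stat, unc, corr = (np.asarray(a, dtype=float) for a in (stat, unc, corr))
    return np.diag(stat ** 2 + unc ** 2) + np.outer(corr, corr)

# Three hypothetical scan points (errors in units of R_b).
V = rb_covariance(stat=[0.02, 0.02, 0.03],
                  unc=[0.01, 0.01, 0.01],
                  corr=[0.015, 0.016, 0.016])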
The efficiency for the di-muon selection, ε_µ, is extracted from a sample of fully simulated MC events generated with KK2f [9] at several values of √s. Due to the change in boost, this efficiency is found to change by 1.5% over the whole range, and the MC-statistics error we assign to the corresponding correction is 0.2%. The correlated uncertainty on the absolute scale of the efficiency is estimated to be 1% and comes primarily from uncertainties in the simulation of the trigger, of the quantities used in the selection, and of the tracking efficiency. We also account for differences in the trigger configurations between the scan data and the reference data taken during the year 2007 and estimate the efficiency on the reference data to be lower by (0.5 ± 0.2)%. The same generator is consistently used to extract ξ_µ = 1.48 ± 0.02, where this correlated error is due to the uncertainty on the cross-section.
The efficiency for e+e− → bb events is estimated using EvtGen [10] as the generator, separately for each possible two-body final state including B, B_s, and B*_s mesons, and at different values of √s. Because the relative composition in terms of final states at each energy is not known, we consider the largest and the smallest efficiencies among the allowed final states and take their mean value as the central value and half their difference as the uncorrelated error. The correlated error on the absolute scale of ε_B is estimated by varying the selection criteria and is found to amount to 1.3%.
The calculation of the double ratio κ_σε requires the dependence on √s of ε_µ, which has already been discussed, and the cross-sections and efficiencies for the ISR and background processes.
The ISR cross-section is computed to second order according to Ref. [11]. The corresponding efficiency (ε_ISR) is estimated with MC simulation to be 41% on average. The relative efficiency change across the scan, estimated to be ∼5%, is used as a correlated uncertainty, and it propagates to an error on R_b of at most 0.7%.
The cross-section for two-photon events scales as the square of the logarithm of s, and the corresponding efficiency is considered to be flat. The product of the cross-section and the efficiency (σ_γγ ε_γγ) before the R_2 requirement is fitted from the distribution of the direction of the missing momentum and then multiplied by the efficiency of the R_2 cut. We attribute a 50% uncertainty to this estimate, leading to a relative correlated error of at most 0.2%. Finally, the product of the continuum cross-section and efficiency is computed by subtracting the ISR and two-photon components from N'_h (see Eq. 2). The continuum contribution to R (R_cont) is assumed to be constant with √s, while the corresponding efficiency (ε_cont) was estimated on a sample of MC events generated with JETSET [12]. No correction to account for the fact that the reference data were taken in a different data-taking period was found necessary. The relative change of ε_cont over the whole scan range is estimated to be 3%, and a 0.2% systematic error due to MC statistics is assigned to it. We also find that the distribution of R_2 is not perfectly reproduced by the MC. We therefore estimate the scaling of ε_cont separately with and without the R_2 < 0.2 requirement and take the difference between the results as a correlated systematic error. Its contribution depends on the value of R_b and is at most 2%.
To measure √s at each point, we fit the distribution of the invariant mass of the two muons in the selected di-muon sample with a function made of a Gaussian with an exponential tail on the side below the peak mass. We then use the mean of the Gaussian as an estimator of √s and determine a bias of (20.9 ± 1.5) MeV for this quantity by comparing the Υ(3S) mass measured on the data taken during the ∼100 pb⁻¹ scan performed by PEP-II at the beginning of the last data-taking period with the world-average value [13]. We correct for this bias, which comes from the (strongly) nonlinear impact of the momentum resolution on the invariant mass, and verify on simulated events that it does not depend on √s. The resulting measurements of R_b as a function of √s are shown in Fig. 1, where the error bars represent the sum of the statistical and uncorrelated systematic errors and dotted lines show the different B meson production thresholds. The relative correlated systematic errors on R_b are summarized in Table I. The numerical results for each energy point, together with the estimated ISR cross-section, can be found in Ref. [14]. It is important to stress that radiative corrections have not been applied, since they would require a priori knowledge of the resonant region. The measured R_b therefore includes all final- or initial-state radiation processes.
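The line shape used for the √s calibration above, a Gaussian core with an exponential tail below the peak, can be written so that the tail matches the core in both value and slope at the transition point; the sketch below is one such parameterization and is an assumption about the exact functional form used.

import numpy as np

def peak_shape(m, mu, sigma, k):
    # Gaussian core for m > mu - k*sigma, with an exponential tail below,
    # matched in value and slope at the transition (schematic only).
    m = np.asarray(m, dtype=float)
    t = (m - mu) / sigma
    core = np.exp(-0.5 * t ** 2)
    tail = np.exp(0.5 * k ** 2 + k * t)   # equals core and its slope at t = -k
    return np.where(t > -k, core, tail)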
The large statistics and the small energy steps of this scan make it possible to observe clear structures corresponding to the opening of new thresholds: dips corresponding to the B^(∗)B̄^∗ and B_s B̄_s^∗ openings and a plateau close to the B_s^∗ B̄_s^∗ one. It is also evident that the Υ(10860) and Υ(11020) behave differently above and below the corresponding peaks. Finally, the plateau above the Υ(11020) is clearly visible.
We fit the following simple model to our data between 10.80 and 11.20 GeV: a flat component representing bb̄-continuum states not interfering with resonance decays, added incoherently to a second flat component interfering with two relativistic Breit-Wigner resonances. The results are summarized in Table II and Fig. 1. The number of states is, a priori, unknown, as are their energy dependencies. Therefore, a proper coupled-channel approach [15,16] including the effects of the various thresholds outlined earlier would be likely to modify the results obtained from our simple fit. As an illustration of the systematic uncertainties arising from the assumptions in our fit, a simple modification is to replace the flat non-resonant term by a threshold function at √s = 2m_B. This leads to a larger width (74 ± 4 MeV) and a lower mass (10869 ± 2 MeV) for the Υ(10860).
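For concreteness, a sketch of such a lineshape is given below. The exact amplitude parameterization, phase conventions, and normalization used in the analysis are not reproduced here, so the functional form is only an assumption consistent with the description above.

```python
import numpy as np

def rel_bw(s, m, gamma):
    # relativistic Breit-Wigner amplitude with a constant width (assumed form)
    return 1.0 / (s - m**2 + 1j * m * gamma)

def rb_model(sqrt_s, a_nr, a_c, a1, phi1, m1, g1, a2, phi2, m2, g2):
    # flat non-interfering term added incoherently to a flat term that
    # interferes with two resonances (e.g., Upsilon(10860), Upsilon(11020))
    s = sqrt_s**2
    coherent = (a_c
                + a1 * np.exp(1j * phi1) * rel_bw(s, m1, g1)
                + a2 * np.exp(1j * phi2) * rel_bw(s, m2, g2))
    return a_nr + np.abs(coherent)**2
```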
In summary, we have performed an accurate measurement of R_b in fine-grained center-of-mass energy steps and have shown that these measurements have the potential to yield information on the bottomonium spectrum and possible exotic extensions.
We are grateful for the excellent luminosity and machine conditions provided by our PEP-II colleagues, and for the substantial dedicated effort from the computing organizations that support BABAR. The collaborating institutions wish to thank SLAC for its support and kind hospitality. This work is supported by DOE | 2008-09-24T08:03:09.000Z | 2008-09-24T00:00:00.000 | {
"year": 2008,
"sha1": "f17223a4d5cf49a8f33325e18846b034bc367d52",
"oa_license": "CC0",
"oa_url": "http://diposit.ub.edu/dspace/bitstream/2445/138263/1/577768.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f17223a4d5cf49a8f33325e18846b034bc367d52",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
247019577 | pes2o/s2orc | v3-fos-license | Cooperative multi-population Harris Hawks optimization for many-objective optimization
This paper presents an efficient cooperative multi-population swarm intelligence algorithm based on the Harris Hawks optimization (HHO) algorithm, named CMPMO-HHO, to solve multi-/many-objective optimization problems. Specifically, this paper first proposes a novel cooperative multi-population framework with dual elite selection named CMPMO/des. With four effective strategies, namely the one-to-one correspondence between the optimization objectives and the subpopulations, the global archive for information exchange and cooperation among subpopulations, the logistic chaotic single-dimensional perturbation strategy, and the dual elite selection mechanism based on fast non-dominated sorting and the reference point-based approach, CMPMO/des achieves considerably high performance in solution convergence and diversity. Thereafter, in each subpopulation, HHO is used as the single-objective optimizer for its impressively high performance. Notably, however, the proposed CMPMO/des framework can work with any other single-objective optimizer without modification. We comprehensively evaluated the performance of CMPMO-HHO on 34 multi-objective and 19 many-objective benchmark problems and extensively compared it with 13 state-of-the-art multi-/many-objective optimization algorithms, three variants of CMPMO-HHO, and a CMPMO/des-based many-objective genetic algorithm named CMPMO-GA. The results show that by taking advantage of the CMPMO/des framework, CMPMO-HHO achieves promising performance in solving multi-/many-objective optimization problems.
Introduction
There are many practical multi-objective optimization problems (MOOPs) in engineering and scientific research [21,38,47]. A number of multi-objective evolutionary algorithms (MOEAs) and swarm intelligence optimization algorithms (MOSIOAs) have been proposed to solve MOOPs, as they can obtain a well-distributed and well-converged set of near Pareto-optimal solutions. Although some traditional MOEAs [15,57] and MOSIOAs [11,41,51] can effectively solve two- and three-objective problems, they show poor performance in solution diversity and convergence when solving many-objective optimization problems (MaOPs). The reason is that their dominance rules are not sufficient to select the elite individuals for the next generation, nor to satisfy the requirement of the population size required for MaOPs [24,31]. Some MOEAs and MOSIOAs proposed later, such as NSGA-III [14], MOMPA [8], MOP-GSO [17], etc., can obtain well-converged solutions; however, it is difficult to maintain solution diversity when using only one population to optimize all objectives.
It has been proved that, compared with a single population, algorithms with multiple populations can effectively improve solution diversity for both single-objective optimization problems [5,32,33] and MOOPs [13,31,42] by dividing the population into multiple co-evolutionary/cooperative subpopulations. Nevertheless, some multi-population algorithms, such as MPEA/SG [30] and CIEMO/D [40], change the number of subpopulations during evolution, which makes it a tough challenge to determine the appropriate number of subpopulations when solving MaOPs. By contrast, in the multi-population multi-objective (MPMO) framework proposed in [50], the number of subpopulations is always equal to the number of optimization objectives. Each subpopulation only optimizes one objective and contains an equal number of individuals. However, the MPMO framework used the density-based selection operator to maintain population diversity, ignoring that this operator becomes considerably expensive computationally as the number of objectives increases [14,25].
Motivated by the above, this paper presents a novel cooperative multi-population multi-objective swarm intelligence algorithm, known as CMPMO-HHO. Specifically, we first propose a novel cooperative multi-population multi-/many-objective framework with dual elite selection named CMPMO/des. CMPMO/des has the features of independently optimizing one objective in each subpopulation, cooperation among subpopulations through the global archive, dual elite selection, and the logistic chaotic single-dimensional perturbation (LCSDP). These promising features help CMPMO-HHO dramatically improve the convergence and diversity of the obtained near Pareto-optimal solutions.
Secondly, we leverage the novel Harris hawks optimization (HHO) algorithm [1] as the optimizer of the subpopulations. HHO is a state-of-the-art single-objective optimizer that has the advantages of fast convergence, easy implementation, and straightforward extensibility, and it has been applied to many practical applications. Although this work has chosen HHO as the optimizer, it should be noted that the proposed CMPMO/des framework can work with any other single-objective optimizer without modification.
We summarize the contributions of this paper as follows:
MOSIOAs and MOEAs with a single population
There have been many state-of-the-art MOSIOAs and MOEAs with a single population proposed to solve MOOPs. To name a few, NSGA-II [15] uses non-dominated sorting and a distance-based method to select elite solutions. SPEA2 [57] leverages an environmental selection strategy based on the nearest neighbors and achieves a good distribution on high-dimensional optimization problems. CMOPSO [46] is a multi-objective particle swarm optimization (PSO) algorithm with fixed inertia weight. However, these algorithms show poor performance on MaOPs. Therefore, some single-population algorithms were proposed to deal with MaOPs. In [14], NSGA-III was proposed, which adopts the reference point (RP) based method to determine elite solutions. C-MOEA/D [2] is a decomposition-based evolutionary algorithm which extends the ability of MOEA/D to deal with constraints using adaptive constraint processing. In [27], the authors proposed a new bandit-based adaptive operator selection method over MOEA/D, namely FRRMAB, to automatically select appropriate operators in an online manner. hpaEA [7] first defined the non-dominated solutions exhibiting evident tendencies toward the Pareto-optimal front as prominent solutions, using the hyperplane formed by their neighboring solutions to further distinguish among non-dominated solutions; then, a novel environmental selection strategy was proposed to balance convergence and diversity. In PREA [49], a strategy based on the parallel distance was introduced to select individuals in the promising region to ensure population diversity. In [10], the authors put forward an enhanced version of NSGA-III, known as ANSGA-III, which takes advantage of adaptive RPs. ar-MOEA [48] leveraged the preference angle and reference information-based dominance to solve MOOPs and MaOPs. RPD-NSGA-II [18] used a new decomposition-based dominance relation to deal with MaOPs and a new diversity factor based on the penalty-based boundary intersection method. In PICEA-g [43], the family of decision-maker preferences and the candidate solution population co-evolve to optimize the targets.
MOSIOAs and MOEAs with multiple populations
The aforementioned works involve only one population. Considering the benefits introduced by multiple populations in solving MOOPs and MaOPs, multi-population algorithms have received more and more attention. A novel multi-population evolutionary algorithm, known as MPMMOES, was proposed in [54] for solving multimodal MOOPs; the original population is divided into two groups of subpopulations of equal size, where one is designed to search for the optimal solutions in the objective space while the other focuses on obtaining high-quality optimal solutions in the decision space. The optimizer in [26] uses a dynamic population strategy. In [42], a random migration strategy in MOPSO was proposed to improve the diversity of the population. In [12], the Pareto envelope-based selection algorithm II (PESA-II) was proposed, which uses hyper-grids to keep well-distributed solutions. Manzoor et al. [34] presented a multi-objective self-adaptive multi-population based Jaya algorithm (PMO-SAMP-Jaya) to optimally schedule the energy consumption in a smart building. A grid-search-based multi-population particle swarm optimization algorithm (GSMPSO-MM) was presented in [29] to handle multimodal MOOPs.
To take advantage of multiple populations more effectively, other research has focused on co-evolutionary optimization among them. Said et al. [37] proposed an indicator-based version of their recently proposed Co-Evolutionary Migration-Based Algorithm (CEMBA), named IB-CEMBA, to solve combinatorial multi-objective bi-level optimization problems. In [20], the authors presented a new multi-population hybrid genetic algorithm (MPHGA) that combines the standard genetic algorithm with the alternative location and assignment algorithm. Wang et al. [44] proposed a dual-population based evolutionary algorithm, in which the two populations iteratively exchange the information obtained from elite solutions during the evolution to collaboratively search for the optimal solutions of the problem. Ben Mansour et al. [3] proposed a cooperative version of the multi-objective local search algorithm (IBMOLS) based on a quality indicator, called W-CMOLS.
The above-mentioned multi-population optimization algorithms were mainly designed for MOOPs. There have also been some works reported for MaOPs. Dai et al. [13] presented an improved evolutionary algorithm for solving multi-objective optimization problems, which uses improved K-dominance to rank the solutions in subpopulations to generate offspring. Similarly, Zheng et al. [55] proposed an evolutionary algorithm based on M2M population decomposition and reference distance. For each subpopulation, the projection distance to the direction vector is used to enhance the selection pressure of Pareto dominance.
Co-evolution or cooperation among subpopulations has also been widely used in multi-population MOSIOAs and MOEAs for MaOPs. A coevolutionary particle swarm optimization with the bottleneck objective learning (BOL) strategy was proposed in [31], where all populations are coevolutionary in a distributed manner. Naidu [35] proposed a hybrid cooperative multi-objective invasive weed optimization (IWO) based on the space transformation search (STS). Chen et al. [6] proposed a novel multi-objective ant colony system based on the multi-objective co-evolutionary multi-population framework, in which two ant colonies are used to deal with the two objectives respectively. A novel interval multi-population multi-objective optimization method called the interval cooperative multi-objective artificial bee colony algorithm (ICMOABC) based on interval credibility was proposed in [53], where interval credibility is selected as the interval dominance method. Rakshit et al. [36] proposed a new MaOP algorithm that solves MaOPs using an improved DE mutation strategy and optimizing objectives in parallel. Liu et al. [30] developed a multi-population evolutionary algorithm with a single-objective guide to tackle many-objective optimization problems. It exploits the merits of both multiple populations and single-objective optimization to balance diversity and convergence of the evolution process. Table 1 summarizes the aforementioned works.
Method
In this section, the details of the cooperative multi-population multi-objective Harris Hawks algorithm for many objectives are presented.
Double-evolved cooperative multi-population framework
To improve the diversity and convergence when solving MaOPs, the novel cooperative multi-population framework, named CMPMO/des, is proposed in this paper. The features of CMPMO/des are as follows (Fig. 1).
• Intra-subpopulation migration and optimization: In CMPMO/des, each objective is randomly assigned to one subpopulation and all individuals are equally allocated to each subpopulation. Each subpopulation performs migration and optimization towards its only objective independently. This one-to-one correspondence framework facilitates its extension to MaOPs.
Algorithm 1 Procedure of CMPMO/des
 ⋮
 5: Obtain E_i(t) by performing NDS and RP.
 6: Obtain S_i^hho(t) by performing one iteration of the HHO upon E_i(t).
 7: Obtain S^chaos(t) by performing LCSDP on S^hho(t).
 ⋮
12: Obtain A(t) by performing NDS and RP on Q(t).
13: end for
The procedure of CMPMO/des is as follows.
Step 1: Initialize the parent population P(0) with the size of M × N, where M is the number of optimization objectives and N is the size of each subpopulation. Create M subpopulations of size N, denoted as S_1, S_2, ..., S_M. Create an empty archive A(0) = ∅.
Step 2: In iteration t, obtain the elite set E_i(t) of each subpopulation S_i by performing NDS and RP (cf. line 5 of Algorithm 1).
Step 3: In iteration t, perform one iteration of the HHO algorithm (see "HHO optimizer") upon E_i(t). We then obtain a new subpopulation of size N, denoted as S_i^hho(t), and a merged population of size M × N, denoted as S^hho(t).
Step 4: In iteration t, perform LCSDP (see "Perturbation strategies") on S^hho(t) to obtain another population of size M × N, denoted as S^chaos(t).
Step 5: Select N elite individuals from Q(t) by performing NDS and RP, obtaining A(t) for the next iteration.
Step 6: If the termination condition is satisfied, CMPMO/des stops; otherwise, go to Step 2.
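A compact sketch of this loop is shown below. It assumes NumPy arrays and treats hho_step, lcsdp, and nds_rp_select as placeholders for the HHO iteration, the chaotic perturbation, and the NDS+RP elite selection; the exact composition of Q(t) used for the archive update is an assumption.

```python
import numpy as np

def cmpmo_des(objectives, N, dim, lb, ub, T, hho_step, lcsdp, nds_rp_select):
    # objectives: list of M single-objective fitness functions
    M = len(objectives)                       # one subpopulation per objective
    subpops = [np.random.uniform(lb, ub, (N, dim)) for _ in range(M)]
    archive = np.empty((0, dim))              # global archive A(0) = empty set
    for t in range(1, T + 1):
        new_subpops = []
        for i in range(M):
            # Step 2: elite selection (NDS + RP) on subpopulation and archive
            pool = np.vstack([subpops[i], archive])
            elites = nds_rp_select(pool, N, objectives)
            # Step 3: one HHO iteration towards objective i only
            new_subpops.append(hho_step(elites, objectives[i], lb, ub, t, T))
        s_hho = np.vstack(new_subpops)        # merged population, size M*N
        # Step 4: logistic chaotic single-dimensional perturbation
        s_chaos = lcsdp(s_hho, lb, ub)
        # Step 5: dual elite selection to update the archive A(t);
        # the union below is an assumed composition of Q(t)
        archive = nds_rp_select(np.vstack([s_hho, s_chaos]), N, objectives)
        subpops = new_subpops
    return archive
```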
HHO optimizer
Harris Hawks optimization (HHO) is a novel heuristic algorithm which solves optimization problems by simulating the rabbit-hunting behavior of Harris hawks [1]. The process of HHO involves two phases, namely the exploration phase and the exploitation phase. In HHO, a Harris hawk represents a candidate solution, and the escaping energy of the rabbit is modeled as E = 2E_0(1 − t/T), where E_0 is the initial energy generated randomly in (−1, 1), t is the current iteration, and T is the maximum number of iterations. HHO adopts different strategies according to E. Specifically, HHO is in the exploration phase if |E| ≥ 1; otherwise, HHO is in the exploitation phase.
Exploration phase (|E| ≥ 1). In iteration t, each Harris hawk updates its position with two equal-opportunity strategies.
X(t+1) = x_rd(t) − r_1 |x_rd(t) − 2 r_2 X(t)|,  if q ≥ 0.5,
X(t+1) = (x_rb(t) − x_a(t)) − r_3 (lb + r_4 (ub − lb)),  if q < 0.5,

where X(t) is the vector of the locations of all hawks in iteration t, x_rd(t) is the location of a randomly selected Harris hawk, x_rb(t) represents the location of the rabbit, r_1, r_2, r_3, r_4, and q are random numbers in (0, 1), which are updated in every iteration, ub and lb are the upper and lower bounds of the variables, respectively, and x_a(t) is the average of all hawks' position vectors for the current population, which is calculated as

x_a(t) = (1/N) Σ_{i=1}^{N} X_i(t),

where X_i(t) is the location of the i-th hawk and N represents the population size.
Exploitation phase (|E| < 1). In the exploitation phase, all hawks update their positions with different strategies based on the random number r_5 in (0, 1) and the rabbit's escaping energy E.

If r_5 ≥ 0.5 and |E| ≥ 0.5, Harris hawks take the action of soft besiege:

X(t+1) = ΔX(t) − E |J x_rb(t) − X(t)|,    (4)

where ΔX(t) = x_rb(t) − X(t) is the difference between the hawks' positions and that of the rabbit in the t-th iteration, and J = 2(1 − r_5) is the misleading jump strength in [0, 2] of the escaping prey.

If r_5 ≥ 0.5 and |E| < 0.5, Harris hawks take the action of hard besiege:

X(t+1) = x_rb(t) − E |ΔX(t)|.    (5)

If r_5 < 0.5 and |E| ≥ 0.5, all hawks adopt soft besiege with progressive rapid dives to update their locations:

X(t+1) = Y if F(Y) < F(X(t)), or Z if F(Z) < F(X(t)),    (6)

where Y and Z are respectively calculated as

Y = x_rb(t) − E |J x_rb(t) − X(t)|,  Z = Y + S × LF(D),

where D is the dimension of the decision variables, S is a D-dimensional random vector, and LF is the Lévy flight function based on the following rule:

LF(x) = 0.01 u σ / |v|^{1/β},  σ = [Γ(1+β) sin(πβ/2) / (Γ((1+β)/2) β 2^{(β−1)/2})]^{1/β},

where u and v are random numbers in (0, 1) and β is the default constant 1.5.

If r_5 < 0.5 and |E| < 0.5, Harris hawks perform hard besiege with progressive rapid dives to update their locations:

X(t+1) = Y if F(Y) < F(X(t)), or Z if F(Z) < F(X(t)),    (11)

where Y = x_rb(t) − E |J x_rb(t) − x_a(t)| and Z = Y + S × LF(D). Similar to soft besiege with progressive rapid dives, Y and Z are retained only when better fitness values are obtained.

Algorithm 2 Procedure of one iteration in HHO
 ⋮
 6: Update position by (4).
 7: else if r ≥ 0.5 and |E| < 0.5 then
 8:   Update position by (5).
 9: else if r < 0.5 and |E| ≥ 0.5 then
10:   Update position by (6).
11: else if r < 0.5 and |E| < 0.5 then
12:   Update position by (11).
13: end if
The detailed procedure of one iteration in HHO is summarized in Algorithm 2.
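The sketch below, written against the equations above, shows what one such iteration might look like for a single objective. The clipping to the bounds and the per-hawk re-draws of the random numbers are implementation assumptions.

```python
import numpy as np
from math import gamma, pi, sin

def levy(dim, beta=1.5):
    # Levy flight step; u and v drawn in (0,1) as stated in the text
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = np.random.rand(dim), np.random.rand(dim)
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)

def hho_iteration(X, fitness, lb, ub, t, T):
    # one iteration of HHO over population X (N x D), minimizing `fitness`
    N, D = X.shape
    f = np.array([fitness(x) for x in X])
    rabbit = X[np.argmin(f)]                  # best solution found so far
    x_avg = X.mean(axis=0)                    # average hawk position x_a(t)
    Xn = X.copy()
    for i in range(N):
        E = 2 * np.random.uniform(-1, 1) * (1 - t / T)   # escaping energy
        r5 = np.random.rand()
        J = 2 * (1 - r5)                      # misleading jump strength
        if abs(E) >= 1:                       # exploration phase
            if np.random.rand() >= 0.5:
                xr = X[np.random.randint(N)]
                Xn[i] = xr - np.random.rand() * np.abs(xr - 2 * np.random.rand() * X[i])
            else:
                Xn[i] = (rabbit - x_avg) - np.random.rand() * (lb + np.random.rand() * (ub - lb))
        elif r5 >= 0.5 and abs(E) >= 0.5:     # soft besiege, Eq. (4)
            Xn[i] = (rabbit - X[i]) - E * np.abs(J * rabbit - X[i])
        elif r5 >= 0.5:                       # hard besiege, Eq. (5)
            Xn[i] = rabbit - E * np.abs(rabbit - X[i])
        else:                                 # besiege with progressive rapid dives
            base = X[i] if abs(E) >= 0.5 else x_avg
            Y = rabbit - E * np.abs(J * rabbit - base)
            Z = Y + np.random.rand(D) * levy(D)
            if fitness(Y) < f[i]:
                Xn[i] = Y
            elif fitness(Z) < f[i]:
                Xn[i] = Z
    return np.clip(Xn, lb, ub)
```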
Perturbation strategies
To further improve the local search ability and speed up the convergence of HHO, the logistic chaotic single-dimensional perturbation (LCSDP) strategy is introduced in this paper. Compared with random search, LCSDP covers the search space more evenly. Furthermore, it keeps the dimension information of the optimal solutions by searching for solutions disturbed in a single dimension [28]. Thus, LCSDP has good pseudorandomness, ergodicity, and sensitivity to initial values. Specifically, for each individual in S^hho(t), the value of a randomly selected decision variable i is remapped within its bounds [lb_i, ub_i] according to the chaotic sequence C(t), which is iteratively updated by the logistic map

C(t+1) = μ C(t) (1 − C(t)),

where μ is the logistic chaotic sequence parameter and the initial value C(1) is a random number in (0, 1). The complete flowchart of CMPMO-HHO is shown in Fig. 2.
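A minimal sketch of the perturbation is given below. The linear remapping of the chaotic value into [lb_i, ub_i] and the choice μ = 4 (the fully chaotic regime of the logistic map) are assumptions, not details taken from the paper; lb and ub are per-dimension bound arrays.

```python
import numpy as np

def lcsdp(pop, lb, ub, C0=None, mu=4.0):
    # perturb one randomly chosen decision variable of each individual
    # using the logistic map C(t+1) = mu * C(t) * (1 - C(t))
    C = np.random.rand() if C0 is None else C0    # C(1) random in (0, 1)
    perturbed = pop.copy()
    N, D = pop.shape
    for n in range(N):
        i = np.random.randint(D)                  # single random dimension
        C = mu * C * (1 - C)                      # advance the chaotic sequence
        # assumed remapping of the chaotic value into [lb_i, ub_i]
        perturbed[n, i] = lb[i] + C * (ub[i] - lb[i])
    return perturbed
```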
Algorithm complexity analysis of CMPMO-HHO in one iteration
We analyze the time complexity of CMPMO-HHO based on the CMPMO-HHO scheme presented in Algorithm 1. In summary, the overall CMPMO-HHO time complexity is O(4(2MN + N)N²).
Experimental settings
All algorithms in this paper run on MATLAB R2020b with an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz and the Windows 10 operating system. The experimental results of all the comparison algorithms are obtained with PlatEMO [39]. The size of each subpopulation is set to 100.
Benchmark functions and comparative algorithms
To evaluate the performance of CMPMO-HHO on MOOPs with only two or three objectives, 34 benchmark functions were selected from five popular benchmark suites on MOOPs, including five ZDT [56] functions, nine WFG [22] functions, seven DTLZ [16] functions, ten UF [52] functions, and three LSMOP [9] functions. The maximal number of function evaluations is 300 for the ZDT functions and 3000 for the other benchmark functions. Other details about the benchmark functions are presented in Table 2.
To evaluate the performance of CMPMO-HHO on MaOPs with 5 and 10 objectives, 19 benchmark functions, namely seven DTLZ functions, nine WFG functions, and three LSMOP functions, are used. The maximal number of evaluations is 3000. The number of objectives and dimensions follows the recommendations in [9,22,56]. Table 3 shows the detailed settings.
Moreover, to quantitatively evaluate the importance of the different strategies leveraged in the CMPMO/des framework, we separately evaluated the performances of the three variants of CMPMO-HHO, namely MOHHO/SP (multi-objective HHO algorithm with a single population), CMPMO-HHO/NoP (CMPMO-HHO without LCSDP), and CMPMO-HHO/SES (CMPMO-HHO with single elite selection), and CMPMO-GA, a CMPMO/des-based many-objective GA.
Performance metrics
In this paper, the inverted generational distance (IGD) [4] is adopted as the performance metric, as it can concurrently quantify the convergence and diversity of MOOP algorithms. The smaller the IGD value is, the better the convergence and distribution of the algorithm are. Assuming that an algorithm obtains a set A of non-dominated solutions and that a uniformly sampled reference point set P on the real PF is given, IGD(A, P) is calculated as

IGD(A, P) = (1/|P|) Σ_{p∈P} min_{a∈A} ‖p − a‖₂,

where |P| is the size of set P and ‖·‖₂ represents the Euclidean distance in objective space. Each benchmark function is tested 30 times independently to avoid contingency, and the average value (AVG.) and standard deviation (STD.) are taken as the final performance indicators. The best results are shown in bold.
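The metric itself is straightforward to compute; a minimal NumPy version is sketched below.

```python
import numpy as np

def igd(A, P):
    # A: obtained non-dominated objective vectors, shape (n, m)
    # P: uniformly sampled reference points on the true PF, shape (|P|, m)
    A, P = np.asarray(A), np.asarray(P)
    d = np.linalg.norm(P[:, None, :] - A[None, :, :], axis=2)  # |P| x n distances
    return d.min(axis=1).mean()   # mean nearest-neighbor distance over P
```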
Besides, the Wilcoxon signed-rank test [19] at the 0.05 significance level is conducted in this paper to show the statistically significant differences between CMPMO-HHO and other popular MOEAs and MOSIOAs on each benchmark function. The outcome of the Wilcoxon signed-rank test is expressed as a P value: there is a statistically significant difference between two algorithms if P < 0.05; otherwise there is no significant difference. Three symbols, namely +, −, and =, indicate that the result of CMPMO-HHO is significantly better than, worse than, or equivalent to the corresponding competitor, respectively. More specifically, if P ≥ 0.05, the symbol = is used; if P < 0.05 and the IGD value of CMPMO-HHO is smaller than that of the algorithm used for comparison, this is indicated by the symbol +, otherwise by the symbol −.

Experimental results and analysis

Tables 4, 5, and 6 present the IGD values of CMPMO-HHO and the comparative algorithms on MOOPs with two or three objectives and on MaOPs with 5 and 10 objectives, respectively. We ranked the algorithms according to the IGD values obtained on each test function. The results show that CMPMO-HHO ranks first on 15 of 34 functions (including six WFG functions, two ZDT functions, two DTLZ functions, four UF functions, and LSMOP2) and ranks in the top two on 22 of 34 functions, which indicates that CMPMO-HHO is quite competitive on MOOPs. For MaOPs, CMPMO-HHO performs the best on 6 WFG functions and 2 DTLZ functions with five objectives, and on 5 WFG functions, 2 DTLZ functions, and LSMOP2 with ten objectives. Notably, although CMPMO-HHO does not rank first on many of these benchmark functions, it ranks in the top two on 11 of all 19 test benchmark functions with five objectives, and on 9 benchmark functions with ten objectives.
We can also observe that, benefiting from the novel CMPMO/des framework, CMPMO-GA significantly outperforms NSGA-II/III, which confirms the effectiveness of the CMPMO/des framework. On the other hand, although CMPMO-GA always ranks close to CMPMO-HHO, it is inferior to CMPMO-HHO in most cases. This is mainly because HHO performs much better than GA due to its much more sophisticated migration mechanisms [1]. Moreover, the three variants of CMPMO-HHO are inferior to CMPMO-HHO, which implies that the multi-population mechanism, dual elite selection, and LCSDP can effectively improve the performance of the CMPMO/des framework. Table 7 summarizes the overall scores and the rankings of CMPMO-HHO and the comparative algorithms in terms of IGD values. The overall score of each algorithm was derived from its rankings on all benchmark functions (as shown in Tables 4, 5, and 6). Concretely, the first-ranked algorithm gets one score, the second gets two scores, and so on. The overall score of each algorithm is the sum of its scores on all benchmark functions. Finally, we rank the algorithms according to the overall score; the smaller the overall score, the higher the ranking. It can be found that CMPMO-HHO always ranks first in solving both MOOPs and MaOPs, which verifies that CMPMO-HHO has significantly high performance in optimizing multi-objective and many-objective problems. Thereafter, Wilcoxon signed-rank tests were performed to investigate whether CMPMO-HHO and the compared algorithms have statistically significant differences. The details are shown in Tables 8, 9, and 10 for the MOOP and MaOP (five and ten objectives) experiments, respectively. All three tables show that most of the P values of the Wilcoxon signed-rank test are less than 0.05, which demonstrates significant differences between CMPMO-HHO and the other algorithms in a statistical sense. The last row of each table (highlighted in bold) summarizes the number of instances of each significance case (+/=/−). It can be seen that the + symbols are always in the majority, indicating that CMPMO-HHO is almost always statistically superior to the other algorithms compared.
To visually demonstrate the differences between the PFs obtained by CMPMO-HHO and the real PFs, we draw comparisons of PFs in Fig. 3. Note that the PFs of MaOPs are difficult to visualize due to the high number of objectives and dimensions; therefore, only PFs for two and three objectives are presented in Fig. 3. The red diamonds represent the PF values obtained by CMPMO-HHO, and the black solid points represent the real PFs, which are provided by PlatEMO [39].
It can be observed that the PFs obtained by CMPMO-HHO and the real PFs are highly coincident and that the solutions found by CMPMO-HHO are uniformly distributed along the PFs for most of the benchmark functions. However, the PFs of UF4 and LSMOP2 obtained by CMPMO-HHO are well distributed but poorly converged. On the contrary, the PFs of DTLZ1, DTLZ7, UF1, UF8, UF9, and UF10 converge well to the real ones, but their distributions are not uniform enough. On the DTLZ3, UF3, UF5, UF6, LSMOP1, and LSMOP3 functions, the performance of CMPMO-HHO is not satisfactory in either distribution or convergence. Moreover, it should be noted that the real PF of WFG3 provided by PlatEMO [39] does not cover the whole PF, since the PF of WFG3 has a nondegenerate part as well as the intended degenerate part [23]. The authors of [23] derived the PF of WFG3, which coincides well with ours.
By analyzing the characteristics of the benchmark functions with relatively poor distribution or convergence, we find that most of them are multimodal or have irregular PFs, which indicates that the CMPMO-HHO algorithm needs to be improved in dealing with such kinds of problems in the future. Even so, we can observe that CMPMO-HHO not only ranks first overall in the tests on 34 MOOPs and 19 MaOPs (five and ten objectives, respectively), but also ranks first on benchmark problems that are multimodal or have irregular PFs, as summarized in Table 11, which strongly proves that CMPMO-HHO is considerably competitive. On the other hand, the fact that CMPMO-HHO does not achieve ideal PFs on all benchmark functions is also reasonable according to the no free lunch (NFL) theorem [45].

Fig. 3 Comparisons between the real PFs (provided by PlatEMO [39]) and the PFs obtained by CMPMO-HHO. † Note that the real PF of WFG3 does not cover the whole PF, since the PF of WFG3 has a nondegenerate part as well as the intended degenerate part [23]
Conclusion
This paper presents a swarm intelligence algorithm based on HHO, named CMPMO-HHO, to deal with multi-objective and many-objective problems. To ensure scalability over optimization objectives, CMPMO-HHO uses a novel cooperative multi-population framework named CMPMO/des. CMPMO/des includes four effective strategies, namely the one-to-one correspondence between the optimization objectives and the subpopulations, the global archive for information exchange and cooperation among subpopulations, the LCSDP strategy, and the dual elite selection mechanism based on fast non-dominated sorting and the reference point-based approach, which help CMPMO/des achieve considerably high performance in solution convergence and diversity.
We have conducted extensive comparative experiments on 34 multi-objective and 19 many-objective popular benchmark problems to evaluate the performance of CMPMO-HHO. The compared algorithms include 13 state-of-the-art MOEAs/MOSIOAs, three variants of CMPMO-HHO, and a CMPMO/des-based many-objective GA. The experimental results show that by taking advantage of the CMPMO/des framework, CMPMO-HHO achieves an amazing performance in solving multi-objective and many-objective problems.

Table 11 Rankings of all algorithms on multi-objective test problems that are multimodal or with irregular PFs | 2022-02-22T16:07:38.764Z | 2022-02-20T00:00:00.000 | {
"year": 2022,
"sha1": "4fbf71f9eb2a13f3899b2a4056897f848a6a1ce3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40747-022-00670-4.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "7bf946f844a7a0d9ccb804207c763b874629c5ee",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
16989852 | pes2o/s2orc | v3-fos-license | A Prototype System for Measuring Microwave Frequency Reflections from the Breast
Microwave imaging of the breast is of interest for monitoring breast health, and approaches to active microwave imaging include tomography and radar-based methods. While the literature contains a growing body of work related to microwave breast imaging, there are only a few prototype systems that have been used to collect data from humans. In this paper, a prototype system for monostatic radar-based imaging that has been used in an initial study measuring reflections from volunteers is discussed. The performance of the system is explored by examining the mechanical positioning of the sensor, as well as the microwave measurement sensitivity. To gain insight into the measurement of reflected signals, simulations and measurements of a simple phantom are compared and discussed in relation to system sensitivity. Finally, a successful scan of a volunteer is described.
Introduction
Microwave imaging has been proposed as an alternative breast imaging modality [1]. The basic premise is that different tissues in the breast have different electromagnetic properties, and these differences may be exploited to create images. General approaches to active microwave imaging include microwave tomography [2] and radar-based methods [3][4][5]. Microwave tomography involves measuring signals transmitted through the breast and reconstructing images by matching measured data with signals obtained from simulated models containing iteratively updated property estimates. Microwave tomography has been tested with simulations and experimental measurements of phantoms (e.g., [6]) and simulations of realistic breast models [7]. Moreover, a research group at Dartmouth College has performed extensive patient studies with prototype systems. The resulting images have demonstrated average microwave frequency properties that increase with breast density [8], as well as agreement between features detected on microwave images and known clinical histories [9]. Radar-based microwave techniques create images by processing reflections of wideband or ultrawideband (UWB) signals from the breast. These images indicate the presence and location of significantly scattering objects. Testing of radar-based approaches has involved simulations with realistic breast models [3,10], testing with phantoms [5,11,12], and early-stage clinical investigations [13]. To date, a group at Bristol University has reported imaging of patients using a multistatic radar system. Therefore, in spite of the growing body of literature related to microwave breast imaging, there are very few reports of work with patients or volunteers. This likely reflects the significant technical challenges involved in sensor design and implementation, measurement hardware, and the development of patient interfaces.
In this paper, we describe a prototype system that is based on a monostatic radar approach and has been termed the TSAR (tissue sensing adaptive radar) method. The TSAR prototype system differs from previously reported prototype systems for microwave imaging in that a single antenna is scanned around the breast in order to collect data. A multistatic system inherently collects more information than its monostatic counterpart. On the other hand, a single-sensor method can be designed to produce a focused beam, increasing the reflected power from small features. Given the potentially high attenuation in breast tissues, this is likely beneficial for sensing smaller malignant regions. In addition, a monostatic system allows more relaxed requirements for the UWB sensor. A larger sensor permits using lower frequencies without limitations due to mutual coupling. The ability to place the sensor at an infinite number of locations around the breast is also very attractive in terms of adaptability to patients, as well as for image reconstruction performance. However, these advantages come at the cost of a more complex positioning system and longer repositioning time compared to electronically switched antennas as in [13]. In order to assess the performance of our prototype system, a study is performed of the mechanical sensor positioning, as well as of the microwave measurement sensitivity and perturbation. This provides insight into the capabilities and limitations of the system. Next, we compare simulations and measurements of a simple phantom. While both simulations and experimental work have previously been carried out for tomography and radar-based imaging, only a few papers directly compare simulations and measurements of phantoms (e.g., [6,14]). Our phantom represents the shape of the breast in a simplified way and consists of one material with an inclusion of a different material. Although the properties of the model differ from those of breast tissues, the phantom has stability in properties and shape that permits evaluation of the repeatability of results. In addition, the reflections from the phantom are interpreted relative to the system sensitivity. After validation, the prototype system is used to collect reflections from volunteers. To gain insight into these measurements, comparison with simulations of volunteer-specific breast models is attempted.
Prototype System and Procedure
2.1. System Description. The TSAR prototype system is shown in Figure 1. The prototype consists of a padded bed placed over a cylindrical tank filled with canola oil. The woman to be scanned lies prone on the bed, and a hole in the top of the bed permits one breast to extend into the tank.
The cylindrical tank is filled with canola oil to improve the matching between the breast skin and the sensor attached to a positioning arm. The canola oil exhibits a relative permittivity of 2.5 with a conductivity below 0.04 S/m up to 12 GHz. A laser is also mounted to the positioning arm to record the breast outline. To scan the sensor around the breast, the arm moves vertically and the entire tank rotates. Dimensions of the tank and hole as well as antenna location are provided in Figure 2. The scanning region in the vertical (z) direction spans from 24 mm to 141 mm below the top of the lid. The circular opening in the lid has a diameter of 130 mm while the tip of the sensor is located 70 mm away from the center of the opening to avoid contact with the breast skin. To monitor the scan procedure, a camera is mounted on the side of the tank and transmits images to the operator.
Microwave measurements are collected with a custom antenna. The antenna utilized in this work is a balanced antipodal Vivaldi antenna with a director (BAVA-D) [15]. This antenna has a bandwidth (S11 better than −10 dB) from 2.4 to 18 GHz. The director narrows the beam of the antenna compared to a standard BAVA design, thus focusing more energy into the breast. Measurements are acquired with a vector network analyzer (VNA) (8722ES, Agilent Technologies, Palo Alto, CA, USA). The antenna is connected to the VNA via a 3 m long cable, and a guiding system helps to move the cable in a reproducible way. The cable guiding system is indicated in Figure 1. The system is calibrated at the end of the cable where the antenna is connected.
Measurements are taken at 1601 points over the frequency range from 50 MHz to 15 GHz with a port power of −5 dBm. As discussed in Section 3, an intermediate frequency (IF) bandwidth of 1 kHz and averaging over 3 frequency sweeps are used to reduce the system noise floor. The resulting data are transformed into the time domain after weighting with the spectrum of the differentiated Gaussian pulse given by

v(t) = V_0 (t − t_0) exp(−(t − t_0)²/τ²),

where V_0 is used to adjust the amplitude of the pulse, τ = 62.5 ps, and t_0 = 4τ.
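As an illustration of this weighting step, the sketch below builds the pulse spectrum on the VNA frequency grid and applies it to a measured reflection sweep. The closed-form pulse expression, the time grid, and the array names are assumptions consistent with the parameters quoted above, not a reproduction of the system software.

```python
import numpy as np

tau, V0 = 62.5e-12, 1.0
t0 = 4 * tau
f = np.linspace(50e6, 15e9, 1601)               # VNA frequency grid (1601 points)
dt = 1.0 / (4 * f[-1])                           # oversampled time step
t = np.arange(0.0, 16 * t0, dt)
pulse = V0 * (t - t0) * np.exp(-((t - t0) / tau) ** 2)   # differentiated Gaussian
spec = np.fft.rfft(pulse)
f_fft = np.fft.rfftfreq(t.size, dt)
weight = np.interp(f, f_fft, np.abs(spec))       # pulse spectrum on the VNA grid

# hypothetical use: s11 holds the 1601 calibrated reflection coefficients
# weighted = s11 * weight
# time_signal = np.fft.irfft(weighted)           # reflected pulse versus time
```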
Volunteer Scan Procedures.
We have scanned several volunteers with the prototype system (Study No. 21859, as approved by the University of Calgary Conjoint Health Research Ethics Board). Our study involves a TSAR scan of one breast, as well as a scan of both breasts with magnetic resonance (MR) imaging. During a TSAR scan, the antenna is physically moved to a number of locations encircling the breast at various elevations (Figure 3). Data collected at the same elevation are termed a row. For a complete scan, data are collected at a number of rows. For the volunteer scan, the number of rows, the separation between rows, and the number of antenna locations in a row are initially estimated with the MR images, then updated after observing digital images of the breast in the TSAR scanner. Our experience indicates that adjustments to TSAR scan patterns designed with MR images are necessary to compensate for the changes in breast shape and extent due to the flotation of the breast in oil. We note that the rotation of the tank and the vertical movement of the arm used to scan the antenna around the breast are both automated and actuated by step motors, which are controlled by custom software. The process of moving the sensors and collecting measurements takes less than 30 minutes for one breast scanned at up to 200 antenna locations. The reflections are calibrated by performing two sets of measurements and then using responses from known objects to orient reflections in time. First, a scan is collected with the volunteer positioned in the scanner and another scan is acquired with an empty tank. To initially calibrate the data, the signals recorded with the empty tank are subtracted from the signals recorded with the volunteer present. Identical antenna locations are used for both scans. Next, reflections from metal plates placed at two known distances from the antenna are collected. The differences in the time of arrival of the two reflections are used to confirm the dielectric constant of the immersion medium. The known locations of the plates are also used to identify the reflection from the antenna aperture in the signals. The aperture reflection is then located in time in order to identify distances of objects relative to the end of the antenna.
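A sketch of this two-step calibration is given below; the array names, the index lookup, and the use of a circular shift to re-reference time zero are illustrative assumptions.

```python
import numpy as np

def calibrate(volunteer_scan, empty_scan, t, t_aperture):
    # subtract the empty-tank reference recorded at identical antenna locations
    cal = volunteer_scan - empty_scan
    # re-reference time zero to the antenna aperture, located beforehand
    # from the metal-plate reflections
    n0 = np.searchsorted(t, t_aperture)
    return np.roll(cal, -n0, axis=-1)

# the plate pair also checks the immersion medium: with plates separated by a
# known distance d, v = 2 * d / (t2 - t1) and eps_r = (3e8 / v) ** 2
```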
Finally, the reflected signals are used to create images. First, the dominant reflections between the immersion liquid and the object (e.g., the oil/skin interface) are removed by approximating the reflections at a target antenna. For simple models such as the hemisphere used later in this paper, it is sufficient to use straightforward methods for this approximation; in this case, the reflections recorded at antennas located in the same row are time-shifted and scaled to match the target signal [16]. More sophisticated algorithms are typically required to deal with more complex scenarios. Next, 3D images are formed by scanning the focal point through the imaging region and using a time-shift-and-sum beamformer to identify components of the reflections at appropriate antennas that originate from the same physical location [16]. An estimate of the surface of the phantom is incorporated into this focusing procedure [17].
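A bare-bones version of such a beamformer is sketched below; it assumes a single effective propagation speed instead of the surface-based path estimate used in [17], and all names are illustrative.

```python
import numpy as np

def das_image(signals, antenna_pos, focal_points, t, v):
    # signals: list of calibrated time-domain traces, one per antenna
    # antenna_pos / focal_points: arrays of 3-D coordinates
    dt = t[1] - t[0]
    image = np.zeros(len(focal_points))
    for k, p in enumerate(focal_points):
        total = 0.0
        for a, s in zip(antenna_pos, signals):
            delay = 2.0 * np.linalg.norm(p - a) / v   # monostatic round trip
            idx = int(round(delay / dt))
            if idx < len(s):
                total += s[idx]                        # coherent summation
        image[k] = total ** 2                          # focused energy estimate
    return image
```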
System Performance and Validation
As evident from the description in Section 2, the TSAR measurement system is rather complex. Many aspects of the system can alter the measurement quality, which in turn will influence the quality of the reconstructed images. We consider 3 different types of effects: (1) the positioning performance, (2) the microwave measurement sensitivity, and (3) perturbation. In this section, these different aspects are assessed or validated in order to define the overall system performance.
Positioning Performance.
Correct positioning of the sensor is critical in two aspects. First, good mechanical precision is required for repeatability of measurements. As described in Section 2, each scan is calibrated with reference measurements collected during a scan with the exact same pattern but without the volunteer or patient present (empty tank). This operation removes the unwanted effects of the environment (e.g., reflections from the tank) from the measured signals. Therefore, good positioning repeatability is needed to guarantee that the unwanted effects are reproduced between the two scans. Second, good mechanical accuracy is necessary for proper image reconstruction, as the signals are spatially focused based on the antenna positions. Good agreement between the desired and actual antenna positions in the scan is therefore required. The positioning precision and accuracy are related to the mechanical play and the ability to achieve the correct displacement; both of these parameters will be evaluated. Two independent axes are used to bring the sensor into position, namely the azimuth (°, tank rotation) and the elevation (mm, arm movement). Specifications of ±0.1° and ±0.1 mm for the displacement tolerance, with a mechanical play of at most 0.1° and 0.1 mm, have been defined for each axis. These correspond to no more than 0.6 mm of error when identifying focal points in the worst-case scenario.
In order to validate these requirements, specific movement sequences are executed and the expected positions are compared with measurements of the actual positions. For the elevation axis, the position is measured with a digital caliper attached to the moving arm. The assessment of the azimuth position is achieved by measuring the displacement at the outer edge of the rotating tank. Given the very large external diameter of the tank (520 mm), small angular displacements translate into large displacements at its outer edge. Note that the external diameter also includes a lip placed around the tank to collect excess oil, which contributes to the large difference when compared to the inside diameter given in Figure 2. This technique allows us to determine whether the azimuth movement passed or failed the specification; however, no numerical values are extracted.
For the elevation axis, the validation shows that the displacement error is within tolerance with a maximum of ±0.07 mm and an average of ±0.04 mm. On the other hand, the mechanical play of the elevation axis is, in general, very close to the maximum allowed value and exceeded the limit in one of the test iterations. Therefore, an automated compensation of the mechanical play is implemented in the software used to control the TSAR prototype, showing significant improvement. The measured mechanical play results with and without software compensation are shown in Table 1.
All the azimuth tests passed the specification requirements successfully. However, movement with a resolution of 0.25° creates a consistent displacement error that accumulates and creates a larger positioning error. This behavior naturally occurs due to the intrinsic angular resolution of the step motor, and it is avoided by allowing displacements with a minimum resolution of 0.5°.
Microwave Measurement Sensitivity.
Since the reflections from internal breast tissues are expected to be very weak, good measurement sensitivity is a key aspect of the system. As described in Section 2, the calibrated data result from a subtraction of two successive scans: one with the volunteer present and one with an empty tank. Therefore, the sensitivity can be defined as the smallest signal that can be recovered after the subtraction operation. To assess the sensitivity of the microwave measurement system, a broadband load standard (Agilent 85052D) is connected instead of the antenna and two measured reflected signals are subtracted. Smaller differences correspond to better sensitivity.
The sensitivity is directly influenced by the measurement noise floor of the VNA receiver. Reducing the IF bandwidth and averaging a number of measurements can significantly improve the noise level. The smallest IF bandwidth with a large amount of averaging would be ideal for sensitivity; however, these choices considerably increase the measurement time to impractical values. The maximum scan time for TSAR is set to 30 minutes for 200 measurements. Accounting for mechanical displacement time, the microwave measurement at each location has to be completed in 8 seconds, for a total of 26.6 minutes dedicated to the RF measurement. An IF bandwidth of 1000 Hz with averaging of 3 signals shows the best sensitivity among the combinations that fit the time criteria. Figure 4 shows the sensitivity that is achieved with these settings and the broadband load attached. A sensitivity below −90 dB is achieved over almost the entire frequency band. The phase variation is below 0.2°, with the exception of the upper limit of the frequency band. This result can be considered the best sensitivity that the system can achieve, as the two measurements considered are collected in an ideal scenario in which no time elapsed and nothing moved between measurements.
The stability of the reflection measurement with respect to time also influences the sensitivity. A 30-minute span occurs between the signals measured during the volunteer scan and the calibration scan. To evaluate the effect of this time delay on the sensitivity, 200 successive measurements of the broadband load are collected for two consecutive iterations, replicating the same time frame as a volunteer scan. As in the previous case, the system does not move. Figure 5 shows the 200 corresponding sensitivity curves, which sit mostly below −80 dB except at the extremes of the frequency band. The corresponding phase variation is below 0.5°, with an increase towards the end of the spectrum. The correlation between the phase variation and the sensitivity is obvious from Figure 5. Overall we observe that, due to the drift inherent in the VNA, a 30-minute time span between measurements decreases the microwave measurement sensitivity by roughly 10 dB.
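In practice, this amounts to differencing two complex reflection sweeps; a small sketch is shown below, with the numerical floor guard being an implementation detail.

```python
import numpy as np

def sensitivity(s11_a, s11_b):
    # magnitude of the residual after subtracting two sweeps, in dB,
    # together with the phase variation between them, in degrees
    mag_db = 20 * np.log10(np.maximum(np.abs(s11_a - s11_b), 1e-15))
    phase_deg = np.degrees(np.angle(s11_a * np.conj(s11_b)))
    return mag_db, phase_deg
```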
Microwave Measurement Perturbation Immunity.
A perturbation is defined as any phenomenon (internal or external) that induces unpredictable interference in the measured signals and thus affects the measurement sensitivity. A number of perturbation sources are identified, and the solutions to mitigate their effects are described.
The first perturbation arises from the change of the cable response. As the antenna is moved to various locations, the cable shape changes, which predominantly affects its phase response. To reduce the negative effect on the sensitivity, the guiding system shown in Figure 1 has been implemented.
This system helps to ensure that the cable position is repeatable when the antenna is positioned and repositioned at a certain location. Identical cable positions translate to similar electrical responses that can be removed during the calibration process. The performance of this technique is illustrated in Figure 6, which shows the sensitivity calculated when the system is moved through two full TSAR scans (200 positions) with the broadband load attached instead of the antenna. When comparing with the corresponding static sensitivity (Figure 5), we observe only a slight increase of the phase variation, which translates to a fairly limited degradation of the sensitivity. An additional set of results is generated without any cable compensation by taking the difference between the 200 measurements and one selected measurement from the second scan. In this way, the cable position is different for each of the measurements in a given pair. For this scenario, the sensitivity sits at around −70 dB, so we estimate that the cable guiding system improves the sensitivity by about 10 dB.
The other perturbations are related to the signals detected by the antenna. The reflections from the breast are of interest, while reflections from other objects or sources can be subtracted during the calibration process as long as they are stable between measurements. However, any unpredictable signals that cannot be removed with the calibration process are considered perturbations and need to be minimized. The unwanted signal sources have been classified into three groups: (a) lab environment reflections (room, equipment, people, etc.), (b) immersion liquid movement, and (c) general electromagnetic smog. Different mechanisms are implemented to alleviate these perturbations. First, the lab environment reflections (a) are easily removed using the time gating implemented in the VNA. The measured data are gated between 0 and 3.6 ns in order to remove reflections that originate from outside of the measurement tank. The immersion liquid movement (b) is induced by the movement of the tank itself but predominantly by the fluctuation of the tank volume due to the moving arm displacement. As the volume changes, the liquid level changes and creates reflections that cannot be replicated, since these reflections are also affected by the volume of the breast itself. To minimize this effect, the tank lid is designed with additional material added around the hole through which the breast extends (Figure 7). This keeps the liquid level constant in the vicinity of the antenna aperture, while allowing fluctuation in liquid level behind the antenna, where radiation is an order of magnitude lower. This additional region consists of a polycarbonate shell filled with HR10 absorber (Emerson and Cuming Microwave Products, Randolph, MA, USA).
Finally, the electromagnetic smog (c) is generated by electrical apparatus around the lab and by the outside world. To increase the electromagnetic immunity, absorbers are placed at strategic locations around the measurement tank in conjunction with shielding material. Figure 8 shows the typical sensitivity of the TSAR prototype when the previously mentioned techniques are in place, the antenna is attached, and the immersion liquid is present. When compared to Figure 6, a significant decrease in magnitude sensitivity is noted, resulting in a sensitivity between −50 and −60 dB, while the phase variation increases slightly. The very large peaks in the phase variation happen at resonances where the phase changes drastically while being difficult to resolve by the VNA due to the weakness of the reflected signal. Overall, a sensitivity reduction of 30 dB is observed. As the reflection coefficients of the broadband matched load and antenna are around −30 and −10 dB, respectively, the phase variation intrinsically has a greater impact on the sensitivity with the antenna attached. However, since the BAVA-D ringing is extremely small, the increase in reflection is mostly located in the antenna structure, as shown by the time domain representation in Figure 9. The antenna structure ends at approximately 1.5 ns in time, and only the components of the signal beyond this point are significant for imaging purposes. We use a Tukey window, shown in Figure 9, to evaluate the sensitivity of the signal occurring after the antenna structure. As shown in Figure 9, the sensitivity sits overall between −70 and −80 dB. The lower frequencies are ignored since the antenna does not radiate well below 2 GHz. Based on these values, we assess that 10 dB of sensitivity are lost when the antenna is attached instead of the load (i.e., compared to Figure 6).
Overall, the TSAR prototype may be expected to have a reflection sensitivity between −70 and −80 dB. The VNA itself demonstrates a sensitivity level of −90 dB and is therefore more than capable of measuring signals greater than the reflection sensitivity. Moreover, numerous technical challenges arise when consistent performance needs to be maintained while scanning around a cylindrical volume.
The TSAR system has demonstrated excellent mechanical accuracy and repeatability, and the modifications to the prototype system aimed at ensuring measurement sensitivity appear to enhance performance. This has resulted in a prototype system that demonstrates acceptable performance for our application.
Hemispherical Breast Model
The basic performance of the prototype system has been examined; however, it is also of interest to validate reflections from test objects by comparing simulated and measured results. First, the hemispherical breast model used for this investigation is described. Reflection data are analyzed in relation to the previously presented performance metrics. Images created with simulated and measured data are also discussed.
The model used for this work has a relatively simple shape and composition and is described in detail in [18]. The model consists of a cylindrical section (diameter of 10 cm) attached to a hemispherical section with a radius of 5 cm. A series of rings is located on the hemisphere in an attempt to mimic the shape of the nipple. The model is made of a low-loss dielectric material with a relative permittivity of 15. This phantom contains a cylindrical inclusion consisting of a Teflon rod of 7.9 mm diameter and 19.4 mm length. The inclusion is located in the hemispherical region at a radial distance of 25 mm from the centre of the model. The model is placed in the scanner, and the BAVA-D antenna is used to obtain measurements. For a full scan of the model, the antenna is moved to 7 rows (vertical locations) separated by 1 cm, with 20 locations per row. A second scan of the empty tank is performed for calibration purposes. The antenna locations are the same as those used in the scan of the phantom. The reflections recorded with an empty tank are subtracted from those recorded with the model present. Reflections collected at the row of antennas located at the level of the inclusion are shown in Figure 10. Dominant reflections are expected from the oil/phantom interface and are shown to be very similar for one row of measurements. The response from the inclusion is also evident after 2 ns for antennas located closer to this object.
Next, simulations are performed in order to gain further insight into the measured data. The detailed simulation model includes aspects of the system that are expected to influence the reflected signals. Specifically, the model includes a replica of the breast phantom, a BAVA-D antenna, the top of the tank, and the immersion liquid (Figure 11).
Simulations are performed using SEMCAD (SPEAG, Zurich, Switzerland), which uses a finite-difference time-domain (FDTD) solver, and the antenna is excited with the UWB pulse described in (1). Results obtained with the breast phantom are shown in Figure 12 for one row of antennas (also located at the level of the inclusion in the phantom). Similar to Figure 10, dominant reflections are expected from the oil/phantom interface and are shown to be very similar for one row of simulations.
To investigate the similarity between the dominant reflections with measured and simulated data, we apply Tukey windows to isolate the first reflection (mean extent of 0.83 ns and positioned relative to the maximum absolute response in each signal). Correlations are then computed between these windowed reflections for the data shown in Figures 10 and 12. Next, we examine and compare the later-time responses from the simulation and measurement models by again using a Tukey window to isolate reflections occurring after the dominant reflection. Figure 13 shows results for an antenna located the closest to the inclusion, while Figure 14 shows results for an antenna at the same location but without any inclusion present in the breast model. We note that the simulated and measured data are in good agreement for the case containing the inclusion, as both time and frequency domain results are similar. When the inclusion is present, a reflection of about −40 dB is reached, which is easily detected given the sensitivity of our measurement system. Without an inclusion present (Figure 14), a lower reflection is noted in the later-time response. On average, the reflected signal without an inclusion present is 7 dB lower for the measured data and 11 dB lower for the simulated data. The signal magnitudes in Figure 14(b) are very similar between simulations and measurements while still within the sensitivity of the system. This suggests that part of these smaller responses are indeed components of the reflected signals, likely originating from subtle sources such as the late-time response from the interface between oil and the model. Therefore, the TSAR prototype system demonstrates the ability to accurately record these fine details in the presence of larger reflected signals.
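The windowed-correlation comparison can be sketched as follows; the window length conversion, the centering on the peak, and the function name are our own assumptions rather than the authors' exact processing.

```python
# Sketch of the Tukey-windowed comparison: isolate the dominant reflection
# (~0.83 ns extent, centered on the maximum absolute response) in a measured
# and a simulated trace, then correlate the windowed segments.
import numpy as np
from scipy.signal.windows import tukey

def windowed_correlation(meas, sim, dt, extent=0.83e-9):
    n_win = int(round(extent / dt))
    win = tukey(n_win, alpha=0.5)

    def isolate(x):
        c = int(np.argmax(np.abs(x)))        # window positioned on the peak
        lo = max(c - n_win // 2, 0)
        hi = min(lo + n_win, len(x))
        seg = np.zeros_like(x)
        seg[lo:hi] = x[lo:hi] * win[:hi - lo]
        return seg

    return np.corrcoef(isolate(meas), isolate(sim))[0, 1]
```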
Finally, the simulated and measured data are used to create images. Phantoms with and without the inclusion are imaged, and results obtained for measured data are shown in Figure 15. Similar results are obtained for simulated data; however, these images are not shown as the results appear very similar to those in Figure 15. The inclusion is easily detected and localized, and the maximum response of the inclusion is located 23 mm from the center of the model. The location error likely results from challenges in orienting reflections precisely in time, as well as the discrete nature of the imaging procedure. The maximum response of the inclusion is compared to the response at the same location in the inclusion-free image. For measured data, the response with the inclusion is 14.1 dB greater than the inclusion-free case, demonstrating the enhancement of the inclusion response achieved through both reduction of common reflections and coherent summation via the focusing algorithm. For simulated data, the ratio is 47.4 dB, demonstrating the higher similarity between the simulated reflections, as well as the inherent differences between measurements and simulations.
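As a generic illustration of the coherent summation mentioned above, the sketch below implements a plain delay-and-sum beamformer for monostatic data. It is not the authors' focusing algorithm: the single uniform propagation speed, the straight-ray delays, and all names are simplifying assumptions.

```python
# Generic delay-and-sum focusing sketch for monostatic data: each voxel
# accumulates the calibrated traces sampled at their round-trip delays,
# and the energy of the coherent sum forms the image.
import numpy as np

def das_image(signals, antenna_pos, voxels, t, v_prop):
    """signals: (n_ant, n_t) time traces; antenna_pos: (n_ant, 3) in metres;
    voxels: (n_vox, 3); t: (n_t,) time axis in seconds; v_prop: speed."""
    image = np.zeros(len(voxels))
    for i, r in enumerate(voxels):
        delays = 2.0 * np.linalg.norm(antenna_pos - r, axis=1) / v_prop
        coherent = sum(np.interp(tau, t, s) for s, tau in zip(signals, delays))
        image[i] = coherent ** 2             # energy of the coherent sum
    return image
```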
Overall, the investigation of the breast model indicates good agreement between simulations and measurements, which validates the accuracy of our measurements. The response from the inclusion is easily measured given the sensitivity of our system, and images clearly detect and localize the inclusion.
Initial Measurements with a Volunteer
The work with the hemispherical model provides an assessment of the similarities between simulated and measured data for the TSAR prototype. These results suggest that simulations of a realistic breast model may provide a means to interpret measured reflections from human volunteers, as the dominant reflections are expected to be similar for measurements and simulations. A detailed analysis of a volunteer study is performed, using TSAR and MR scans. A volunteer is scanned with the TSAR prototype using the scan pattern presented in Table 2 (note that the origin of the vertical axis is coincident with the bottom of the lid) and measurement parameters discussed in Section 2. MR images are collected with a 1.5 Tesla Siemens Sonata MR Scanner and breast coil. The scanning sequence is T1-weighted (Gradient Echo VIBE with variant SP/OSP). With this sequence, fat is suppressed and glandular tissue has higher pixel intensity in images. The pixel size is 0.4297 mm × 0.4297 mm × 1.2 mm, and 112 images are collected for this volunteer.
To permit comparison of simulated and measured data, the MR images are translated into a model suitable for use with SEMCAD. Mapping pixel intensity in MR images to electromagnetic property values involves several approximations, and the procedure used to create the breast model follows that described in [10], with the breast interior represented with 16 tissues. A cross-section of the realistic model used in simulations is shown in Figure 16. The MR and TSAR scans are both collected with the volunteer in the prone position; however, the extent and shape of the breast differ when comparing the two systems. The key difference is that the breast also floats in the oil used as the immersion liquid in the TSAR scanner. To compensate for this effect, the voxel size in the z-direction (Figure 16) is reduced from 0.4297 to 0.36 mm. To approximate the locations at which the measurements are collected, the nipple is used as a landmark and we assume that, at the antenna row closest to the top of the tank, the breast is centered in the scanner. Specifically, the location of the row of antennas closest to the nipple is determined from digital images collected during the TSAR scan. This information is used to position the antennas in simulation, and the scan pattern described in Table 2 is replicated. Reflections from the breast model are simulated using the pulse in (1).
The measured data from the volunteer are compared with simulations of the volunteer-specific model. Figure 17 shows normalized reflections from a simulation of the compressed breast and the corresponding experimental measurement. Figure 17 shows that the signals are in reasonable agreement, with differences likely resulting from the fact that the simulated skin is modeled as a 2.14 mm layer, while the thickness of the skin approximated from the MR images varies from roughly 1 mm upward. Reasonable agreement is observed for the majority of antenna locations, as confirmed by calculating the correlation between the measured and simulated signals. For 116 out of 120 signals, the correlation is 0.9 or better, demonstrating the similarity between measured and simulated skin reflections recorded as the antenna is scanned around the breast. The outliers likely originate from areas of the model where skin thicknesses are significantly different when compared to the actual skin thickness of the volunteer. Therefore, the TSAR prototype is capable of measuring reflections from volunteers, and comparison of measurements and simulations suggests that the measured reflections are reasonable. However, detailed analysis of later-time reflections is not considered, as numerous differences between the model and volunteer are present (e.g., breast shape differs from MR to TSAR and antenna locations are approximated). This makes the comparison of small later-time reflections extremely challenging.
Conclusions
In this paper, a prototype system for monostatic radar-based imaging of the breast is described. This system scans a single UWB antenna around the breast in order to collect data, therefore differing from prototype systems for multistatic radar-based imaging and tomography. The paper first focuses on evaluating the performance of the system, as this is key for gaining insight into the capabilities and limitations of the prototype. For example, the motion of the sensor impacts the system performance, so the accuracy and repeatability of sensor positioning are assessed, showing minimal errors. Microwave measurement sensitivity is defined as the difference between two reflection measurements and is used to examine the effects of time-delay between measurements, system motion, and cable flex. Differences in measurements with a broadband load attached show that time delay and motion do degrade the sensitivity. By controlling cable positioning, improving measurement environment repeatability, and applying techniques such as time-gating the reflections, the microwave measurement sensitivity during the TSAR scan is assessed to be between −70 and −80 dB. In addition, the metrics examined appear to be informative and may be used to evaluate performance of monostatic radar-based imaging systems.
Once the system performance is evaluated, simulations and measurements of a simple phantom are compared. Although much work with both simulations and measurements has been reported for microwave imaging systems, there are only a few reports directly comparing these results. Both early and late-time reflections recorded from a simple phantom show very good agreement. Moreover, reflections from homogeneous phantoms are compared with reflections from phantoms containing inclusions, demonstrating that the response of the inclusion is easily detected given the sensitivity of the system. In addition, the measurement of the weaker later-time reflections from the phantom correlate with simulated results, bringing confidence to the measurement accuracy. The resulting images indicate the inclusion is easily detected and localized. Finally, a scan of a volunteer is described and analysed. In order to interpret the reflections, a volunteer-specific breast model is created. The early-time reflections in simulations and measurements are in excellent agreement, given the known differences between the volunteer and model. This provides confidence that the measured signals correspond to reflections from the breast tissues.
Measurement perturbations due to breast movement, induced by volunteer movement or by potential turbulence during sensor displacement, were not considered in this paper. Given the length of the scan time (30 minutes), patient movement is expected. However, considering the resolution of a biomedical microwave imaging system (subcentimeter scale), small movements are not expected to significantly affect image quality. For comparison, breast MRI can take up to 40 minutes while achieving image resolution at the millimeter scale. It is also important to observe that for both modalities the patients lie in a prone position with the chest wall resting on the breast coil or the measurement tank lid. In this configuration, movement occurring during patient breathing has only a limited impact on the breast position, as the breasts do not significantly move relative to the chest wall. Based on the volunteers scanned so far (12), no significant breast movements have been observed between the digital images recorded at each antenna position. Breast movement during antenna displacement or while the VNA is sweeping could not be assessed visually. However, the good correlation between the measured signals and their simulated counterparts using the patient-specific model suggests that movement during the VNA sweep is minimal.
Future work includes improving the agreement between the simulated and measured reflections from volunteers and patients, especially the later-time responses. For example, the laser surface measurement of the breast may be used to more accurately deform the MR-based breast model. Combined with knowledge of microwave measurement sensitivity, simulations of the realistic breast models may be used to gain insight into the ability to detect a range of tumors located at different locations in breasts containing a variety of tissue distributions.
"year": 2012,
"sha1": "3e13930068fb6366b3b6371f226662447f21a0d9",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijbi/2012/851234.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3c1a5cda82346463e6ccdf12de4afeaaaa9b34e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
The Relationship Between Instagram Social Media Intensity and Consumptive Behavior of Fashion Products Among Early Adulthood Women
This study aims to determine whether there is a relationship between the intensity of Instagram social media use and the consumptive behavior toward fashion products in early adulthood. The method used is a quantitative approach with a correlational design to examine the relationship between variables. Sampling is non-probability, using a purposive sampling method. Data were collected online via Google Forms. This study involved 381 female respondents in the age range of 20 to 30 years who are active users of Instagram social media and who have an income, either an allowance or a salary. Data processing in this study uses the SPSS for Windows version 20.0.0 program. The tests were conducted using the nonparametric Spearman's rho technique because the normality test showed that the data were not normally distributed. The results of this study indicate that there is no significant relationship between the intensity of Instagram social media use and the consumptive behavior toward fashion products among early adult women, with r = 0.085 and p = 0.096.
INTRODUCTION
The use of technology and information is increasing rapidly at this time. The emergence of the internet was the beginning of the development of social media. Through the internet, the public can more easily obtain the information they need. The presence of the internet in the 1990s supported the development of digital communication and information, so that the digital world has now become a new lifestyle around the world, including in Indonesia. Data reported by APJII [1] (Indonesian Internet Network User Association) in 2018 stated that Indonesia has a population of 264.16 million, of which 171.7 million are internet users, a number that continues to increase compared to the previous year. Java is the island with the highest number of internet users compared to other islands, with West Java leading at 16.6% (APJII, 2019). Social networking sites, or social media, are places where everyone can communicate with each other without having to meet. Social media provides a platform where people can exchange and obtain information from various parts of the world. The emergence of social media lets people see the various activities of other people, even those they have never met or do not know [2]. Communication on these application-based technologies takes a wide variety of forms, the best known of which are Facebook, WhatsApp, Twitter, Instagram, Line, and so on. Zarella [3] also states that social media has become a new media paradigm in the marketing industry, and that each social media platform has its own characteristic content and audience. Instagram is one of the most commonly used social media applications. Its existence attracts the attention of all people; not surprisingly, this platform is also one of the most influential social media applications in everyday life. Its users are very diverse, ranging from adults and parents to adolescents and children. The main function of Instagram is to provide facilities for its users to post visual content in the form of photos or videos. As identified, image-based social media is becoming increasingly popular among young users, especially adolescents and young adults [4]. Research from NapoleonCat [5] notes that in January 2020 there were 62,230,000 people in Indonesia using Instagram, which is 22.7% of the entire population. The majority of users are women (51%), with the largest group aged 18 to 24 years, or around 23,000,000 people. This age marks the transition from adolescence to adulthood, referred to as early adulthood. It is a period in which individuals develop themselves by establishing broader social relationships, a need that is useful for exchanging information, sharing experiences, or collaborating on certain projects or plans [6]. Individuals now in early adulthood belong to the generation that grew up in the era of internet and digital advancement. Women at this stage also tend to follow fashion and pay attention to their appearance [7]. In modest dress, the early adult stage emphasizes clothing as a status symbol [8]. In early adulthood, individuals are usually economically independent, which can encourage them to behave consumptively [9]. This consumptive behavior can lead to a consumerist lifestyle [10].
Consumerism is the ideology of consumptive living, so it can be said that people who behave consumptively no longer consider the use and function of an item, but rather the prestige attached to it [11]. This kind of consumption has become a culture that develops rapidly in everyday life, because hedonistic consumer practice is very prevalent in modern society today. Initially, the function of purchasing behavior was to fulfill primary and secondary needs; with the increasing number and variety of products, consumption has gradually changed into a consumption culture [12]. Clothing is one of the needs of every individual, and fashion is one of the drivers of consumptive behavior in this era. Fashion is a reflection of the unique social, cultural, and environmental cycles at certain times in certain environments, and it is also important in one's personal image [13]. Fashion is likewise an object that is full of images and lifestyles; individuals wear clothes not only for their use value [14]. She also revealed that women's choices of clothing tend to span very diverse types and models. The ease of access and the diversity that exist on Instagram expose its users to unfavorable influences. Fashion in this era is developing further, coupled with expanding media, so that fashion has its own trends, especially on Instagram. The presence of this platform has a strong influence on the development of fashion, ranging from trends in branded goods and luxury retail to clothing and beauty [15]. The function of clothing is no longer body protection; the development of this era has made clothing a lifestyle and a trend [16]. As consumers, the large number of product choices makes people willing to spend their money to keep up with current innovations and trends, coupled with strong aesthetic preferences [17]. Fromm [18] also states that humans no longer see the value in consuming goods. This phenomenon can continue to develop due to factors that can lead to consumptive behavior, one of which is lifestyle [19]. The desire to increase self-esteem and social status is usually what drives individuals to engage in consumptive behavior. Based on the background described, the researchers are interested in further examining whether there is an influence of Instagram social media on the consumptive behavior toward fashion products, especially in early adulthood.
Formulation of the Problem
Is there a relationship between the intensity of the use of Instagram social media and the consumptive behavior of fashion products in early adulthood women?
Research Purposes
To find out whether there is a relationship between the intensity of the use of Instagram social media and the consumptive behavior of fashion products in early adulthood women.
Research Hypothesis
H0: There is no relationship between the intensity of the use of Instagram social media on the consumptive behavior of fashion products in early adulthood women. H1: There is a relationship between the intensity of the use of Instagram social media on the consumptive behavior of fashion products in early adulthood women.
Consumptive Behavior
Lubis [12] explains that consumptive behavior is not grounded in rational thinking; rather, it is driven by desires that reach irrational levels. For individuals with ingrained consumptive behavior, consumption is no longer based on need but on desire (want). Fromm [18] explains that consumptive behavior is divided into several dimensions, as follows: (1) Fulfillment of desires: people never stop feeling the pull of satisfaction, which even tends to grow. Accordingly, when someone consumes something, the individual will always hope to obtain more satisfaction, even though the goods are not really needed, and the purchase will still be made. (2) Goods out of reach: when individuals consume in this way, their behavior becomes compulsive and unreasonable. When this happens, the individual feels "incomplete" and seeks supreme satisfaction by acquiring new products, no longer attending to his or her own needs or to the goods themselves. (3) Non-productive goods: when consumption of goods becomes excessive, its purpose becomes unclear and the goods become unproductive for the individual. (4) Status: individuals can be said to behave consumptively if they own excessive amounts of goods purely out of status considerations. Such consumption is no longer a meaningful, humane, and useful experience, because it is done only to satisfy the desire for status. The characteristics of consumptive behavior according to Sumartono [12] are: (a) buying goods because of a special offer; (b) buying goods because of their attractive appearance; (c) buying for the sake of personal appearance and prestige; (d) buying products based on price considerations (not on the basis of benefits or uses); (e) buying goods that are deemed able to maintain social status; (f) being influenced by advertising or models that promote goods; (g) the emergence of high self-confidence when buying expensive items; (h) buying two or more products of one kind.
Fransisca & Suyasa [20] explain the impact of behavior consumptive as follows: (a) Causing waste, this happens because buying behavior is only to fulfill momentary pleasure not meet real needs. Buying stuff is an excuse to follow fads and desires. Funds that should be used to buy items that are needed but used for goods that are useless and can lead to cost inefficiencies; (b) Causing anxiety, consumptive behavior can cause individuals to feel anxious because he felt the need to buy the item he wanted even though useless. Purchases without financial support can result worry. Insecurity caused by consumer behaviour is a situation of excessive purchases of goods by individuals. Impact consumptive behavior is also stated by Irmasari (2010) that Behavior consumers can cause social jealousy, reduce saving opportunities, and often not paying attention to future needs.
Instagram Intensity
Intensity in the Kamus Besar Bahasa Indonesia (KBBI) [21] is expressed as "a measure of intensity or state of rank". Horrigan [22] explains that there are two basic aspects of the intensity of a person's internet usage that need attention, namely how often they use the internet and how long they use it on each visit. Intensity can be viewed as a form of attention and interest defined by quality and quantity (Santrock, 2006). According to Tubb & Moss [23], intensity can be viewed from the duration an individual spends on an activity and the frequency with which it is performed. According to Casdari [24], three factors influence the intensity of social media use: (1) internal need factors, which are related to human psychological needs, one of which is closeness of relationships with other people or strangers (relatedness); (2) social motive factors, which are influenced by the environment or other people, one of which is the integration of individuals with friends or groups; (3) emotional factors, as emotions can change the intensity of social media use: if social media makes individuals feel happy, they will repeat the activity. Individuals who frequently visit Instagram are often driven by social motivation, such as hoping to be recognized and appreciated by their environment.
Research Participants
This study took a sample of 381 people. Research subjects had to fulfill several criteria. The first criterion is being female and aged 20 to 30 years. The second criterion is being an active user of Instagram social media. The third criterion is having an income, either pocket money or a salary. The sample in this study was drawn using non-probability sampling with a purposive sampling method.
Types of Research
This research is non-experimental quantitative research with a correlational design, describing the relationship between the intensity of using Instagram social media and the consumptive behavior toward fashion products in early adulthood. Based on this formulation, there are two variables in this study, and the research aims to describe the relationship between them.
Measuring instrument
The researchers administered two scales. The Instagram use intensity scale, taken and compiled from the Sukmaraga thesis, is based on the aspects of frequency and duration and consists of 4 items answered on a 5-point multiple-choice range. The consumptive behavior scale, taken and compiled from the Amalia thesis, is based on Sumartono's [12] aspects of consumptive behavior and was adapted by the researchers to fit the topic being measured. This instrument has 31 items, consisting of 15 favorable items and 16 unfavorable items.
Processing and Data Analysis Techniques
The data obtained from the questionnaire were analyzed quantitatively: they were input and processed using the Statistical Package for Social Science (SPSS) program, version 20.0.0. The reliability of the measuring instruments was tested with the Cronbach's alpha coefficient. The tests and analyses conducted were a reliability test of the instruments measuring Instagram use intensity and consumptive behavior, a descriptive test of the research subjects based on control data, a descriptive test of the research data, and a data normality test using the Kolmogorov-Smirnov test. The main analysis was a correlation test to determine the relationship between the research variables. The nonparametric Spearman's rho test was used because the normality test indicated that the data were not normally distributed. As additional analyses, correlation tests were run between one variable and the dimensions of the other variable, and difference tests on the variables and the subjects' control data were carried out using the Kruskal-Wallis test. Based on these results, the researchers then discussed the findings, wrote the discussion, and drew research conclusions.
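A sketch of this analysis pipeline is shown below using scipy in place of SPSS; the file names, the z-scoring before the normality test, and the example group split are our own illustrative choices, not details reported by the authors.

```python
# Sketch of the reported pipeline: Kolmogorov-Smirnov normality check,
# Spearman's rho correlation, and a Kruskal-Wallis difference test.
import numpy as np
from scipy import stats

intensity = np.loadtxt("intensity_scores.csv")       # hypothetical inputs,
consumptive = np.loadtxt("consumptive_scores.csv")   # one score per line

# Kolmogorov-Smirnov test against a standard normal after z-scoring
ks_stat, ks_p = stats.kstest(stats.zscore(intensity), "norm")

# Non-parametric correlation, used because normality was rejected
rho, p_value = stats.spearmanr(intensity, consumptive)

# Example additional difference test across control-data groups
groups = [consumptive[:100], consumptive[100:250], consumptive[250:]]
h_stat, kw_p = stats.kruskal(*groups)

print(f"rho = {rho:.3f}, p = {p_value:.3f}")
```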
RESULT AND DISCUSSION
The correlation test found no relationship between the intensity of using Instagram social media and consumptive behavior, with a correlation value of r = 0.085 and a significance value of p = 0.096. Because this weak positive correlation is not statistically significant, the intensity of Instagram use is not associated with the level of consumptive behavior in either direction. These results support research from Caroline (2019), who also examined the influence of the intensity of Instagram use on consumptive behavior and likewise obtained a negative result. There are also many other factors that can influence consumptive behavior; according to Setiadi (2010), these include cultural factors (including groups, family, and social roles and status) as well as psychological factors, namely motivation, perception, experience, and attitudes and beliefs. The subjects who participated in the study were individuals aged 20 to 30 years, most of whom were college students. The researchers therefore judge that many of them do not have their own income and still rely on money from their parents, which may account for low consumptive behavior. Moningka & Adiputra [16] likewise argue that the buying process begins with recognizing needs, that is, the buyer recognizes a problem or need, so the situation and conditions being experienced can affect purchasing behavior, particularly when other needs are considered more important. The results of this study contradict research conducted by Rahma, who found a relationship between the intensity of Instagram use and consumptive behavior among students of SMA Muhammadiyah 1 Magelang City. There are several possible reasons for these differing results. First, the subjects differ: Rahma's research used high school students, while the present subjects are early adult individuals, so age may have affected the results. Second, there are differences in the measuring instruments and aspects used to measure the Instagram use intensity variable: Rahma's research used 4 aspects, namely attention, appreciation, duration, and frequency, whereas the present researchers used only 2 aspects, frequency and duration. Third, there are differences in the measuring instruments and aspects used to measure the consumptive behavior variable: Rahma's research used 3 aspects, namely impulsive buying, non-rational buying, and wasteful buying, whereas the present researchers used 8 aspects developed by Sumartono [12], namely buying goods because of the lure of gifts, buying goods to maintain one's appearance and prestige, buying things on the basis of price considerations (not on the basis of their benefits and uses), buying goods only to maintain a status symbol, wearing goods because of conformity to the model advertising the product, the belief that buying something at a high price will lead to high self-confidence, and trying more than two similar products (of different brands).
It can therefore be said that the items used in the two studies differ, and the scoring within each aspect may also differ across the measuring instruments. Fourth, Rahma's study included both men and women, whereas the present researchers examined only women, so the sampled populations may also have affected the results.
CONCLUSION
Based on the data analysis of the correlation test between the intensity of Instagram social media use and consumptive behavior in early adult Instagram users, the correlation result is r = 0.085 with a significance value of p = 0.096. These results indicate that there is no significant relationship between the Instagram use intensity variable and consumptive behavior: variation in the intensity of Instagram use is not accompanied by systematic variation in consumptive behavior. Thus, the results of this study reject H1 and accept H0. The shortcomings and limitations of this study concern the distribution of the research sample, especially the distribution of age, occupation, income, and other matters related to the research variables, which was unequal or uneven, so the results cannot be generalized. Data collection was done using a questionnaire method, which can sometimes keep subjects from giving accurate responses, owing to differences in understanding, thoughts, the situation being experienced, and honesty in filling out the questionnaire. There are also limitations in the selection of measuring instruments: the researchers only adapted the instruments from theses, and some items proved unreliable in use, so the items need to be revised or retested before they can be used for all research samples. The researchers' suggestion for future research is to expand the scope, for example by adding other factors and mediators that can influence early adult consumptive behavior, such as culture, perceptions, and so on, and by refining the measuring instrument, adding aspects of other variables in order to obtain accurate results. The practical suggestion from this research is that readers may use it as a source of knowledge regarding the intensity of Instagram social media use and consumptive behavior in early adult women. Women who have a tendency toward consumptive behavior are encouraged to control themselves and to change, reduce, or eliminate these habits.
"year": 2021,
"sha1": "8a526f177625e5d323d4b326814c1e7565a20156",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/assehr.k.210805.055",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0ef3997f79fdb5bec3aa7aff519b41d7748808f4",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Sequential modification of bacterial chemoreceptors is key for achieving both accurate adaptation and high gain
Many regulatory and signaling proteins have multiple modification sites. In bacterial chemotaxis, each chemoreceptor has multiple methylation sites that are responsible for adaptation. However, whether the ordering of the multisite methylation process affects adaptation remains unclear. Furthermore, the benefit of having multiple modification sites is also unclear. Here, we show that sequentially ordered methylation/demethylation is critical for perfect adaptation; adaptation accuracy decreases as randomness in the multisite methylation process increases. A tradeoff between adaptation accuracy and response gain is discovered. We find that this accuracy-gain tradeoff is lifted significantly by having more methylation sites, but only when the multisite modification process is sequential. Our study suggests that having multiple modification sites and a sequential modification process constitute a general strategy to achieve both accurate adaptation and high response gain simultaneously. Our theory agrees with existing data and predictions are made to help identify the molecular mechanism underlying ordered covalent modifications. Bacterial chemoreceptors have multiple methylation sites, but whether the order of methylation matters is unclear. Here, the authors show that sequentially ordered methylation is critical for perfect adaptation and for attenuating the trade-off between accurate adaptation and high response gain.
Most post-translational regulatory processes involve reversible covalent modifications (phosphorylation/dephosphorylation, methylation/demethylation, etc.) of key proteins catalyzed by enzymes (kinase/phosphatase, methyltransferase/methylesterase, etc.). Instead of having only a single site of modification, many regulatory proteins such as histones, p53, RNA polymerase II, tubulin, etc. have multiple modification sites 1 . The multiple modification sites allow a single regulatory protein to have complex functions depending on combinations of different modification processes 2 . For example, the histone proteins have multiple covalent modification sites of different types (methylation, acetylation, phosphorylation, etc.) and the different combinations of the multiple modification sites are thought to code for different gene expression patterns in different cells 3 . However, how this combinatorial molecular code works, i.e., how it is encoded and decoded, remains poorly understood 4 .
One well-studied multisite regulatory protein is the cyclin-dependent kinase inhibitor Sic1 in Saccharomyces cerevisiae (yeast). Sic1 has more than six phosphorylation sites whose main function is regulating the timing of the G1/S transition in the yeast cell cycle 5,6 . Huang and Ferrell 7 first suggested that the response sensitivity can be enhanced by having multiple modification sites. However, Gunawardena 8 pointed out that other effects, such as substantial disparities in enzyme efficiency among different sites, are also needed to make a sharp switch. Later work by Salazar and Hofer 9 showed that a random phosphorylation process among the different sites gives rise to a shallow but rapid response, while sequential processing gives rise to a steeper but slower response. Though much progress has been made, the dynamics and functions of multisite modification in Sic1 remain not fully understood.
In this paper, we focus on a relatively simple signaling system, bacterial chemotaxis 10 , where multisite modification has an important role in adaptation 11 . Adaptation is an important general biological behavior that allows a living system to adjust its internal state in response to changes in its environment so that it can return to a set activity level after a fast response to a persistent change in the external stimulus 12 . In bacterial chemotaxis, a chemoreceptor has multiple methylation sites. The kinase activity of a chemoreceptor is determined by the chemoeffector ligand concentration (external stimulus) as well as the receptor methylation level (internal state): a higher attractant concentration leads to a lower kinase activity, and a higher methylation level leads to a higher kinase activity. Adaptation in bacterial chemotaxis is achieved by a feedback mechanism in which the receptor methylation level (internal state of the receptor) is controlled by a methyltransferase CheR and a methylesterase CheB that act on a time scale much longer than the response time to a change in external stimulus (ligand concentration). The catalytic efficiencies of CheR and CheB depend on the receptor activity, which forms the feedback mechanism for adaptation [13][14][15][16] . However, despite the general consensus on the importance of a negative feedback mechanism for accurate adaptation in bacterial chemotaxis, the detailed receptor methylation/demethylation kinetics among the multiple methylation sites remain unclear.
How do different methylation kinetics, random or sequential, affect adaptation accuracy and response gain? What are the benefits of having multiple modification sites? In this paper, we address these questions by systematically investigating the effects of different multisite modification processes, from purely random to strictly sequential, on system-level functions such as adaptation accuracy and signal amplification. Our theoretical findings allow us to infer the multisite modification dynamics from existing experimental data. More importantly, our study leads to specific suggestions of future experiments to determine the molecular mechanism controlling the multisite modification dynamics.
Results
Modeling multisite modification dynamics. In previous modeling studies of the receptor methylation (demethylation) reactions, the microscopic methylation state of the receptor (μ) has been ignored. Here, we consider the transitions between the $2^M = 16$ (M = 4 is the total number of modification sites) individual microscopic methylation states of a receptor explicitly. As shown in Fig. 1, all the states μ are grouped (column-wise) by their total methylation level $m = \sum_{j=1}^{4} \mu_j$, so a methylation (demethylation) reaction moves the current state to another state in the column to the right (left). However, among the multiple states in the next column, which one does it transition to? And at what rate? Here, we consider two cases, one special and one general, as shown in Fig. 1a, b, respectively.
For the special case of strictly sequential modification, which is implicitly assumed in previous models 17,18 , the methylation and demethylation processes follow the same sequence (in opposite directions) among the five states shown in Fig. 1a. Following previous work [13][14][15] , the negative feedback control is implemented by only allowing methylation (demethylation) of the inactive (active) receptors, respectively. If we define $k_m^+$ and $k_m^-$ as the average methylation and demethylation rates for all receptors with the same total methylation level m, this negative feedback mechanism leads to:

$$k_m^+ = k^+ \left(1 - \langle a \rangle_m\right), \qquad k_m^- = k^- \langle a \rangle_m, \qquad (1)$$

where $\langle a \rangle_m$ is the average activity of receptors with methylation level m and the kinetic rates, $k^+$ and $k^-$, are proportional to CheR and CheB concentrations, respectively.
In the general case, when site j − 1 is methylated ($\mu_{j-1} = 1$), the methylation rate for the next site in the sequence, j, in state μ is given by the same sequential methylation rate $k_m^+$ as in the strictly sequential case described above. However, when site j − 1 is not methylated ($\mu_{j-1} = 0$), methylation of site j can still occur via the random methylation process, albeit with a smaller rate $k_m^{+R} = \eta\, k_m^+$, where η is a parameter (0 ≤ η ≤ 1) characterizing the randomness in the methylation process. Combining these two possibilities, the site-specific methylation rate $\tilde{k}_j^+$ for site j can be written as:

$$\tilde{k}_j^+ = \left[\mu_{j-1} + \eta \left(1 - \mu_{j-1}\right)\right] k_m^+. \qquad (2)$$

Similarly, demethylation of site j depends on whether site j + 1 is demethylated, and the site-specific demethylation rate $\tilde{k}_j^-$ for site j can be written as:

$$\tilde{k}_j^- = \left[\left(1 - \mu_{j+1}\right) + \eta\, \mu_{j+1}\right] k_m^-, \qquad (3)$$

where $k_m^\pm$ in Eqs. (2) and (3) are the sequential methylation and demethylation rates given by Eq. (1). To describe modifications of the boundary states, i.e., the fully unmethylated state (m = 0) and the fully methylated state (m = M), we introduce a forward initiator with $\mu_0 = 1$ for methylation of the j = 1 site in Eq. (2) and a reverse initiator with $\mu_{M+1} = 0$ for demethylation of the j = M site in Eq. (3).
Despite the same feedback mechanism given by Eq. (1), the dynamics of the receptor methylation level depend on the degree of randomness (η) in the multisite modification process. In this paper, we investigate the consequences of different multisite modification schemes, from sequential to random, by studying the behaviors of the standard model of bacterial chemotaxis for different values of 0 ≤ η ≤ 1. In particular, we study how the adaptation error ξ and the response gain Γ are affected by η. Details of the full standard model framework for studying bacterial chemotaxis and the precise definition of ξ and Γ are given in the Methods section.
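To make the kinetics above concrete, here is a minimal Gillespie-style sketch of a single receptor evolving under the site-specific rates of Eqs. (2)-(3). It is not the authors' simulation code: the two-state activity function, the rate constants, and the substitution of the receptor's own instantaneous activity for the population average $\langle a \rangle_m$ in Eq. (1) are all simplifying assumptions.

```python
# Minimal Gillespie sketch of the multisite (de)methylation model, Eqs. (1)-(3).
# The activity function and rate constants are illustrative stand-ins for the
# full model in Methods; the receptor's own activity replaces <a>_m in Eq. (1).
import numpy as np

M, ALPHA = 4, -2.0           # number of sites; free energy per methyl group (<0)
KP, KM = 0.1, 0.1            # k+ (CheR-limited) and k- (CheB-limited) rates

def activity(mu, lig_energy):
    """Two-state activity from a simple free-energy model (assumed form)."""
    f = lig_energy + ALPHA * mu.sum()
    return 1.0 / (1.0 + np.exp(f))

def site_rates(mu, a, eta):
    """Site-specific rates of Eqs. (2)-(3), with boundary initiators
    mu_0 = 1 (forward) and mu_{M+1} = 0 (reverse)."""
    left = np.concatenate(([1], mu[:-1]))    # mu_{j-1}
    right = np.concatenate((mu[1:], [0]))    # mu_{j+1}
    k_plus = KP * (1 - a) * (left + eta * (1 - left)) * (1 - mu)
    k_minus = KM * a * ((1 - right) + eta * right) * mu
    return k_plus, k_minus

def gillespie_step(mu, lig_energy, eta, rng):
    """Advance one reaction; returns the waiting time and current activity."""
    a = activity(mu, lig_energy)
    kp, km = site_rates(mu, a, eta)
    rates = np.concatenate((kp, km))
    total = rates.sum()
    dt = rng.exponential(1.0 / total)
    j = rng.choice(2 * M, p=rates / total)
    mu[j % M] ^= 1                           # flip the chosen site
    return dt, a

# Example: relax to the adapted state for a given ligand energy, eta = 0
rng = np.random.default_rng(1)
mu = np.zeros(M, dtype=int)
for _ in range(5000):
    gillespie_step(mu, lig_energy=2.0, eta=0.0, rng=rng)
```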
Sequential modification is essential for perfect adaptation. In general, the adapted activity level $\langle a \rangle_A([L])$ is a function of the ligand concentration [L]. Adaptation is deemed perfect if $\langle a \rangle_A$ is a constant independent of [L]. For a general case 0 < η < 1, we can determine the adapted activity by solving the full model numerically using the Monte-Carlo method (see Supplementary Methods for details). However, simple analytical equations for 〈m〉 can be found for the extreme cases η = 0 and η = 1, which provide insight on a key condition for perfect adaptation:

$$\frac{d\langle m \rangle}{dt} = k^+ \left(1 - \langle a \rangle\right) - k^- \langle a \rangle + \epsilon \quad (\eta = 0), \qquad (4)$$

$$\frac{d\langle m \rangle}{dt} = k^+ \left(M - \langle m \rangle\right)\left(1 - \langle a \rangle\right) - k^- \langle m \rangle \langle a \rangle \quad (\eta = 1), \qquad (5)$$

where 〈a〉 is the average receptor activity and the term ϵ comes from the boundary effects at m = 0 and m = M (see Supplementary Note 1 for details of the derivation). For the case of purely sequential methylation (η = 0), the right hand side of Eq. (4) has the remarkable property of only explicitly depending on 〈a〉 but not on 〈m〉 or [L]. As a result, the adapted activity $\langle a \rangle_A \approx k^+/(k^+ + k^-)$ is independent of [L], i.e., perfect adaptation 17,18 . The reason the methylation rate d〈m〉/dt is independent of m is that only one modification site is available for methylation or demethylation per receptor at any given time when modification reactions are sequential. Figure 2a illustrates the adaptation process in response to a series of step increases in ligand concentration [L]. The solid line represents the adapted activity obtained by setting the right hand side of Eq. (4) to zero. The dashed curves represent the activity as a function of 〈m〉 for different values of [L]. Upon a sudden increase of [L], say from [L]_1 to [L]_2, the system first responds by decreasing its activity, as represented by the downward arrow (blue) illustrated in Fig. 2a. This altered activity triggers the adaptation mechanism that slowly increases m, causing the system to follow the upward arrow (green) along the dashed line for [L]_2 until it reaches the adapted activity level that is roughly independent of [L]. The fundamental reason for perfect adaptation is that $\langle a \rangle_A$ is independent of 〈m〉, i.e., the solid line in Fig. 2a is flat for a large range of 〈m〉.
For the case of random methylation (η = 1), all the available modification sites are equally accessible. Therefore, the methylation and demethylation rates are proportional to (M − m) and m, respectively, as given in Eq. (5). As a result, the adapted activity has a simple linear dependence on the adapted methylation, $\langle a \rangle_A = (M - \langle m \rangle_A)/M$, as shown in Fig. 2b. An increase of ligand concentration from [L]_1 to [L]_2 triggers an immediate response (a drop in activity) followed by a slow adaptation process that leads the system to a different adapted activity level. The inaccurate adaptation for η = 1 is caused by the explicit dependence of $\langle a \rangle_A$ on $\langle m \rangle_A$, i.e., the solid line in Fig. 2b is tilted. It is easy to see that the dependence of $\langle a \rangle_A$ on $\langle m \rangle_A$ occurs for all η ≠ 0.
Results from direct Monte-Carlo (MC) simulations of 〈a〉 subject to a series of step increases in concentration [L] (a 10-fold increase per step), shown in Fig. 2c for the sequential and Fig. 2d for the random methylation scheme, support our analysis shown in Fig. 2a, b, respectively.
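The contrast between Eqs. (4) and (5) can also be reproduced with a few lines of mean-field integration, as sketched below. The logistic activity function and all parameter values are our illustrative choices, and the boundary term ϵ in Eq. (4) is ignored. Under these assumptions the sequential case settles to $\langle a \rangle \approx k^+/(k^+ + k^-)$ for every ligand level, while the random case does not.

```python
# Mean-field sketch contrasting adapted activity for sequential (Eq. (4),
# boundary term ignored) and random (Eq. (5)) kinetics. The activity
# function is an assumed logistic stand-in for the full model.
import numpy as np

KP, KM, ALPHA, M = 0.1, 0.1, -2.0, 4

def a_mean(m, lig_energy):
    return 1.0 / (1.0 + np.exp(lig_energy + ALPHA * m))

def adapted_activity(lig_energy, sequential, m0=2.0, dt=0.05, steps=40000):
    m = m0
    for _ in range(steps):
        a = a_mean(m, lig_energy)
        if sequential:                        # eta = 0
            dm = KP * (1 - a) - KM * a
        else:                                 # eta = 1, all sites accessible
            dm = KP * (M - m) * (1 - a) - KM * m * a
        m = np.clip(m + dm * dt, 0.0, M)
    return a_mean(m, lig_energy)

for le in (0.0, 2.0, 4.0):                    # increasing ligand energy ~ log[L]
    print(le, adapted_activity(le, True), adapted_activity(le, False))
# The sequential case prints ~0.5 for every le; the random case drifts with le.
```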
For sequential modification (η = 0), the adaptation error ξ (see Methods section for its definition) is proportional to the probability of receptors being in the extreme (boundary) methylation states m = M and m = 0, and we have:

$$\xi \approx c_1 e^{-b M |\alpha|} + \xi_0, \qquad (6)$$

where $c_1$ and b are constants, $\xi_0$ is the error from the m = 0 state, and α (<0) is the free energy change for adding a methyl group to the receptor (see Eq. (10) in Methods section for the definition of α). Equation (6) shows that ξ decreases exponentially with |α| before saturating to $\xi_0$. For random modification (η = 1), the adaptation error has contributions from the whole range of methylation levels 0 ≤ m ≤ M, and we have:

$$\xi \propto \frac{1}{M |\alpha|}, \qquad (7)$$

which only decreases with |α| algebraically (see Supplementary Note 1 for details of the derivation of Eqs. (6) and (7)). The different dependences on |α| given in Eqs. (6) and (7) are verified by direct simulations (Supplementary Fig. 4a), which clearly show that sequential modification reduces the adaptation error much more efficiently than random modification.
The tradeoff between response gain and adaptation accuracy. Besides the adaptation error ξ or equivalently the adaptation accuracy ξ −1 , another important property of the system is its response gain Γ, which measures the sensitivity of the system in response to a change in external signal (see Methods section for the definition of Γ).
As shown in Eqs. (6) and (7), adaptation accuracy can be increased by increasing |α|, but what happens to the gain Γ? Interestingly, increasing |α| leads to a reduced gain independent of whether the modification dynamics is sequential or random (Supplementary Fig. 4b). The reason is that, for a larger value of |α|, individual receptors in the receptor cluster in the adapted state will have activities further away from the adapted mean value 〈a〉 ~ 1/2 (either closer to 0 or closer to 1), where the sensitivity (gain) is lower (see Supplementary Methods for details). It is worth noting that this dependence of Γ on |α| is due to the discrete methylation level of individual receptors, which is only captured by the Ising model [19][20][21] but not in the simplified Monod-Wyman-Changeux (MWC) model [22][23][24] .
The tradeoff or anti-correlation between response gain Γ and adaptation accuracy ξ −1 is a general property of the signaling pathway. Besides the extreme cases (η = 0 and η = 1) considered so far, this tradeoff between Γ and ξ −1 exists for all intermediate cases of methylation dynamics with 0 < η < 1. As shown in Fig. 3a, the gain is almost unaffected when we change the value of η while keeping the other parameters constant, but the corresponding adaptation accuracy ξ −1 decreases with η. On the other hand, when we tune other parameters to maintain a high accuracy (e.g., by increasing |α|), the corresponding gain goes down with η as shown in Fig. 3b. Therefore, for a more random methylation scheme (a larger value of η), the tradeoff between ξ −1 and Γ means that one is enhanced at the expense of the other.
Accurate adaptation and high response gain represent two of the most desirable but opposing properties of biological signaling systems, i.e., to resist changes in the environment by adaptation and to respond to weak signals. This accuracy-gain tradeoff is related to the fluctuation-dissipation relationship established in equilibrium systems 25 .
Fig. 2 (caption, continued): Upon a sudden increase of ligand concentration, e.g., from [L]_1 to [L]_2, the system responds quickly by decreasing its activity (blue arrow) from the old adapted state (solid red circle) to the maximum response state (hollow red circle) without changing 〈m〉. This initial response is followed by the slow adaptation dynamics (green arrow) along the dashed line until the new adapted state is reached. Direct Monte-Carlo simulation results of the average activity 〈a〉 in response to a series of step increases in methyl aspartate concentration over 7 orders of magnitude are shown for c sequential (η = 0) and d random (η = 1) modification processes. The step changes in stimulus are shown in Supplementary Fig. 8. The sequential modification process leads to a much higher adaptation accuracy than the random modification process.
Fig. 3: The tradeoff between the adaptation accuracy and response gain. a As η increases, the adaptation accuracy ξ⁻¹ (red squares) decreases while the signaling gain Γ (green circles) remains roughly constant. b When parameters are tuned to keep the accuracy ξ⁻¹ roughly constant for different values of η, the corresponding gain Γ decreases with η. The range of stimulus is set by [L]_min = 1 μM and [L]_max = 100 mM.
Next, we show how this tradeoff can be
attenuated by having multiple modification sites and sequential modification.
Sequential modification attenuates the accuracy-gain tradeoff.
Why are there multiple modification sites in a regulatory protein or a receptor? How is the performance of the system enhanced by having multiple modification sites? Here, we investigate how having multiple modification sites affects the gain Γ and accuracy ξ⁻¹ for different modification dynamics (sequential versus random). For the sequential modification dynamics, the adaptation error comes from the receptor populations with the extreme (boundary) methylation levels m = 0 or m = M. As the probability $P_M$ of reaching the boundary state m = M decreases exponentially with M, the adaptation error in the sequential modification model (η = 0) should decrease strongly (exponentially) with M, as given in Eq. (6). For the random modification dynamics, the adaptation error comes from all methylation levels, and the reduction of adaptation error with increasing M is much weaker (~1/M), as given in Eq. (7).
We studied the dependence of the performance of the system on M systematically by computing Γ and ξ for a random set of parameters for M = 1, 2, 3, 4 in our models with η = 0 and η = 1. The results, as shown in Fig. 4, clearly demonstrate the general accuracy-gain tradeoff, i.e., the inverse dependence of Γ and ξ⁻¹ in all cases studied. However, there are significant differences between the sequential and random modification cases. For sequential modification (η = 0), the tradeoff curve is lifted significantly as M is increased, as shown in Fig. 4a. In fact, the threshold lines (solid lines in Fig. 4), which are just fits to the highest performing points for each value of M, follow an approximate form characterized by a single overall performance measure $C_0(M)$ for the system with sequential modification (η = 0). As shown in the inset of Fig. 4a, $C_0(M)$ increases significantly (linearly) with M. In contrast, as shown in Fig. 4b, the threshold lines in the random modification case follow a much more gradual trend, characterized by an overall performance $C_1(M)$ for the random modification system (η = 1) that has only a weak dependence on M (see inset in Fig. 4b).
The significantly different dependence of the accuracy-gain tradeoff relationship on M for η = 0 and η = 1 clearly shows that having multiple modification sites can ease the accuracy-gain tradeoff in general but the effect is significant only when the modification dynamics are sequential.
Comparisons with existing experiments. In this section, we discuss specific model results that can be directly compared with existing experiments. The E. coli chemoreceptor Tar has four methylation sites at residues 295, 302, 309, and 491, which are labeled by numbers 1-4, respectively. Protein methylation in eukaryotic cells is usually associated with lysine or arginine residues. However, glutamate is the most common residue for methylation in E. coli 26 , and bacterial chemotaxis receptors in general are methylated at glutamate residues, or at glutamine residues that were posttranslationally deamidated to glutamates by CheB. Sites 1-3 are seven residues apart from each other, along the same α helix, whereas site 4 is located on another helix 27 , as illustrated in Fig. 5.
Experimental results 28 indicate that methylation of sites 1, 2, and 3 depends on each other in reverse order, i.e., site 3 is methylated first, followed by site 2 and then site 1, and that residues 316 and 498 affect the methylation of sites 3 and 4, respectively. Structural models 29 of the receptor modification are consistent with the methylation rate depending on a residue seven residues away in the C-terminal direction. The initiator residues and the notation used for the receptor methylation states are illustrated in Fig. 5.
Fig. 5: Illustration and notation for the Tar receptor. The methylation state of site i (= 1, 2, 3, 4), green circles, is described by a binary variable $\tilde{\mu}_i$ (1: methylated; 0: unmethylated). The sequential methylation and demethylation processes among sites 1-2-3 are shown by the red and orange arrows. The two initiator sites (316 and 288), blue circles, are described by two binary numbers $\tilde{h}_3^+$ and $\tilde{h}_1^-$. The receptor methylation state is described by six binary numbers, $\tilde{\mu}_4(\tilde{h}_3^+)\tilde{\mu}_3\tilde{\mu}_2\tilde{\mu}_1(\tilde{h}_1^-)$. In our notation, a methylation site is modifiable when it is labeled by x, and a specific value (0 or 1) is assigned when it is fixed by mutation. The two initiator residues are given by $\tilde{h}_3^+$ and $\tilde{h}_1^-$: when $\tilde{h}_3^+ = 1$, methylation at site 3 ($\tilde{\mu}_3$) becomes enhanced; when $\tilde{h}_1^- = 0$, demethylation of site 1 ($\tilde{\mu}_1$) becomes enhanced. Otherwise, when $\tilde{h}_3^+ = 0$ or $\tilde{h}_1^- = 1$, the initial methylation of site 3 or the initial demethylation of site 1 is controlled by the slow random methylation or demethylation processes.
We first study the adaptation accuracy and response gain from our model and compare them with available experiments. For wild-type (wt) cells with both CheR and CheB, though there is no direct measurements of the methylation dynamics, there have been detailed experimental studies of the in vivo kinase activity dynamics in response to a wide range of stimuli [32][33][34] , which can be compared with our model to determine the response gain Γ and adaptation accuracy ξ −1 .
In ref. 32 the relative sensitivity, S_r , is defined as the fractional change in the FRET signal divided by the fractional change in stimulus, S_r = (ΔFRET/FRET)/(Δ[L]/[L]) ∼ g, where the FRET signal is proportional to the kinase activity and g is the integrand of Eq. (17). From the experimental data on S_r (first peak in Fig. 3 of ref. 32 ) and our model, we can estimate the value of the gain Γ for Tar: 4.5 ≲ Γ ≲ 5.
From the measured adapted activity for different background ligand concentrations as plotted in Fig. 1B in the paper by Neumann et al. 33 , we obtained the value of adaptation accuracy for Tar in response to methyl aspartate with a maximum concentration [L]_max = 5-10 mM to be roughly in the range: 2.3 ≤ ξ −1 ≤ 3.5.
These estimated values of gain and adaptation accuracy, shown as the black diamond in Fig. 4a, suggest that methylation dynamics should be mostly sequential. Quantitatively, from our model and by using the values of ξ and Γ, we can determine the range of the effective randomness parameter for Tar: 0.05 ≤ η ≤ 0.13 (see Supplementary Methods and Supplementary Fig. 2 for more details on comparison between simulation results and experiments).
Next, we study the methylation profiles in CheB− mutants by using our model and compare them with existing experiments. Different modification dynamics, random or sequential, lead to qualitatively distinct mean methylation profiles at a given time. Denote by p_j(t) the probability of site j being methylated at time t. For purely random modification dynamics, all p_j(t) should be the same. However, for methylation dynamics that are dominated by sequential modification, the methylation levels of the different sites follow certain distinctive patterns, as illustrated by the sketch below.
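To make the two regimes concrete, the following minimal Python sketch (not the paper's simulation code, which is linked under Code availability below; the rates, gating scheme, and parameter values are illustrative assumptions) evolves a three-site chain in which each site methylates through a random channel of weight η and a sequential channel of weight 1 − η gated by its upstream neighbor. For small η the returned curves satisfy p_3(t) > p_2(t) > p_1(t); for η = 1 they coincide.

```python
import numpy as np

def simulate(eta, n=5000, t_max=50.0, dt=0.01, k=1.0, seed=0):
    """Toy methylation-only (CheB-, k_minus = 0) dynamics for sites 1-3.

    Each unmethylated site gains a methyl group with rate k*eta (random
    channel) plus k*(1 - eta) when its upstream neighbor in the sequence
    is methylated (sequential channel); site 3 is always enabled,
    standing in for h_3^+ = 1. Returns times and p_j(t), the fraction of
    receptors with site j methylated (columns: sites 1, 2, 3).
    """
    rng = np.random.default_rng(seed)
    mu = np.zeros((n, 3), dtype=bool)
    times = np.arange(0.0, t_max, dt)
    p = np.zeros((len(times), 3))
    for ti in range(len(times)):
        p[ti] = mu.mean(axis=0)
        gate = np.ones((n, 3))        # sequential gate per site
        gate[:, 1] = mu[:, 2]         # site 2 requires site 3
        gate[:, 0] = mu[:, 1]         # site 1 requires site 2
        rate = k * (eta + (1.0 - eta) * gate)
        mu |= (~mu) & (rng.random((n, 3)) < rate * dt)
    return times, p

times, p = simulate(eta=0.1)          # sequential-dominant: p3 > p2 > p1
```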
The methylation dynamics among different sites in the demethylase CheB− mutants were studied by the Koshland lab more than 20 years ago 28,30 , by mutating the residues μ_4, h_3^+, μ_3, μ_2 in Fig. 5, and the initiator h_4^+. After being methylated with tritiated SAM, the receptors were cleaved and the extent of methylation of each site was determined by high-performance liquid chromatography. The methylation rates were calculated in arbitrary units, reproduced here in Table 1. As the absolute values are not available, we can only analyze the relative methylation ratios of the different sites and mutants.
It is useful to compare simulations with k_− = 0 with the CheB− mutants to isolate the effects of sequential methylation. Specifically, these mutants are, besides the wild-type receptor (EEQE), the mutant receptors EEDE and EEEE. In these strains, site 3 (E309) can be methylated when occupied by glutamate (E) residues, but behaves as permanently demethylated or methylated when occupied, respectively, by aspartate (D) or glutamine (Q). In addition, substitution of h_3^+ by asparagine (N) in mutant EEE(N)E is also informative, as it partially impairs the methylation of site 3, which would correspond to a partial methylation state in our model.
Table 1: Normalized methylation rates for different CheB− mutants, reproduced from refs. 28,30 . The four letters (D for aspartate, E for glutamate, N for asparagine, and Q for glutamine) in the first column are the residues, respectively, in the methylation sites 1, 2, 3, and 4. The four middle columns are the methylation rates of each site in arbitrary units. Simulations mimicking each mutant were performed with their configurations shown in the last column, with x representing modifiable sites. We fixed μ_4 = 0 due to the low methylation rate of site 4.
We studied the methylation dynamics of these CheB− mutants by using a sequential-dominant model with a small value of η (= 0.1). As shown in Fig. 6a, the sequential methylation of a completely demethylated receptor (0(1)xxx(0)) begins by methylating site 3, then proceeds to site 2, and then to site 1. This order, i.e., p_3(t) > p_2(t) > p_1(t), persists throughout the methylation process, consistent with experimental results. We also studied the methylation dynamics when the starting site μ_3 is fixed to be μ_3 = 0 or μ_3 = 1, to mimic the EEDE and EEQE receptors, respectively; see the clamped-site sketch below. As shown in Fig. 6b, when we fix μ_3 = 1, the order of methylation for site 2 and site 1 still persists, i.e., p_2(t) > p_1(t), which is again consistent with experiments.
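The clamped-site runs referred to above can be mimicked with a small variant of the sketch from the previous section; the clamping dictionary and all rates remain illustrative assumptions, not the published simulation parameters.

```python
import numpy as np

def simulate_mutant(eta, fixed, n=5000, t_max=50.0, dt=0.01, k=1.0, seed=0):
    """Same toy dynamics as simulate() above, but with 'fixed' clamping
    sites (column index -> 0/1). {2: 0} mimics EEDE (site 3 permanently
    unmethylated); {2: 1} mimics EEQE (site 3 permanently methylated)."""
    rng = np.random.default_rng(seed)
    mu = np.zeros((n, 3), dtype=bool)
    for j, v in fixed.items():
        mu[:, j] = bool(v)
    times = np.arange(0.0, t_max, dt)
    p = np.zeros((len(times), 3))
    for ti in range(len(times)):
        p[ti] = mu.mean(axis=0)
        gate = np.ones((n, 3))
        gate[:, 1] = mu[:, 2]
        gate[:, 0] = mu[:, 1]
        flip = (~mu) & (rng.random((n, 3)) < k * (eta + (1 - eta) * gate) * dt)
        flip[:, list(fixed)] = False   # clamped sites never change
        mu |= flip
    return times, p

_, p_eede = simulate_mutant(eta=0.1, fixed={2: 0})  # slow start; p1 can pass p2
_, p_eeqe = simulate_mutant(eta=0.1, fixed={2: 1})  # p2 > p1 persists
```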
Finally, the most informative and also most stringent test of our theory comes from the mutant receptor EEDE. Besides a much slower methylation rate, an inverted behavior p_2 < p_1 was observed experimentally in EEDE (Table 1). Remarkably, these behaviors, in particular the inversion, also appear in our model, as shown in Fig. 6c. The reason for this inversion is that sequential modification is broken when site 3, the starting site in the sequence, cannot be methylated. As a result, the downstream sites (site 2 and site 1) have to be methylated (at least initially) by the random methylation process, which has no a priori preference between site 1 and site 2. Once site 2 becomes methylated, it will enhance the methylation rate at site 1 due to the sequential methylation process, but not the other way around. Thus the sequentiality between site 2 and site 1 leads to the observed inversion. Consistent with this argument and with the role played by the initiator, the partial methylation of h_3^+ in EEE(N)E reduces the ratio p_3/p_2 when compared to EEEE. Quantitatively, the CheB− data lead to a lower-bound estimate for the random methylation parameter, η ≥ 0.047 (see Supplementary Discussion for more details), which is consistent with the estimated range of η for Tar from the wt data above.
Overall, our model results, together with existing experimental data, suggest that the methylation process for sites 3, 2, and 1 is mostly sequential and is affected by the initiator h_3^+, but that there is a small but finite random component.
Testable predictions for future experiments. Our model can be used to predict the methylation level profile for different mutants, which can be tested by future experiments. As a reference, the methylation levels of the wild-type cell (0(1)xxx(0) receptors in the presence of CheR and CheB) decrease monotonically from site 3 to site 1, as shown in Fig. 7a. We first study the mutant with h_3^+ = 0, which inhibits sequential methylation of site 3. As shown in Fig. 7b, h_3^+ = 0 brings down the methylation of site 3, leading to site 2 being more methylated than sites 1 and 3. To explore the inhibition of sequential demethylation of site 1, we next study the mutant with h_1^- = 1. As shown in Fig. 7c, site 2 is less methylated than sites 1 and 3 in the steady state. Finally, we study the mutant with h_1^- = 1 and h_3^+ = 0, in which both methylation of site 3 and demethylation of site 1 are inhibited. As shown in Fig. 7d, the steady-state methylation profile monotonically increases from site 3 to site 1, which is exactly the inverse of the wt profile (Fig. 7a).
We can also predict the effects of mutating the key methylation sites (μ_1 and μ_3) on adaptation dynamics. We first studied the effects of mutating site 1 or site 3 to be permanently unmethylated by fixing either μ_3 = 0 or μ_1 = 0 in our model. We found that adaptation still works in the μ_3 = 0 mutant [0(1)xx0(0)] but is severely impaired in the μ_1 = 0 mutant [0(1)0xx(0)], as shown in Supplementary Fig. 7a. We next studied the effects of mutating site 1 or site 3 to be permanently methylated by fixing either μ_3 = 1 or μ_1 = 1 in our model. We found that the response to a decrease in attractant concentration remains intact in the μ_3 = 1 mutant [1(1)1xx(0)], but is severely impaired in the μ_1 = 1 mutant [1(1)xx1(0)], as shown in Supplementary Fig. 7b. These predictions can be tested by measuring the kinase activity dynamics in vivo in these mutants by using FRET 32 .
Discussion
Multisite regulatory proteins are ubiquitous in biology, yet their functions are not well understood. Here, we studied the effects of ordering among the multiple modification sites and the possible benefits of having multiple sites in the context of bacterial chemotaxis. We discuss the two main findings below.
First, we found that sequential modification is crucial for perfect adaptation. Previous studies 14,35 showed that perfect adaptation can be achieved by an integral control mechanism where the dynamics of the controller (receptor methylation level) only depend on receptor activity. Here, we showed that sequential modification is another important ingredient for the integral control mechanism, as it guarantees that the methylation/demethylation rates are independent of the receptor methylation level, Eq. (4). As a direct consequence of sequential modification, the adapted activity is independent of the receptor methylation level (or the stimulus strength), i.e., perfect adaptation.
We note that there may be other possible scenarios for the methylation/demethylation rates to be independent of the available modification sites. For bacterial chemoreceptors, the binding and unbinding of CheR to the receptor are faster than its catalytic rate, and the dissociation constant K_D is relatively small 36 . If the enzyme binds to all available active sites randomly with equal probability, the number of available sites effectively changes the substrate concentration. Given that the substrate concentration is much higher than the Michaelis-Menten constant K_M ≈ K_D , the methylation reaction rate, which is limited by the slow catalytic reaction, would be independent of the substrate concentration and thus independent of the number of available methylation (demethylation) sites. However, the binding rate (k_on) of the enzyme in this scenario would depend on the substrate concentration and the available modification sites, which seems to be inconsistent with the recent in vitro measurements of the k_on rates for CheR binding to Tar(EEEE) and Tar(QQQQ) receptors 36 (see Supplementary Note 1 for more details). Furthermore, the random methylation pattern predicted by this scenario is inconsistent with the observed sequentiality among the different methylation sites in in vivo experiments 28,30 .
Second, we found that there is a tradeoff between response gain and adaptation accuracy. We showed that this tradeoff can be improved significantly by having more modification sites, but only with the sequential modification process. Taken together, our study suggests a general two-pronged strategy to enhance chemotaxis performance: having multiple modification sites to extend the dynamic range of high gain, and a sequential modification process to maintain adaptation accuracy. Direct comparison with existing experiments confirms our theory and reveals that the methylation process for methylation sites 3, 2, and 1 of Tar is mostly sequential, with a small but finite random component 0.05 ≤ η ≤ 0.13. The confirmed importance of sequential receptor methylation raises the question of the underlying molecular mechanism responsible for maintaining specific ordering in multisite modification. Previous mutant studies showed that methylation of a given site is affected by a residue seven amino acids to the C terminus 28,30 , which is exactly how sites 1, 2, and 3 are arranged (Fig. 5). Also, methylation of site 3 is affected by a residue seven amino acids to the C terminus, even though that residue itself is not a methylation site 28 . Indirect evidence of sequential demethylation by CheB can also be found in refs. 37,38 (see Supplementary Discussion for details).
The existing experiments mentioned above suggest a chain reaction scheme for the sequential methylation process. However, it is not clear whether the preceding site in the sequence increases the binding affinity of CheR to the receptor, or the catalytic rate, or both. It is also not clear whether and how different receptors in the closely packed receptor cluster compete for the limited CheR molecules in the cluster. We believe that a detailed biochemical model that incorporates key steps such as binding/unbinding and catalytic reactions in the methylation/demethylation processes, together with quantitative in vitro measurements of the methylation/demethylation rates for wt and mutant receptors, is needed to address these questions. The same strategy should also be used to study the much less known demethylation process. In addition to searching for possible molecular mechanisms for ordered modification, another interesting question is what the thermodynamic costs are of implementing such ordered modification mechanisms for accurate control 24 . Finally, it is worth pointing out that even though the detailed molecular mechanism of the methylation and demethylation reactions remains open, our conclusions regarding the general properties of the system, such as response gain, adaptation accuracy, their tradeoff, and their dependence on the level of sequentiality (η) of the underlying multisite modification process, should hold true.
Our work serves as a successful case study of multisite protein modification by using a modeling approach in combination with knowledge of the underlying biochemical pathway and quantitative data. This combined approach provides a powerful general framework that can be applied to other signaling systems to understand the mechanisms of multisite signaling proteins and their biological functions.
Methods
The standard model for bacterial chemotaxis. We briefly describe a previously developed general mathematical framework, the standard model, for studying bacterial chemotaxis signaling pathway dynamics (see ref. 17 for a recent review).
In the standard model for bacterial chemotaxis, each receptor has two key state variables: its kinase activity (a) and its methylation state (μ). For kinase activity, a receptor can be either active (a = 1) or inactive (a = 0). For the methylation state, as each receptor has M(≥1) modification (methylation) sites, there are a total of 2^M possible modification states, characterized by an M-dimensional binary vector μ = (μ_1, μ_2, … , μ_M), where the binary number μ_j = 0, 1 respectively represents the unmethylated and methylated state of site j (= 1, 2, … , M). The total modification level of a receptor is given by m ≡ Σ_{j=1}^{M} μ_j. The receptor kinase activity dynamics is fast relative to its methylation dynamics. Here, we use the standard two-state model to describe the receptor kinase activity dynamics, where the active and inactive states are separated by a free energy difference Δf. When the fast ligand-receptor binding/unbinding process is averaged out, Δf(m, [L]) depends on the receptor's total modification level m and the ligand concentration [L]. From previous studies on bacterial chemotaxis 18,35 , the free energy Δf can be written as Δf(m, [L]) = α(m − m_0) + ln[(1 + [L]/K_I)/(1 + [L]/K_A)], where K_I and K_A are the dissociation constants of the ligand binding to the inactive and active conformations of the receptor, α(<0) is the free energy change due to adding (or removing) one methylation group to the receptor, and m_0 determines the average modification level in the absence of any stimulus ([L] = 0). Another important phenomenon in bacterial chemotaxis is that bacterial chemoreceptors form polar clusters [39][40][41] . The receptors and their kinase activities are coupled with each other in the cluster. Following previous work 19,42 , we model the receptor cluster by using an Ising-type model with nearest-neighbor interaction of strength C.
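A minimal numerical rendering of this two-state description is given below; the free-energy expression is the standard form quoted above, while all parameter values are placeholders rather than the paper's fitted constants, and the cluster coupling C is omitted here. The short integral-feedback loop at the end illustrates the point made in the Discussion: when dm/dt depends on activity alone, the adapted activity is pinned at a0 for any background [L].

```python
import numpy as np

ALPHA, M0 = -2.0, 2.0      # methylation free energy per group (<0) and offset
K_I, K_A = 18.0, 3000.0    # inactive/active dissociation constants (K_I < K_A)

def delta_f(m, L):
    """Free-energy difference between the active and inactive states."""
    return ALPHA * (m - M0) + np.log((1 + L / K_I) / (1 + L / K_A))

def activity(m, L):
    """Quasi-equilibrium two-state activity, <a>_m = 1/(1 + exp(delta_f))."""
    return 1.0 / (1.0 + np.exp(delta_f(m, L)))

def adapted_activity(L, a0=1.0 / 3.0, kR=0.2, m=2.0, dt=0.01, steps=200000):
    """Integral feedback: dm/dt = kR*(a0 - a) depends on activity only,
    so the fixed point satisfies a = a0 regardless of L."""
    for _ in range(steps):
        m += kR * (a0 - activity(m, L)) * dt
    return activity(m, L)

print(adapted_activity(10.0), adapted_activity(1000.0))   # both ~ a0
```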
Dynamic Monte-Carlo simulations of the Ising-type model (see Supplementary Methods for details of the Monte-Carlo simulations) are used to obtain the distribution of receptors, P_{aμ}, in a given state (a, μ), which describes the statistical properties of the receptor cluster. From the full distribution function P_{aμ}, the distribution of the microscopic methylation state μ can be obtained by summing over the fast variable a, P_μ = Σ_{a=0}^{1} P_{aμ}, and the probability P(m) of the total modification level m is given by summing P_μ over all μ with Σ_j μ_j = m. From these distribution functions, average properties of the receptor cluster can be obtained; for example, the average methylation level is ⟨m⟩ = Σ_m m P(m). According to Eq. (10), the kinase activity of a receptor ⟨a⟩_m only depends on its total methylation level m, which can be expressed as ⟨a⟩_m = 1/(1 + e^{Δf(m,[L])}), and the average activity for all receptors is ⟨a⟩ = Σ_m ⟨a⟩_m P(m). These distribution functions and average receptor properties are used here to understand the response gain and adaptation accuracy in bacterial chemotaxis quantitatively. In particular, we focus on investigating how different modification schemes (random or sequential) affect the adaptation accuracy and response gain in this paper.
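For readers who want the flavor of such a simulation, here is a bare-bones Metropolis sampler for an n × n periodic grid of two-state receptors at a common, fixed methylation level m. It reuses delta_f() from the sketch above; the actual study uses dynamic Monte-Carlo with methylation kinetics included (Supplementary Methods), and the coupling value here is an arbitrary assumption.

```python
import numpy as np

def cluster_activity(m, L, C=0.5, n=20, sweeps=2000, burn=500, seed=1):
    """Metropolis estimate of <a> for the energy
    E = sum_i a_i * delta_f(m, L) - C * sum_<ij> s_i * s_j,  s = 2a - 1,
    with nearest-neighbor pairs <ij> on a periodic n x n grid.
    Assumes delta_f() is defined as in the previous sketch."""
    rng = np.random.default_rng(seed)
    a = rng.integers(0, 2, size=(n, n))
    f = float(delta_f(m, L))
    total = 0.0
    for sweep in range(sweeps):
        for _ in range(n * n):
            i, j = rng.integers(0, n, size=2)
            nb = (a[(i + 1) % n, j] + a[(i - 1) % n, j]
                  + a[i, (j + 1) % n] + a[i, (j - 1) % n])
            da = 1 - 2 * a[i, j]                 # proposed change, +1 or -1
            dE = da * f - 2.0 * C * da * (2 * nb - 4)
            if dE <= 0 or rng.random() < np.exp(-dE):
                a[i, j] += da
        if sweep >= burn:
            total += a.mean()
    return total / (sweeps - burn)
```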
Characterizing the performance of the chemotaxis signaling pathway. The performance of the chemotaxis signaling pathway can be characterized by two key system-level properties: the integrated response gain (amplification) Γ and adaptation accuracy ξ −1 , which we define in the following.
At a given background ligand concentration [L], the adapted methylation levels of all receptors in the system (receptor cluster) are represented by m_A([L]) (the vector m = (m_1, m_2, … , m_N) contains the methylation levels of all the receptors in the receptor cluster, m_i is the methylation level of receptor i (1 ≤ i ≤ N), and N is the number of receptors in the cluster), and the average adapted activity of the system is given by ⟨a⟩_A([L]) ≡ ⟨a⟩(m_A([L]), [L]). Upon a sudden change of ligand concentration from [L] to [L] + δ[L], the system first responds by a change of activity δ⟨a⟩, which can be written as δ⟨a⟩ = −g δ[L]/[L], where the negative sign is due to the fact that an increase of attractant concentration leads to a decrease of receptor activity in bacterial chemotaxis. To describe the system's ability to amplify the input stimulus over a broad range of background stimulus concentrations, we define the overall gain Γ as the integral of g over the stimulus concentration in log-scale, Γ = ∫ g d(ln[L]). To characterize the adaptation accuracy over the wide range of backgrounds, we define an overall adaptation error ξ by integrating ϵ over the stimulus concentration in log-scale (natural base is used here for convenience), ξ = ∫ ϵ d(ln[L]). The overall adaptation accuracy is defined as the inverse of the adaptation error, ξ −1 . Details of the Monte-Carlo simulations and of computing Γ and ξ are discussed in Supplementary Methods.
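Numerically, both quantities reduce to quadratures over ln[L]; a sketch follows, where the pointwise error ϵ is taken as the relative deviation of the adapted activity from a reference value a0 — an assumed form, since the paper's exact expression for ϵ is not reproduced in this excerpt.

```python
import numpy as np

def gain_and_accuracy(L_grid, a_adapted, g_vals, a0):
    """Overall gain Gamma = integral of g d(ln L), and adaptation
    accuracy 1/xi with xi = integral of eps d(ln L), where
    eps = |a_adapted - a0| / a0 is an assumed form of the error.
    L_grid must be positive and strictly increasing."""
    lnL = np.log(np.asarray(L_grid, dtype=float))
    g = np.asarray(g_vals, dtype=float)
    eps = np.abs(np.asarray(a_adapted, dtype=float) - a0) / a0
    w = np.diff(lnL)                          # trapezoid rule by hand
    gamma = float(np.sum(0.5 * (g[1:] + g[:-1]) * w))
    xi = float(np.sum(0.5 * (eps[1:] + eps[:-1]) * w))
    return gamma, 1.0 / xi
```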
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
All data used to support the findings of this work are available upon request.
Code availability
The code used to perform the simulations is available at https://github.com/bernardomello/chemotaxis. | 2023-02-08T15:48:24.508Z | 2020-06-08T00:00:00.000 | {
"year": 2020,
"sha1": "827dbed76a35a4bdcae43acdbc3cee1ebd53ca0b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-020-16644-4.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "827dbed76a35a4bdcae43acdbc3cee1ebd53ca0b",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": []
} |
208175562 | pes2o/s2orc | v3-fos-license | Fundamental groupoids for simplicial objects in Mal'tsev categories
We show that the category of internal groupoids in an exact Mal'tsev category is reflective, and in fact a Birkhoff subcategory of the category of simplicial objects. We then characterize the central extensions of the corresponding Galois structure, and show that regular epimorphisms admit a relative monotone-light factorization system in the sense of Chikhladze. We also draw some comparison with Kan complexes. By comparing the reflections of simplicial objects and reflexive graphs into groupoids, we exhibit a connection with weighted commutators (as defined by Gran, Janelidze and Ursini).
Introduction
Categorical Galois theory, as developed by G. Janelidze ([25, 29, 3, 27]), is a general framework that allows the study of central extensions or coverings of the objects of a category. A large collection of examples has been given, ranging from the Galois theory of commutative rings of Magid ([32,10]) and the theory of coverings of locally connected spaces to the central extensions of groups, Lie algebras, or more generally exact Mal'tsev categories [28].
The main ingredient of this theory is the notion of Galois structure, which is defined as an adjunction, with the right adjoint often taken to be fully faithful, and a class of morphisms in the codomain of the right adjoint, satisfying suitable conditions, in particular admissibility, which amounts to the preservation by the left adjoint of certain pullbacks. For example, the inclusion of any Birkhoff subcategory of an exact Mal'tsev category, together with the class of regular epimorphisms, always forms an admissible Galois structure ( [28]).
In [8], Brown and Janelidze used this theory to describe what they called second order coverings for simplicial sets, using the adjunction given by the nerve functor and the fundamental groupoid, and the class of Kan fibrations. In fact, they restricted their analysis to Kan complexes, as this condition implies the admissibility of these objects for the corresponding Galois structure. Later Chikhladze introduced relative factorization systems, and showed that the induced relative factorization system for Kan fibrations is locally stable, so that the Galois structure induces a relative monotone-light factorization ( [15]).
On the other hand regular Mal'tsev categories were characterized in [11] as the categories in which the Kan condition holds for every simplicial object, thus extending a theorem of Moore stating that the underlying simplicial set of a simplicial group is always a Kan complex. Moreover, regular epimorphisms in the category of simplicial objects then coincide with Kan fibrations. This suggests that the inclusion of groupoids into simplicial objects in any exact Mal'tsev category might induce an admissible Galois structure.
The main objective of this paper is to show that this is indeed the case, and more precisely that the category of groupoids in an exact Mal'tsev category is always a Birkhoff subcategory of the category of simplicial objects. The paper is organised as follows: we begin with some preliminaries, to fix notations and provide the background notions. We then construct the reflection of the category of simplicial objects into the subcategory of internal groupoids. Next, we give a characterization of the central extensions for the Galois structure. In the next section we compare our construction with the homotopy relations for the simplices in a Kan complex, which are used to define its homotopy groupoid. Then we prove that the Galois structure admits a relative monotone-light factorization system. We end the paper with a discussion of reflexive graphs, seen as truncated simplicial objects.
1. Preliminaries
1.1. Simplicial objects. Let ∆ denote the category of finite nonzero ordinals, with monotone functions as morphisms. For a given category C, the category Simp(C) of simplicial objects in C is the category of functors ∆^op → C. Equivalently, an object X of Simp(C) is a collection of objects (X_n)_{n∈N} together with face maps d_i : X_n → X_{n−1} for all n > 0 and 0 ≤ i ≤ n, and degeneracy maps s_i : X_n → X_{n+1} for n ≥ 0 and 0 ≤ i ≤ n, satisfying the standard simplicial identities, whenever they make sense (reproduced after this paragraph). When necessary, we will write d_i^X or s_i^X to distinguish the face or degeneracy maps of different simplicial objects. A morphism f : X → Y in Simp(C) is then a collection of morphisms f_n : X_n → Y_n that commute with face and degeneracy maps, in the sense that d_i^Y f_{n+1} = f_n d_i^X and s_i^Y f_n = f_{n+1} s_i^X for all i, n. If X is a simplicial object, we will denote by Dec(X) the simplicial object (X_{n+1})_{n≥0}, whose faces and degeneracies are the same as those of X, without the last faces d_{n+1} : X_{n+1} → X_n and last degeneracies s_n : X_n → X_{n+1} for all n ≥ 1. The simplicial identities imply that the maps d_{n+1} : X_{n+1} → X_n form a morphism of simplicial objects ε_X : Dec(X) → X. Since all these maps are split (and thus regular) epimorphisms, ε_X is a regular epimorphism in Simp(C), although it need not be a split epimorphism. Notice that Dec defines an endofunctor of Simp(C), and ε is a natural transformation from Dec to the identity endofunctor.
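For the reader's convenience, these are the standard simplicial identities (as found in any account of simplicial objects):

```latex
d_i d_j = d_{j-1} d_i \quad (i < j), \qquad
s_i s_j = s_{j+1} s_i \quad (i \le j), \qquad
d_i s_j =
\begin{cases}
  s_{j-1} d_i & \text{if } i < j,\\
  \mathrm{id} & \text{if } i = j \text{ or } i = j+1,\\
  s_j d_{i-1} & \text{if } i > j+1.
\end{cases}
```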
∆ is a skeleton of, and thus equivalent to, the category of non-empty finite totally ordered sets. In particular, since this category contains the poset P f,n.e. (N) of non-empty finite subsets of N (ordered by inclusion) as a subcategory, there is a canonical functor Φ : P f,n.e. (N) → ∆ that maps any set with n + 1 elements to {0, . . . , n} and any inclusion map to an injective morphism in ∆.
For a given simplicial object X, and for every n ≥ 2, one can consider the restriction of Φ to the poset of proper subsets of {0, 1, . . . , n}; taking the opposite functor and composing with X : ∆ op → C gives a diagram in C. The limit of this diagram is the n-th simplicial kernel of X, and denoted K n (X). In particular, we have maps µ i : K n (X) → X n−1 for i = 0, . . . , n, satisfying d i µ j = d j−1 µ i for all 0 ≤ i < j ≤ n, and the maps µ i are universal with this property. Thus the face maps d 0 , . . . , d n : X n → X n−1 induce a canonical map κ n : X n → K n (X). Following [18], we say that X is exact at X n−1 if κ n is a regular epimorphism, and exact if it is exact at X n for all n ≥ 1.
Moreover, for every n ≥ 2 and 0 ≤ k ≤ n, we can also restrict Φ to the poset of proper subsets of {0, . . . , n} that contain k, and then compose the opposite functor with X. The limit of this diagram is the object of (n, k)-horns Λ_k^n(X), and it is equipped with maps ν_i : Λ_k^n(X) → X_{n−1} for 0 ≤ i ≤ n and i ≠ k that satisfy the identities d_i ν_j = d_{j−1} ν_i for all 0 ≤ i < j ≤ n with i ≠ k ≠ j, and are universal with this property. There is then also a canonical arrow λ_k^n : X_n → Λ_k^n(X) induced by the face maps d_i : X_n → X_{n−1} for i ≠ k, and X is said to satisfy the Kan property if all these maps are regular epimorphisms. Moreover, a map f : X → Y between simplicial objects is called a Kan fibration if for all n and k the canonical arrow θ_k^n in the corresponding comparison diagram (where the inner square is a pullback) is a regular epimorphism. For every n ≥ 1, we denote by ∆_n the full subcategory of ∆ consisting of the ordinals with n + 1 elements or less, and by Simp_n(C) the category of functors ∆_n^op → C, whose objects we call n-truncated simplicial objects. The inclusion ∆_n ֒→ ∆ then induces by precomposition the truncation functor Simp(C) → Simp_n(C).
An internal reflexive graph in C is simply a 1-truncated simplicial object. A multiplicative graph is then a reflexive graph endowed with a partial multiplication m : X_1 ×_{X_0} X_1 → X_1 that is unital and compatible with the domain and codomain maps ( [13]), and an internal category is a multiplicative graph whose multiplication is associative. All these conditions can be summarized by saying that an internal multiplicative graph is an object of Simp_2(C) for which the relevant comparison square is a pullback. Internal functors are also the same thing as (restricted) simplicial morphisms. Moreover, any internal category can be extended to a simplicial object by simply taking its nerve. From now on we will thus consider Cat(C) and Grpd(C) as full subcategories of Simp(C); more precisely, a simplicial object X is an internal category if and only if the corresponding commutative square is a pullback for all n ≥ 2, as spelled out below.
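One standard equivalent formulation of this family of pullback conditions (sometimes called the Segal condition; stated here as a reminder rather than as the authors' preferred phrasing) is that the canonical comparison morphisms

```latex
X_n \longrightarrow \underbrace{X_1 \times_{X_0} X_1 \times_{X_0} \cdots \times_{X_0} X_1}_{n \text{ factors}}
```

are isomorphisms for all n ≥ 2, where each pullback is taken along the appropriate face maps d_0 and d_1.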
1.2. Mal'tsev categories and higher extensions. A finitely complete category C is called a Mal'tsev category if every internal reflexive relation is an equivalence relation [11,12,13,2]; in a regular category, this condition is equivalent to the fact that the composition R•S of two equivalence relations R, S on the same object X is an equivalence relation. When this is the case, R•S is in fact the join of R and S in the poset of equivalence relations of X. Accordingly this poset is a lattice. In fact this is a modular lattice ( [12]), i.e. we have the identity R ∨ (S ∧ T) = (R ∨ S) ∧ T for all equivalence relations R, S, T on X such that R ≤ T. An important property of Mal'tsev categories is that the inclusion of the category Grpd(C) of internal groupoids into the category MRG(C) of multiplicative reflexive graphs is an isomorphism, and that the truncation functor MRG(C) → RG(C) is fully faithful ( [13]).
For a variety, this is also equivalent to the existence of a ternary operation p satisfying the equations p(x, y, y) = x and p(x, x, y) = y. In particular, the categories of groups, R-modules, rings, Lie algebras and C * -algebras are all examples of Mal'tsev categories; other examples include the category of Heyting algebras, the dual of any topos [5] or any additive category.
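For instance, in the variety of groups one may take p(x, y, z) = xy^{-1}z, which visibly satisfies both Mal'tsev identities:

```latex
p(x, y, y) = x y^{-1} y = x, \qquad p(x, x, y) = x x^{-1} y = y.
```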
In any regular category, a commutative square of regular epimorphisms is called a double extension when the induced comparison morphism to the pullback of its cospan is also a regular epimorphism. We can then define a triple extension as a commutative cube for which all faces, as well as the induced commutative square of comparison morphisms, are double extensions. Triple extensions satisfy the same properties as in the previous proposition: in particular, a split cube between double extensions is always a triple extension.
1.3. Categorical Galois theory and monotone-light factorization systems. We recall some definitions from [28,29]. A Galois structure Γ = (C, X , I, U, F ) consists of a category C, a full reflective subcategory X of C, with reflector I : C → X and inclusion U : X → C and a class F of morphisms of C containing all isomorphisms, stable under pullbacks and composition, and preserved by I. We will call the morphisms in F extensions. Let us write, for any object B of C (resp. of X ), C ↓ B (resp. X ↓ B) for the full subcategory of the slice category C/B (resp. X /B) consisting of extensions f : X → B. Then any arrow p : E → B induces a functor p * : C ↓ B → C ↓ E defined by pulling back. If p is an extension, this functor has a left adjoint p ! defined by composition with p; the extension p is said to be of effective F -descent, or simply a monadic extension, if the functor p * is monadic.
Moreover, the reflector I induces for every B a functor I^B : C ↓ B → X ↓ I(B) which maps f : X → B to I(f) : I(X) → I(B); and every such functor has a right adjoint U^B : X ↓ I(B) → C ↓ B, defined for any g : Y → I(B) by pulling back U(g) along the unit η_B : B → UI(B). The object B is then said to be admissible if U^B is fully faithful, which is equivalent to the reflector I preserving all pullback squares of the form above. A Galois structure Γ is said to be admissible if every object is admissible. Given an admissible Galois structure, an extension f : X → B in C ↓ B is said to be
• trivial if it lies in the replete image of U^B, or equivalently if the naturality square of the unit at f (shown below) is a pullback;
• central, or alternatively a covering, if there exists a monadic extension p : E → B such that p*(f) is trivial;
• normal, if it is a monadic extension and if f*(f) is a trivial extension (that is, if the projections of the kernel pair of f are trivial).
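For a trivial extension, the square in question is the naturality square of the unit η of the adjunction (standard in categorical Galois theory):

```latex
\begin{array}{ccc}
X & \xrightarrow{\ \eta_X\ } & UI(X)\\[2pt]
{\scriptstyle f}\downarrow & & \downarrow{\scriptstyle UI(f)}\\[2pt]
B & \xrightarrow{\ \eta_B\ } & UI(B)
\end{array}
```

and f is trivial precisely when this square is a pullback.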
Example 1. If C is an exact Mal'tsev category and X is any Birkhoff (i.e. full reflective and closed under quotients and subobjects) subcategory of C, and F is the class of regular epimorphisms, then the Galois structure Γ is admissible, and moreover every extension is monadic and every central extension is also normal ( [28]). When C is the category of groups and X the subcategory of abelian groups, the central extensions in this sense are exactly the surjective group homomorphisms whose kernel is included in the center of the domain ( [28]). More generally, in any exact Mal'tsev category with coequalizers, the central extensions of the Galois structure defined by the subcategory of abelian objects are the extensions such that the Smith-Pedicchio commutator [Eq[f], ∇_X] is trivial ( [22]).
If Γ is a Galois structure where F is the class of all morphisms in C, admissibility is equivalent to the reflector I being semi-left-exact in the sense of [14]. In that case any morphism f : X → B in C induces a commutative diagram in which f factors as f = f″ ∘ f′, with f′ : X → B ×_{I(B)} I(X) the comparison morphism and f″ the projection.
Since the reflector I preserves the pullback in this diagram I(f ′ ) is an isomorphism, and f ′′ is a trivial extension by definition. Moreover in that case the classes E of morphisms inverted by I and the class M of trivial extensions are orthogonal to one another, and thus the two classes form a factorisation system (E, M) in C ( [14]). The trivial extensions are then stable under pullbacks, but the class E does not have this property in general.
In order to obtain a stable factorization system, one can localize M and stabilize E, as in [9]; this means that we replace E by the class E′ of maps for which every pullback along a monadic extension is in E, and M by the class M* of maps f that are locally in M, in the sense that there exists a monadic extension p such that p*(f) ∈ M. In the context of Galois theory these are precisely the central extensions. The two classes E′ and M* are orthogonal, but in general they do not form a factorization system. When they do, the resulting factorization system is called the monotone-light factorization system. In the case where F is no longer the class of all morphisms in C, it need not be true that every morphism admits an (E, M)-factorization. Nevertheless, this is still true for extensions; it is then natural to extend the notion of factorization system to the case where only some morphisms have a factorization. This was done by Chikhladze in [15]: if C is a category and F a class of morphisms of C containing the identities and closed under composition and pullbacks, a relative factorization system for F consists of two classes E and M of maps, orthogonal to each other, such that every morphism in F admits a factorization into a map in E followed by a map in M. Then any admissible Galois structure Γ = (C, X, I, F) yields a relative factorization system for F, with E and M consisting of the maps inverted by I and the trivial extensions, respectively. When moreover this factorization system can be stabilized, then the stable factorization system (E′, M*) is called a relative monotone-light factorization system for F.
Example 2. If C is the category of simplicial sets, X the category of groupoids, I the fundamental groupoid functor, and F the class of Kan fibrations, then every Kan complex is an admissible object, and the central extensions were called second order coverings in [8]. This Galois structure admits a relative monotone-light factorization system, as shown in [15].
This Galois structure admits a relative monotone-light factorization system, as shown in [15].
Example 3. In a finitely complete category, any object X has a corresponding discrete internal groupoid. This defines a fully faithful functor D : C → Grpd(C). If C is exact, then this functor admits a semi-left-exact left adjoint π 0 : Grpd(C) → C ( [4]). When C is moreover Mal'tsev, C is in fact a Birkhoff subcategory of Grpd(C), and the central extensions of the Galois structure (Grpd(C), C, π 0 , F ) (where F is the class of regular epimorphisms) are precisely the regular epimorphic discrete fibrations ( [21]). This Galois structure admits a relative monotone-light factorization system ( [16]).
The reflection of simplicial objects into groupoids
Convention. From now on, C will denote a regular Mal'tsev category. For a given simplicial object (X n ) n≥0 with face maps d i : X n → X n−1 for n ≥ 1 and 0 ≤ i ≤ n, we will denote D i the kernel pair of d i .
Note that Simp(C), being a functor category, is also regular Mal'tsev. Lemma 1. If X is a simplicial object in C, all the commutative squares formed by pairs of face maps d_i, d_j (with their degeneracy sections) are double extensions. Proof. If i < j − 1, the two squares in the diagram commute. On the other hand, if j = i + 1, then at least one of the inequalities 1 ≤ j ≤ n + 2 is strict, hence at least one of the squares will commute; in any case, the commutative square is a double extension.
Moreover, any morphism f : X → Y of simplicial objects has to commute with the face and degeneracy maps; hence, when f is a regular epimorphism, every such square is a double extension. The resulting cube will then always be a split epimorphism between double extensions, hence a triple extension.
Remark. The pullback X 1 × X 0 X 1 of d 0 along d 1 coincides with the object of (2, 1)-horns Λ 2 1 (X), and similarly the other two pullbacks X 1 × X 0 X 1 , which define the kernel pairs of d 0 and d 1 , coincide with the objects of (2, 2) and (2, 0)-horns, respectively. In particular, the previous lemma shows that every simplicial object satisfies the Kan property and that every regular epimorphism is a Kan fibration for 2-horns. The proof for the higher order horns can be done in the same way, using n-fold extensions for n ≥ 3, as in [18].
As a consequence we have Proof. By the previous lemma f : X → Y induces a triple extension. In particular the square Moreover, d_k can be identified with a component of the regular epimorphism ε_X : Dec(X) → X, and thus the faces of the cube are all double extensions, which implies the desired equalities.
Lemma 2. For any simplicial object X, the equivalence relations d_0(D_1 ∧ D_2), d_1(D_0 ∧ D_2) and d_2(D_0 ∧ D_1) in X_1 are all equal. Proof. We prove the first identity; the other one is obtained in a similar way. Definition 2. We will call H_1(X) this equivalence relation.
Proposition 2. Let X be a simplicial object in C. Then for all n ≥ 2 the following conditions are equivalent: (1) D_i ∧ D_j = ∆_{X_n} for some 0 ≤ i < j ≤ n; (2) D_i ∧ D_j = ∆_{X_n} for all 0 ≤ i < j ≤ n. Moreover, X is an internal groupoid if and only if it satisfies these conditions for all n ≥ 2.
We first consider the case where n = 2; for this case it is enough to prove that Assuming now that the condition holds for n, we prove that it holds for n + 1. Assume that D i ∧ D j = ∆ X n+1 ; then taking images by d k (for k / ∈ {i, j}) on both sides shows that D i ′ ∧ D j ′ = ∆ Xn for some i ′ , j ′ , and thus, by the induction hypothesis, for all i ′ , j ′ . In particular, for any 0 ≤ r < s ≤ n + 1, we have for some r ′ , s ′ Now X is an internal groupoid if and only if the squares are all pullbacks. Since we know already that they are all double extensions, this is equivalent to the fact that the pair d 0 , d n is jointly monic, and this is equivalent to (2).
Thus any internal category always satisfies the second condition, and conversely any simplicial object satisfying the first one is an internal category where the square is a pullback. This condition is equivalent to the internal category being a groupoid.
Note that in the above proof we only needed to know that X was an internal category to prove that it satisfied the conditions; so this gives us a new proof of the fact that any internal category in a regular Mal'tsev category is an internal groupoid. Corollary. Grpd(C) is closed under quotients and subobjects in Simp(C). Proof. All the intersections that characterize internal groupoids in the previous proposition are preserved by regular epimorphisms of simplicial objects, which shows that groupoids are closed under quotients. Moreover they are also closed under subobjects: in the relevant squares the horizontal sides are monomorphisms and the right-hand vertical side is an isomorphism, and thus the left-hand vertical side is a monomorphism. Since it is also a regular epimorphism (by Lemma 1), this means ⟨d_i, d_j⟩ is an isomorphism, hence X is an internal groupoid.
Remark. In fact the previous corollary also characterizes Mal'tsev categories among the regular (or even finitely complete) ones: indeed a reflexive relation R ֒→ X × X is just a subobject of the reflexive graph (X × X, X, π_1, π_2, δ_X), and by taking iterated simplicial kernels, one can extend this to a monomorphism in Simp(C), whose codomain is just the nerve of the indiscrete equivalence relation/groupoid on X. Thus every reflexive relation is a subobject of a groupoid, and a relation is a groupoid if and only if it is an equivalence relation. Convention. We now assume that the category C is also exact.
In this setting, we have Theorem 1: the inclusion Grpd(C) ֒→ Simp(C) admits a left adjoint. Proof. Note first that since by definition H_1(X) ≤ D_0 ∧ D_1, both d_0 and d_1 : X_1 → X_0 factor through the coequalizer η_1 of H_1(X), and their factorizations have a common section η_1 s_0, which we will also denote s_0, so that we get a morphism of reflexive graphs. Let us then form the pullback of these factorizations; the induced comparison map is a regular epimorphism, and as a consequence so is the composite, which we will denote η_2. We also define H_2(X) = Eq[η_2]. Now we need to show that η_1 d_1 : X_2 → X_1/H_1(X) factorizes through η_2; for this it is enough to show that η_1 d_1(H_2(X)) = ∆, which is equivalent to d_1(H_2(X)) ≤ H_1(X). Since d_0 and d_2 are jointly monic by construction, we find that this yields a multiplicative graph, which is then automatically a groupoid, which we denote I(X). We also denote η_X : X → I(X) the morphism of simplicial objects induced by 1_{X_0}, η_1 and η_2. We can show that η_n is a regular epimorphism for all n, by iterating the argument showing that η_2 is a regular epimorphism.
The only thing that remains to be checked is that this construction is universal. For this we must prove that for every morphism f : X → Y to a groupoid Y, there exists a factorization of f_n through η_n : X_n → I(X)_n for all n (note that such a factorization is unique, as every η_n is a regular epimorphism). The case n = 0 is trivial as η_0 is the identity. For n = 1, it is enough to prove that Eq[f_1] ≥ H_1(X), or equivalently that f_1(H_1(X)) is trivial. This shows that the truncation of f to a morphism (f_1, f_0) of reflexive graphs factors through the groupoid X_1/H_1(X), with a factorization (g_1, g_0 = f_0); applying the nerve functor allows us to extend this factorization to higher levels, resulting in morphisms g_n : I(X)_n → Y_n. Then the factorizations f_n = g_n η_n for n ≥ 2 can be obtained from the universal property of the pullbacks defining each X_n and Y_n. Then since each η_n is a regular epimorphism, the morphisms g_n define a morphism of simplicial objects.
Let us denote H_n(X) the kernel pair of η_n; we have already described H_1(X) and H_2(X) above. For the next section, it will be useful to prove a similar formula for H_n(X) for n ≥ 3, namely H_n(X) = ⋁_{0 ≤ i < j ≤ n} (D_i ∧ D_j). Proof. We prove the result by induction on n. The case n = 2 was done in the proof of Theorem 1. Now let us assume that it holds for n; since by construction the square is a pullback, so that the two maps d_0, d_{n+1} are jointly monic, we have the analogous identity for n + 1
Moreover, by the induction hypothesis we have the identities
Combining all these, we get the identity From there we already see that For the converse inequality, first note that and thus, since the lattice of equivalence relations of X n+1 is modular, we have
Now to conclude the proof it is enough to prove that
for all m ≥ 1, which we will do by induction. The case where m = 1 is trivial, so let us now assume that (3) holds for some m. Then we have and as a consequence we have It follows that the left-hand side must be equal to Now since 0<j≤m+1 (D 0 ∧ D j ) ≤ D 0 , using again the modularity law, we find that and this is smaller than D m+1 ∨ 0≤i<j<m+1 (D i ∧ D j ) , which concludes the proof.
Remark. If the category C is not only exact Mal'tsev but also arithmetical ( [36]), then the category Grpd(C) coincides with the category of equivalence relations, which is thus a Birkhoff subcategory of Simp(C). Note that in that case, H 1 (X) = d 0 (D 1 ∧D 2 ) = D 0 ∧D 1 , since direct images preserve intersections of equivalence relations (by Theorem 5.2 of [7]). Accordingly our reflection becomes a reflection of Simp(C) into Eq(C).
Corollary 4. An exact Mal'tsev category is arithmetical if and only if Eq(C) is a Birkhoff subcategory of Simp(C).
Remark. Note that, by contrast with the Smith-Pedicchio commutator, whose quotient gives a left adjoint of the forgetful/inclusion functor Grpd(C) → RG(C), we don't need to assume the existence of any colimits to define H 1 (X).
Characterization of central extensions
Being a Birkhoff subcategory of the exact Mal'tsev category Simp(C), Grpd(C) is admissible in the sense of categorical Galois theory, when F is the class of all regular epimorphisms. In this section we will characterize the central extensions with respect to this reflection.
Convention. If f : X → Y is a map in Simp(C), we will denote F n the kernel pair of the corresponding map f n : X n → Y n , for all n ≥ 0. Similarly, for maps g : Z → W and f ′ : X ′ → Y ′ in Simp(C), we will denote the corresponding kernel pairs G n and F ′ n (for n ≥ 0), respectively.
First, we note that Proposition 4.2 of [28] implies, in our case, that trivial extensions f : X → Y are characterized by the property that F_n ∧ H_n(X) = ∆_{X_n} for all n ≥ 0. Our characterization of central extensions is then obtained simply by "distributing" the intersection with F_n appearing in these equations over the join or image: in other words, we have d_0(F_2 ∧ D_1 ∧ D_2) = ∆_{X_1}, and (5) F_n ∧ D_i ∧ D_j = ∆_{X_n} for all 0 ≤ i < j ≤ n and all n ≥ 2.
To prove this we will need a couple of lemmas.
be a pullback square of regular epimorphisms in Simp(C), and let n ≥ 2 and 0 ≤ i < j ≤ n. Let us denote d′_i the face maps of the simplicial object P, and D′_i their kernel pairs. Then F_n ∧ D_i ∧ D_j = ∆_{X_n} if and only if F′_n ∧ D′_i ∧ D′_j = ∆_{P_n}. Proof. Since pullbacks in Simp(C) are computed "levelwise" in C, for all n the square is a pullback. Since moreover limits commute with limits, in the cube the top and bottom faces are pullbacks; one can then show that the square is a pullback as well. For the converse, equation (6) shows already that if F_n ∧ D_i ∧ D_j = ∆_{X_n}, then, since the corresponding relation is also smaller than F′_n, and since f′_n and g′_n are jointly monic by construction, we have F′_n ∧ D′_i ∧ D′_j = ∆_{P_n}. Lemma 4. Let f : X → Y be a split epimorphism, with section s : Y → X, and let A, B be two equivalence relations on X, with respective coequalizers q_A, q_B. Assume that we have a diagram where the vertical downward arrows are split epimorphisms, and the upward and downward squares commute. Then the following conditions are equivalent: Proof. First of all, we have the inequality which immediately proves that the first condition implies the second.
For the converse, we can complete the diagram (7) by taking the pushouts of the top and bottom spans. This yields a cube which is a split epimorphism between double extensions, hence a triple extension. In particular, the square The first equality implies that ⟨q_A, f⟩ is a mono, hence an iso; then so is γ in the diagram above, and thus the left and right faces of the cube are pullbacks. Similarly, the second equality involving B ∧ Eq[f] implies that the top face is a pullback as well, and then so is the square Proof of Theorem 2. Let us consider the diagram Now assume first that f is a central extension, so that the left-hand square is a pullback. Since by construction I(X ×_Y X) is an internal groupoid, (5) holds for I(π_1), and then by Lemma 3 it also holds for π_1 and thus for f.
Assuming now that (5) holds, then again by Lemma 3 it also holds for π_1 : X ×_Y X → X, so that Eq[(π_1)_n] ∧ D′_i ∧ D′_j = ∆_{X_n ×_{Y_n} X_n} for all 0 ≤ i < j ≤ n. But π_1 is a split epimorphism in the category of simplicial objects of C. Thus in particular, for all 0 ≤ i < j ≤ n, (π_1)_n and D′_i ∧ D′_j satisfy the assumptions of Lemma 4, and thus we have This implies that the left-hand square is a pullback; thus π_1 is a trivial extension, and f is a central extension.
The equivalence relation F_2 ∧ D_0 ∧ D_1 is the kernel pair of the arrow θ_2^2, so its triviality means that the corresponding square is a pullback. The triviality of F_2 ∧ D_0 ∧ D_2 and F_2 ∧ D_1 ∧ D_2 can be interpreted in the same way with the horn objects Λ_1^2 and Λ_0^2. Moreover, the higher order conditions F_n ∧ D_i ∧ D_j = ∆_{X_n} imply that all the morphisms θ_k^n for n ≥ 2 are isomorphisms, and thus that all the corresponding squares are pullbacks. One can prove that the converse is true as well.
Comparison with simplicial sets
As noted before, the left adjoint to the nerve functor between groupoids and simplicial sets is the fundamental groupoid functor [19]. For a simplicial set X which satisfies the Kan condition, also called a quasigroupoid, this left adjoint can alternatively be described as the homotopy groupoid (see [1,31]). One defines the homotopy relation on X 1 by saying that two elements (or 1-simplices) f, g ∈ X 1 are homotopic if and only if there exists α ∈ X 2 such that d 0 (α) = f , d 1 (α) = g and d 2 (α) = s 0 d 1 f = s 0 d 1 g. This is always a reflexive relation (since for a given f one can take α = s 0 f ), and using the Kan condition one can then prove that this is actually an equivalence relation. The homotopy groupoid is then the groupoid whose objects are just the elements of X 0 , arrows are homotopy classes of 1-simplices, identities defined by the classes of degenerate 1-simplices, and composition defined by the existence of fillers for (2, 1)-horns (with two sided inverses defined by the existence of fillers for the outer horns).
This relation can be interpreted in any regular category as follows: first take the pullback X_0 ×_{X_1} X_2, and then factorize the map (d_0, d_1)π_1 : X_0 ×_{X_1} X_2 → X_1 × X_1 as a regular epimorphism q : P → R followed by a monomorphism r = (ρ_1, ρ_2) : R → X_1 × X_1, so that R is a relation on X_1. As in the case of sets, this is a reflexive relation; indeed, the simplicial identities provide the required reflexivity witness. This relation is in fact equal to d_0(D_1 ∧ D_2) whenever X satisfies the Kan condition, as we shall now see. In fact it will be helpful to prove a slightly more general result: given any regular epimorphism f : X → Y between two simplicial objects, if we take the corresponding pullback, the case where f is the morphism ε_X : Dec(X) → X gives the desired identity.
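In symbols, the construction just described is the (regular epi, mono) factorization

```latex
X_0 \times_{X_1} X_2 \xrightarrow{\ q\ } R \xrightarrow{\ (\rho_1,\rho_2)\ } X_1 \times X_1,
\qquad R = \operatorname{im}\big((d_0, d_1)\,\pi_1\big),
```

where, to match the set-based description, the pullback may be taken of d_2 : X_2 → X_1 along the degeneracy s_0 : X_0 → X_1; this identification of the pullback's legs is our reading of the elided diagram, not a statement from the text.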
Proof. Consider the diagram where the top and bottom faces of the cube are pullbacks. Since all the vertical arrows are split by a degeneracy map s_0, and the horizontal maps commute with these sections, the dotted arrow is a split epimorphism as well. In particular, the image factorization of (d_0^X, d_1^X)π_1 is the same as that of the corresponding composite, which concludes the proof.
Since we have a decomposition of f 2 given by the diagram we can rewrite the top pullback in (9) as the upper rectangle in the following diagram : , the composition λ 2 0 s 1 is a monomorphism, and thus so is ϕ 1 m. Since λ 2 0 is the regular epimorphism in the factorization of (d X 1 , d X 2 ) : On the other hand, the right-hand rectangle above coincides with the left-hand square in the rectangle Since the two squares are pullbacks, the whole rectangle is one as well. But this is the same as the outer rectangle in where the two squares are again pullbacks. Thus P coincides with the intersection D 1 ∧F 1 , which concludes the proof.
Remark. If one sees a Kan complex as a quasigroupoid or ∞-groupoid, then the left adjoint to the nerve or inclusion functor Grpd → Kan is in a sense a "strictification", which turns quasigroupoids into actual groupoids.
The equivalence relation d_0(D_1 ∧ D_2 ∧ F_2) which appears in our characterization of central extensions admits an alternative construction, similar to that of H_1(X). More precisely, if we take now L to be the limit of the lower part of the diagram (with the dotted arrows forming the limit cone), then the limit in diagram (10) can also be obtained as the pullback described above. Moreover, the pullback square above factorises as a rectangle, and one can easily show that the right-hand square is a pullback, and as a consequence so is the left-hand side square. But this square is exactly the pullback that appears if we apply Lemma 5 to the induced map ⟨ε_X, Dec(f)⟩ : Dec(X) → X ×_Y Dec(Y), which is a regular epimorphism between simplicial objects because the square is a double extension in C for all n. Thus Lemma 5 implies that the two constructions are equal.
The relative monotone-light factorization system
In order to prove that our Galois structure admits a relative monotone-light factorization system, we use the following criterion, due to Carboni, Janelidze, Kelly and Paré in the absolute case and to Chikhladze in the relative case: Proposition 4 ( [9,15]). Let (C, X, I, F) be an admissible Galois structure. The class F admits a monotone-light factorization if for each object B of C there is an effective F-descent morphism p : C → B where C is a stabilizing object, i.e. an object such that if h = me is the (E, M)-factorization of any morphism h : X → C, then any pullback of e along a map in F is still in E.
We will prove that, in our case, the shifting Dec(X) of a simplicial object X is always stabilizing. For this it suffices to prove that exact objects are stabilizing, since we have the following (Proposition 3.9): any simplicial object that is contractible and also satisfies the Kan condition is exact.
As a consequence, if X satisfies the Kan condition, then its shifting Dec(X) is exact. We will need the following characterization of images in regular categories: Proposition 6 ( [11]). Let f : X → Z and g : Y → Z be two morphisms in a regular category C. Then g factors through the regular image of f if and only if there exist an object W of C with a morphism h : W → X and a regular epimorphism q : W → Y such that f h = gq.
Note that this equality means that an extension whose codomain is exact is trivial if and only if it is central.
Proof. The inequality always holds. To prove the converse, we consider the inclusion ϕ = (ϕ_1, ϕ_2) of the equivalence relation d_0(D_1 ∧ D_2) ∧ F_1 into X_1 × X_1. Since this relation is smaller than d_0(D_1 ∧ D_2), by the characterization given in Proposition 6 and the alternative construction of d_0(D_1 ∧ D_2) given in Section 4, there must exist a regular epimorphism p : Z → d_0(D_1 ∧ D_2) ∧ F_1 and a morphism α = ⟨α_1, α_2⟩ : Z → X_2 ×_{X_1} X_0 such that d_0 α_1 = ϕ_1 p and d_1 α_1 = ϕ_2 p; and since moreover it is smaller than F_1 we have Now consider the maps One can check that the identity d_i y_j = d_{j−1} y_i holds for all 0 ≤ i < j ≤ 3, so that these maps determine a map y from Z to the third simplicial kernel K_3(Y), and we can consider the pullback. Y being exact at Y_2 means that κ_3 is a regular epimorphism, and as a consequence so is p′. Consider now the maps One can check that the identity d_i x_j = d_{j−1} x_i holds for all i < j with i ≠ 2 ≠ j, thus they determine a map x : Z′ → Λ_2^3(X); and moreover we have Since θ_2^3 is a regular epimorphism, so is p″, and by construction we have d_i α″ = x_i p″ for i = 0, 1, 3 and f_3 α″ = α′ p″. Now the map d_2 α″ : Z″ → X_2 is such that there exists β : Z″ → L (the limit defined in diagram (10)) such that ρ_1 β = α_2 p′ p″, ρ_2 β = d_2 α″ and ρ_3 β = f_1 d_1 α_1 p′ p″. Now we can check that
Proof. To simplify the diagrams, we denote P = Y × I(Y) I(X). Let us consider a pullback square with g a regular epimorphism in Simp(C). We need to prove that I(h) : I(Q) → I(Z) is invertible. Since it is a map between internal groupoids, it is enough to prove that I(h) 0 and I(h) 1 are invertible. Note that the functor I leaves the objects X 0 unchanged, and thus f 0 , η 0 is an isomorphism, and thus so are h 0 and I(h) 0 . So we only need to prove is that I(h) 1 is an isomorphism.
Since Grpd(C) is a Birkhoff subcategory of Simp(C) and h is a regular epimorphism, the square is a double extension in Simp(C), and thus the square is a (regular) pushout in C. This already proves that h 1 = I(h) 1 is a regular epi. Now if there exists a map t : Z 1 → Q 1 /H 1 (Q) such that th 1 = (η Q ) 1 , then using the universal property of the pushout (12) we can construct a retraction for h 1 , which proves that it is an isomorphism. So we are left to prove that such a map t exists ; since h 1 is a regular epimorphism, it is enough to prove that Eq[h 1 ] ≤ H 1 (Q).
To prove this, we denote ψ 1 , ψ 2 : Eq[h 1 ] → Q 1 the two projections of the kernel pair. Then the commutativity of (11) (or rather, the corresponding commutative square involving h 1 in C) implies that where the last equality is the preceding lemma. As a consequence, we know that there must exist an arrow α : A → L and a regular epimorphism p : We now prove that f 2 , η 2 ρ 2 α factors through a degeneracy of P. More precisely, we prove that Since the degeneracy map s P 0 is induced by those of I(X) and Y, it is enough to prove that f 2 ρ 2 α and η 2 ρ 2 α factorize in the same manner through s Y 0 and s I(X) 0 respectively. By construction we must have By construction the two maps d are jointly monic, and thus these equalities implies that and this in turn implies that (13) hold. From this we find that Since Q 2 is the pullback of g 2 along f 2 , η 2 , there exists a unique map α ′ : A → Q 2 such that h 2 α ′ = s Z 0 h 1 ψ 1 p and g ′ 2 α ′ = ρ 2 α. From this, we find that 1 and h 1 are jointly monic, we have d Q 0 α ′ = ψ 1 p, and similarly d Q from the definition of Q 1 it suffices to check that the identity holds after composition with h 1 and g ′ 1 . We have α ′ Thus α ′ factorizes through the pullback of s Q 0 along d Q 2 , and thus (ψ 1 , ψ 2 )p = (d Q 0 , d Q 1 )α ′ factorizes through the inclusion of H 1 (Q) in Q 1 × Q 1 , which concludes the proof.
As a consequence, we then have Theorem 3. The Galois structure (Simp(C), Grpd(C), I, U, F ) admits a relative monotonelight factorization system (E ′ , M * ), where E ′ is the class of maps stably inverted by I and M * is the class of central extensions of this Galois structure.
Truncated simplicial objects and weighted commutators
For all $n \ge 2$, the inclusion $\mathrm{Grpd}(C) \hookrightarrow \mathrm{Simp}_n(C)$ factorizes through $\mathrm{Simp}(C)$, and the characterization of groupoids in truncated simplicial objects is identical. Moreover, the construction of the equivalence relations $H_n(X)$ does not depend on the objects $X_m$ for $m > n$. Thus $\mathrm{Grpd}(C)$ can also be seen as a Birkhoff subcategory of $\mathrm{Simp}_n(C)$, with the reflection defined in the same way, in the sense that the reflectors commute with the truncation functor. The characterization of central extensions also extends in the same way.
The inclusion $\mathrm{Grpd}(C) \hookrightarrow \mathrm{Simp}_1(C) = \mathrm{RG}(C)$ also factors through $\mathrm{Simp}(C)$, as every reflexive graph admits at most one groupoid structure ([13]). On the other hand, this time the reflection does not commute with the truncation, as the construction of $H_1(X)$ depends on $X_2$ and the face maps $X_2 \to X_1$. In fact, the reflection $\mathrm{RG}(C) \to \mathrm{Grpd}(C)$ is obtained by taking the quotient of $X_1$ by the Smith–Pedicchio commutator $[D_0, D_1]_{SP}$ ([35]). The central extensions of reflexive graphs in exact Mal'tsev categories (with coequalizers) with respect to this adjunction have been characterized in [17]. Note that this commutator is preserved by regular images, and is always smaller than the intersection; as a consequence, the corresponding inequalities always hold. It turns out that this reflection can also be obtained by applying our results, at least when the category $C$ is finitely cocomplete. Indeed, in that case the truncation functor $\mathrm{Simp}(C) \to \mathrm{RG}(C)$ has a left adjoint $G$, defined by taking left Kan extensions along the inclusion $\Delta_2^{\mathrm{op}} \to \Delta^{\mathrm{op}}$. Now since the inclusion $\mathrm{Grpd}(C) \to \mathrm{RG}(C)$ is the composition of the nerve functor and the truncation $T$, the functor $IG$ must be a left adjoint to this inclusion. Thus our work can be used to give an alternative description of the Smith–Pedicchio commutator as the equivalence relation $H_1(G(X_1, X_0, d_0, d_1, s_0))$.
Let us make this construction explicit. The object $X_2 = (G(X_1, X_0, d_0, d_1, s_0))_2$ is the pushout $X_1 +_{X_0} X_1$ of $s_0 : X_0 \to X_1$ along itself, with $s_0$ and $s_1$ the two canonical maps $X_1 \to X_1 +_{X_0} X_1$. In order to satisfy the simplicial identities we must then define $d_0$ to be the unique map for which $d_0 s_0 = 1$ and $d_0 s_1 = s_0 d_0$, which we denote $[1, s_0 d_0] : X_1 +_{X_0} X_1 \to X_1$; similarly, we must have $d_1 = [1, 1]$ and $d_2 = [s_0 d_1, 1]$.
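Collected in display form, the construction just described reads as follows (with $[\,\cdot\,,\cdot\,]$ denoting the map induced on the pushout by its two components; the second display simply records the simplicial identities these definitions satisfy):
\[
  X_2 \;=\; X_1 +_{X_0} X_1,
  \qquad
  d_0 = [\,1,\ s_0 d_0\,],
  \qquad
  d_1 = [\,1,\ 1\,],
  \qquad
  d_2 = [\,s_0 d_1,\ 1\,],
\]
\[
  \text{so that }\;
  d_0 s_0 = d_1 s_0 = d_1 s_1 = d_2 s_1 = 1,
  \qquad
  d_0 s_1 = s_0 d_0,
  \qquad
  d_2 s_0 = s_0 d_1 .
\]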
In the case where $C$ is not only exact Mal'tsev but also semi-abelian ([30, 2]), there is for every object an order-preserving bijection between equivalence relations and normal subobjects, which is also compatible with regular images. Accordingly, our results can easily be translated in terms of normal subobjects, by replacing every kernel pair by the kernel of the corresponding morphism.
Proof. Let us denote by $K_i \le X_1$ the kernel of $d_i : X_1 \to X_0$ (for $i = 0, 1$). We recall from [23] the construction of the weighted commutator $[K_0, K_1]_{X_0}$: we first define $\psi$ as the map making the diagram | 2019-11-20T15:55:30.000Z | 2019-11-20T00:00:00.000 | {
"year": 2021,
"sha1": "5641f6d640845e5bf8c96d20e5380a33757620b4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1911.08986",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5641f6d640845e5bf8c96d20e5380a33757620b4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
16742248 | pes2o/s2orc | v3-fos-license | Chinese Medicines Induce Cell Death: The Molecular and Cellular Mechanisms for Cancer Therapy
Chinese medicines have a long history in treating cancer. With the growing scientific evidence from biomedical research and clinical trials in cancer therapy, they are increasingly accepted as a complementary and alternative treatment. One of their mechanisms is to induce cancer cell death. Aim. To comprehensively review the publications concerning cancer cell death induced by Chinese medicines in recent years and provide insights into anticancer drug discovery from Chinese medicines. Materials and Methods. Chinese medicines (including Chinese medicinal herbs, animal parts, and minerals) were used in the study. The key words including "cancer", "cell death", "apoptosis", "autophagy," "necrosis," and "Chinese medicine" were used to retrieve related information from PubMed and other databases. Results. The cell death induced by Chinese medicines is described as apoptotic, autophagic, or necrotic cell death, among other types, with an emphasis on the mechanisms of anticancer action. The relationship among the different types of cell death induced by Chinese medicines is critically reviewed and discussed. Conclusions. This review summarizes that CM treatment can induce multiple pathways leading to cancer cell death, among which apoptosis is the dominant type. Translating this preclinical research into clinical application will be a key issue in the future.
Introduction
Cancer is one of the leading causes of death in the world. GLOBOCAN data revealed that approximately 12.7 million new cases of cancer were diagnosed and 7.6 million deaths were attributed to cancer in 2008 [1]. The causes of these life-threatening cancers are diverse, complex, and only partially understood; the reason they are difficult to cure may lie in the complicated cancer hallmarks: sustaining proliferative signaling, resisting cell death, inducing angiogenesis, enabling replicative immortality, activating invasion and metastasis, evading growth suppressors, deregulating cellular energetics, genome instability and mutation, tumor-promoting inflammation, and avoiding immune destruction. Among these, resisting cell death is the action by which tumors escape insults triggered by intracellular or external factors [2].
Cell death has conventionally been divided into three types: apoptosis (Type I), autophagy (Type II), and necrosis (Type III) [3,4]. Apoptosis, Type I programmed cell death (PCD), is a normal response in physiological processes; it becomes a pathological trait in many diseases, including cancers, when apoptosis is dysregulated. It is also the major type of cell death induced by most frontline chemotherapeutic agents [3,5,6]. In the process of apoptotic cell death, cells show altered morphology such as blebbing, cell shrinkage, nuclear fragmentation, and chromatin condensation. Morphological features of Type II cell death differ from those of apoptosis: autophagosome formation and autophagosome-lysosome fusion mediate the degradation of components in cancer cells through the lysosomal machinery [7]. Type III cell death is a necrotic process whose typical characteristics include disruption of the plasma membrane and induction of inflammation; it has conventionally been regarded as an accidental, uncontrolled form of cell death. However, recent studies found that necrosis can be under control, as it shares the same stimuli (cytokines, pathogens, ischemia, heat, and irradiation), signaling pathways (death receptors, kinase cascades, and mitochondria), and protective mechanisms (Bcl-2/Bcl-x, heat shock proteins) as apoptosis [5,8]. Besides these three types of cell death, several other cell death pathways have been elucidated [4,9-12]. Since these distinct cell deaths have different subroutines, the Nomenclature Committee on Cell Death (NCCD) proposed in 2012 a set of recommendations to define cell deaths based on biochemical and functional features [9].
Since many clinical anticancer drugs originally come from natural sources, such as vinca alkaloids and taxanes, some studies to date have focused on herbal medicinal products, especially Chinese medicines (CMs, including plants, animals, and minerals) [13-18]. Natural products are important sources of anticancer lead molecules. Many successful anticancer drugs come from natural products, and more are still in clinical trials. The aim is to develop novel anticancer drugs derived from natural products, especially from CMs. More critical systematic studies on the cellular and molecular therapeutic principles of anticancer natural products from CMs in cancer cell death need to be conducted.
In this review, we retrieved the relevant publications from PubMed and other databases to summarize the actions of CMs involved in inducing cancer cell death in vitro and in vivo. Besides clinical applications, other novel cell death pathways and the relevance of CMs to these fields are also discussed here.
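As an illustration of the retrieval step, the following minimal sketch shows how such a key-word search of PubMed could be scripted with Biopython's Entrez module; the e-mail address and the exact query string are placeholders, not the search actually used for this review.

```python
from Bio import Entrez

Entrez.email = "[email protected]"  # placeholder; NCBI requires a contact address

# Hypothetical query built from the review's key words
query = ('cancer AND ("cell death" OR apoptosis OR autophagy OR necrosis) '
         'AND "Chinese medicine"')

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # total number of matching records
print(record["IdList"])  # PubMed IDs of the first 200 hits
```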
CMs Induce Apoptotic Death in Human Cancer Cells.
Both intrinsic and extrinsic pathways are involved in the activation of apoptosis by CMs in human cancer cells. CM-initiated apoptotic cell death is mainly dependent on the activation of the caspase cascade. There are two types of apoptotic caspases: initiator (apical) caspases and effector (executioner) caspases. Initiator caspases (e.g., CASP2, CASP8, CASP9, and CASP10) cleave the inactive proforms of effector caspases, thereby activating them. Initially, caspases (cysteine-aspartic proteases, or cysteine-dependent aspartate-directed proteases) exist in inactive forms. They are cleaved upon interaction with specific molecules such as Apaf-1 (apoptotic protease-activating factor-1), Fas/CD95, or tumor necrosis factor (TNF) when apoptosis is induced in cells [9,132]. Extrinsic apoptosis depends on caspase activation, while intrinsic apoptosis proceeds in either a caspase-dependent or -independent manner [9,133]. CMs can activate cancer cell death extrinsically, intrinsically, or both; the mechanisms by which CMs induce apoptotic cancer cell death are therefore diverse [32]. Ursolic acid (UA) combined with oleanolic acid could elevate caspase-3 activity in the human liver cancer cells Huh7, HepG2, Hep3B, and HA22T [35]. Its antitumor effect was also observed in a xenograft model: positron-emission tomography-computed tomography (PET-CT) imaging indicated that proliferation of tumor cells declined after UA treatment in vivo [34,134]. Generally, the mechanism by which CMs cause intrinsic cell death in cancer is caspase-dependent. CMs induced the release of cytochrome c from mitochondria [23], which facilitated the activation of apoptotic protease-activating factor-1 (Apaf-1) and formed the Apaf-1 apoptosome, which bound caspase-9 through CARD-CARD (caspase recruitment domain) interactions to form a holoenzyme complex [135,136]. The complex cleaved caspase-3 to initiate a caspase cascade resulting in cell death [94,136]. The mechanisms of some representative CMs inducing intrinsic cancer cell death are illustrated in Figure 1.
Apart from caspase-dependent cell death, CMs can initiate apoptosis in both caspase-dependent and caspase-independent manners. The main biochemical pathway of caspase-independent apoptosis involves the release of mitochondrial intermembrane space (IMS) proteins and inhibition of the respiratory chain. In this context, apoptosis-inducing factor (AIF) and endonuclease G (Endo G) relocate to the nucleus and mediate large-scale DNA fragmentation. The serine protease high temperature requirement protein A2 (HTRA2) cleaves many cellular substrates, including cytoskeletal proteins, as well [9]. Gypenosides (Gyp), derived from Gynostemma pentaphyllum (Thunb.) Makino (Chinese name: Jiaogulan), could suppress the growth of WEHI-3 cells in vitro and in vivo through caspase-dependent and -independent apoptosis. Gyp inhibited Bcl-2, increased Bax, induced the release of cytochrome c and depolarization of the mitochondrial membrane potential (ΔΨm), and stimulated the activities of caspase-3 and caspase-8, suggesting that Gyp triggered caspase-dependent cell death. Gyp also induced the generation of ROS and stimulated the release of AIF and Endo G, resulting in caspase-independent cell death [66]. (Table 1, excerpt: berberine, from Coptis chinensis Franch./Huanglian: apoptosis [108,109], autophagy [110,111], necrosis [112], anoikis [113]; camptothecin, from Camptotheca acuminata Decne./Xishu: apoptosis [114]; tetrandrine and fangchinoline, from Stephania tetrandra S. Moore/Fangji: tetrandrine, apoptosis [50,115]; fangchinoline, autophagy [34]; matrine and oxymatrine, from Sophora flavescens Ait./Kushen: matrine, apoptosis [116,117] and autophagy [118-120]; oxymatrine, apoptosis [121].) Silibinin (from Shuifeiji, Silybum marianum (L.) Gaertn.) was reported to stimulate the release of HTRA2 and AIF in the bladder carcinoma cell line 5637, as well as cytochrome c, and to activate caspase-3; thus silibinin could induce bladder cancer cell death in both caspase-dependent and -independent manners [100] (Figure 1, Table 1). There are also relationships between CMs and intrinsic death stimuli. For example, Scutellaria, one of the most popular CM herbal remedies, is used in China and several other Asian countries for the treatment of inflammation and bacterial and viral infections, and it has been shown to possess anticancer activities in vitro and in vivo in mouse tumor models [137,138]. The bioactive components of Scutellaria were confirmed to be flavonoids [138,139]. Chrysin is a natural flavone commonly found in honey that has been shown to be an antioxidant and anticancer agent [140]. Several studies showed that chrysin and apigenin could potentiate the cytotoxicity of anticancer drugs by depleting cellular GSH, an important factor in antioxidant defense [141-143]. A 50-70% depletion of intracellular GSH was observed in prostate cancer PC-3 cells after 24 h of exposure to 25 µM chrysin or apigenin [141,144].
CMs Induce Apoptosis Extrinsically.
Since extrinsic apoptosis of cancer cells is initiated by the binding of death receptors and their ligands, death receptors may function as signaling gateways, in which Fas/CD95 ligand (FasL/CD95L) and cytokines such as TNF and TNF superfamily member 10 (TNFSF10, also known as TRAIL) play major roles in inducing apoptosis. These lethal cytokines activate the Fas-associated protein with a "death domain" (FADD) and thereby activate caspase-8/10 and then caspase-3 and caspase-6/7 in a cascading apoptotic response. Matrine, an alkaloid purified from Sophora flavescens Ait. (Chinese name: Kushen), induces apoptosis of gastric carcinoma SGC-7901 cells. A study using the MTT assay showed that matrine inhibited SGC-7901 cell proliferation in dose- and time-dependent manners. Furthermore, the levels of both Fas and FasL were upregulated after matrine treatment, which resulted in apoptotic cell death through the activation of caspase-3 [116]. Other CMs involved in the induction of extrinsic apoptosis include oridonin (from Donglingcao, Rabdosia rubescens (Hemsl.) Hara) [44], polyphenols from green tea [88,89], and glycyrrhizin (from Gancao, Glycyrrhiza glabra L.) [81], as listed in Table 1.
CMs Induce Both Intrinsic and Extrinsic Apoptosis.
Some CMs exhibit a complex nature by inducing both intrinsic and extrinsic apoptosis. Kim et al. found that UA induced the expression of Fas and the cleavage of caspase-3, caspase-8, and caspase-9, and decreased the ΔΨm. Other effects, such as Bax upregulation, Bcl-2 downregulation, and the release of cytochrome c from mitochondria to the cytosol, were also caused by UA treatment [31] (Figure 1, Table 1).
CMs Induce Autophagic Cancer Cell Death.
Autophagic cell death is characterized by massive cytoplasmic vacuolization resulting in physiological cell death, and it is particularly induced when cells are deficient in essential apoptotic modulators such as the Bcl-2 family and caspases. Some CMs induce autophagy via several signaling pathways that mediate the downregulation of the mammalian target of rapamycin (mTOR) and the upregulation of Beclin-1 [4,5,12] (Figure 2). We previously reported that fangchinoline (isolated from Fangji, Stephania tetrandra S. Moore) triggered autophagy in a dose-dependent manner in two human hepatocellular carcinoma cell lines, HepG2 and PLC/PRF/5. Blocking the fangchinoline-induced autophagy process altered the pathway of cell death towards apoptosis; thus cell death was an irreversible process induced by fangchinoline [34]. Cheng et al. reported that exposure of murine fibrosarcoma L929 cells to oridonin led to the release of cytochrome c, translocation of Bax, and generation of ROS. Additionally, oridonin induced autophagy in L929 cells through the p38 and NF-κB pathways. Autophagy occurred after oridonin treatment, and blocking autophagy caused apoptosis [39,40]. These observations suggest that autophagic cell death governs the cell fate upon CM treatment. General information on CMs inducing autophagic cell death is summarized in Table 1. Figure 2 further illustrates the mechanisms of some representative CMs inducing autophagic cell death.
CMs Induce Necrotic Cancer Cell Death
Necrosis is classified as nonprogrammed cell death lacking the morphological traits of apoptosis or autophagy. This phenomenon gives rise to "uncontrolled" cell death, loss of ATP, and failure of membrane pumps [4]. In contrast to these features, recent studies showed that necrosis can also be regulated, in a process termed necroptosis [9]. This process involves alkylating DNA damage, excitotoxins, and ligation of death receptors under certain conditions, and it depends on the serine/threonine kinase activity of RIP1, the target of a new class of cytoprotective agents, the necrostatins. Other factors affecting the execution of necroptosis include cyclophilin D, poly(ADP-ribose) polymerase 1 (PARP-1), and AIF [145]. Several studies on CMs have focused on necrosis or necroptosis. Shikonin, a component extracted from Lithospermum erythrorhizon Siebold & Zucc. (Zicao), has been found to induce necrotic cell death in MCF-7 and HEK293 cells. Han et al. reported that the cell death pathway of shikonin-treated cells differed from both apoptosis and autophagic cell death: loss of plasma membrane integrity, one of the morphological features of necrotic cell death, was observed, but loss of ΔΨm and elevation of ROS did not critically contribute to cell death, given the protection afforded by necrostatin-1 [106,107]. ROS and Ca2+ elevated the permeability transition pore complex- (PTPC-) dependent mitochondrial permeability transition (which was also induced by RIP1), while necrostatin-1 specifically prevented the cells from undergoing necroptosis. In summary, shikonin can induce necroptosis in cancer cells. Arsenic trioxide, another popular CM (Chinese name: Pishuang), also induced necrosis at a dose of 1 mg/kg, accompanied by a sharp decrease of the proliferation index in HCC cells [126]. Mercer et al. reported that treatment with artesunate (50 µM, 48 h), an artemisinin from Artemisia annua L. (Chinese name: Qinghao), induced 24 ± 9% necrotic/late apoptotic cells in HeLa cells and 67 ± 21% necrotic cells in HeLa ρ0 cells. These data suggest that the induced necrosis was associated with low levels of ATP and defective apoptotic mechanisms in some cancer lines [21]. Table 1 shows general information on CM-induced necrotic cell death. Figure 3 illustrates the mechanisms of some representative CM-induced necrotic cell death.
Discussion
As one of the typical cancer hallmarks, cell death has attracted great attention in recent years, and the study of this biological process under the intervention of CMs may open a novel way to treat cancers clinically. However, many CMs have not yet been approved for clinical use. Further research and clinical trials are necessary to investigate the efficacy and toxicity of CMs. In addition, many CMs have been used directly as composite formulae in cancer clinics according to Chinese medicine theories for centuries. However, little is known about composite formula-induced anticancer action via cell death pathways, and only a few in vitro studies have been conducted; for example, Huang-lian-jie-du-tang (Japanese name: oren-gedoku-to) induced apoptotic cell death in human myeloma cells [146] and in HepG2 and PLC/PRF/5 cells [147]. More studies on composite Chinese medicine formulae with good quality control are needed at the molecular and cellular level.
As mentioned above, CMs may exhibit integrated or additive anticancer effects through two or more subpathways. Triptolide (from Leigongteng, Tripterygium wilfordii Hook. f.) could induce both caspase-dependent and -independent apoptotic cell death by activating caspase-3, caspase-8, and caspase-9 and Bax while decreasing Bcl-2 [36-38,113,148-152]. These studies indicate that CMs may act through multiple modes in cancer cells, which needs further study [12,153] (Figure 1). With regard to cell death through integrated or additive effects, we conducted a study to explore how berberine (from Huanglian, Coptis chinensis Franch.) induces cell death in the human liver cancer cells HepG2 and MHCC97-L. We found that the chemical induced both apoptosis and autophagy, in which autophagy accounted for 30% of berberine-induced HepG2 cell death, while apoptosis made the largest contribution to liver cancer cell death. Regarding the underlying mechanism of berberine-induced autophagic and apoptotic cell death, our data demonstrated that it could induce Bax activation, formation of the PTPC, reduction of ΔΨm, and release of cytochrome c and Beclin-1 [111]. Similar to apoptosis, autophagy and necrosis/necroptosis involve the PTPC, ROS, Ca2+, Bcl-2, Bax, AIF, PARP, and other factors during programmed cell death; it was reported that berberine induced necrosis in B16 cells [112], but it is unknown whether berberine can induce programmed necrosis in HepG2 cells. The cross talk among the three cell death pathways may have therapeutic implications. For instance, the selective inhibition of necrotic or apoptotic cell death may prevent inflammation and thereby reduce subsequent tissue damage. Moreover, inducing necrotic cell death in apoptosis-resistant cancer cells may serve as a novel therapeutic strategy [109,145].
The effectiveness of cancer chemotherapy depends significantly on apoptosis in cancer cells, while the significance of autophagy and necrosis in cancer therapy needs to be further clarified. Several reports showed that some CMs induce autophagy and inhibit apoptosis [30,37,45-48]. In contrast, some may induce autophagy leading to apoptosis [36,41,111]. In this context, autophagy might act as a housekeeper that eliminates abnormal proteins and recycles materials during cell starvation [7,154]. The cell death pathway can switch to apoptosis or necrosis when autophagy is inhibited [4,9]. However, the molecular relationship between apoptosis and programmed necrosis (or necroptosis) is still unclear.
In addition to the above three types of cell death, there are other, newer types. Ginsenoside Rh2 (from Renshen) exhibited significant effects on cell death in the colorectal cancer cells HCT116 and SW480. Besides inducing apoptosis through activation of the p53 pathway, ginsenoside Rh2 also increased visible cytoplasmic vacuolization in HCT116 cells, which was blocked by cycloheximide (CHX), a protein synthesis inhibitor. Because paraptosis is characterized by visible cytoplasmic vacuolization without disruption of the cell membrane [155,156], ginsenoside Rh2 was proposed as a paraptosis-like cell death inducer [42,58,59]. Berberine and a modified Chinese formula, Yi Guan Jian, may induce cancer cell anoikis [113,149,157]. Pharicin A (from Xiangchacai, Isodon amethystoides (Benth.) H. Hara) [123] and casticin (from Manjing, Vitex rotundifolia L.f.) [124] initiated mitotic catastrophe in cancer cells. Apart from the above-mentioned types, several other cell death pathways such as cornification, entosis, netosis, parthanatos, and pyroptosis have been discussed elsewhere [4,9-12]. However, to the best of our knowledge, none of the CMs has been found to be involved in these novel pathways.
In summary, this paper reviewed 45 pure compounds and extracts from CMs that can induce different types of cancer cell death, together with the underlying mechanisms. An overview flow chart is shown in Figure 4. Cell death is not the only mechanism by which these pure compounds and extracts act in cancer therapy; they also act via other mechanisms such as antiproliferation, anti-invasion, antiangiogenesis, and anti-inflammation [15]. Since the natural sources of CMs are raw or processed materials used at low or nontoxic dosages, whereas all the CMs in this review are pure single compounds or extracts that induce cell death at cytotoxic dosages, the results for these CMs should be interpreted with care. Basically, CM practitioners do not use pure compounds to treat diseases, but in recent years they have begun to integrate traditional use with results derived from modern research, including the characteristics of CMs inducing cell death for cancer therapy. For example, berberine, a main active compound of Huanglian, is not directly used in CM clinical practice, but the various effects of berberine in cancer cell models bring new insight into the clinical usage of Huanglian when CM practitioners combine it with other herbs to treat cancer (Tang et al. [158]). Usually, Huanglian is used at a low dosage of 2-5 g to treat diseases, while a high dosage of 15-30 g has also been suggested in recent years, because berberine was found to inhibit cancer cell migration at low dosage and to induce cell death safely at high dosage (Tang et al. [15,111,158]). The high dosage of Huanglian needs further validation in clinical studies. On the other hand, as noted above, little is known about composite formula-induced anticancer action via cell death pathways; more studies on composite Chinese medicine formulae with good quality control are needed at the molecular and cellular levels, as well as in clinical studies.
Conclusions
This review showed that CM treatment can induce multiple cancer cell death pathways, including apoptosis, autophagy, necrosis, and other kinds of cell death, among which apoptosis is the dominant type. How to translate this preclinical research into clinical application will be a key issue in the future. The summary of CM-induced cell death in this systematic review may offer insights into the future development of cancer drug discovery from CMs and the clinical application of CMs in cancer treatment.
Conflict of Interests
The authors declare there is no conflict of interests regarding the publication of this paper. | 2016-05-12T22:15:10.714Z | 2014-10-14T00:00:00.000 | {
"year": 2014,
"sha1": "d4f74306e6ef78a3263f3b23c8b724d264666cc6",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2014/530342.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b6faa6a17ae33ac88ef90f69fb0b89916580db38",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255523592 | pes2o/s2orc | v3-fos-license | Myddosome clustering in IL‐1 receptor signaling regulates the formation of an NF‐kB activating signalosome
Abstract IL‐1 receptor (IL‐1R) signaling can activate thresholded invariant outputs and proportional outputs that scale with the amount of stimulation. Both responses require the Myddosome, a multiprotein complex. The Myddosome is required for polyubiquitin chain formation and NF‐kB signaling. However, how these signals are spatially and temporally regulated to drive switch‐like and proportional outcomes is not understood. During IL‐1R signaling, Myddosomes dynamically reorganize into multi‐Myddosome clusters at the cell membrane. Blockade of clustering using nanoscale extracellular barriers reduces NF‐kB activation. Myddosomes function as scaffolds that assemble an NF‐kB signalosome consisting of E3‐ubiquitin ligases TRAF6 and LUBAC, K63/M1‐linked polyubiquitin chains, phospho‐IKK, and phospho‐p65. This signalosome preferentially assembles at regions of high Myddosome density, which enhances the recruitment of TRAF6 and LUBAC. Extracellular barriers that restrict Myddosome clustering perturbed the recruitment of both ligases. We find that LUBAC was especially sensitive to clustering with 10‐fold lower recruitment to single Myddosomes than clustered Myddosomes. These data reveal that the clustering behavior of Myddosomes provides a basis for digital and analog IL‐1R signaling.
Introduction
Signaling pathways can give digital or "switch-like" responses that are invariant (Shah & Sarkar, 2011) or alternatively give analog responses that are proportional to the amount of stimulatory input (Nunns & Goentoro, 2018). For instance, in the innate immune system, IL-1 activation of NF-kB has both an invariant component and a response proportional to the stimulating dose (DeFelice et al, 2019; Son et al, 2021). Critical to these responses is the temporal and spatial control of reactants within a signaling pathway. Protein effectors must be brought together at a precise point within the cell to ensure accurate signal transduction. One way in which signaling pathways seem to achieve this is through the formation of signalosomes: subcellular compartments containing clusters of receptors and signaling effectors. Signalosomes are found in multiple receptor signaling systems such as tyrosine kinases, immune receptors, and Wnt receptors (Case et al, 2019). In the case of IL-1 signaling, an NF-kB signalosome assembles in response to stimulation (Tarantino et al, 2014). Thus, a key question is how signalosomes, such as that associated with NF-kB activation, can activate both invariant and proportional responses.
A critical component of many signalosomes is protein scaffolds that can bind and concentrate multiple signaling effectors (Wu, 2013; Jaqaman & Ditlev, 2021). The ability of protein scaffolds to oligomerize or self-assemble plays a crucial role in signalosome formation and downstream signaling (Ditlev et al, 2018). In the immune system, oligomeric protein scaffolds serve a central role in tuning the intensity and duration of signaling responses (Wu, 2013). The Myddosome is an oligomeric complex, composed of MyD88, IRAK4, and IRAK1 (Lin et al, 2010a), that is crucial for IL-1R signal transduction and an inflammatory innate immune response. The Myddosome activates the generation of K63-linked ubiquitin (K63-Ub) chains via direct interaction with the E3 ligase TNF receptor-associated factor 6 (TRAF6; Ye et al, 2002). Myddosomes can also activate the generation of M1-linked ubiquitin (M1-Ub) chains via the linear ubiquitin chain assembly complex (LUBAC; Tokunaga et al, 2009). K63-Ub and M1-Ub chains recruit the IkB kinase (IKK) complex, which activates NF-kB signaling and results in the translocation of the RelA NF-kB subunit to the nucleus (Wertz & Dixit, 2010; Iwai, 2012). This signaling pathway can encode both digital and analog outputs as defined by downstream readouts such as RelA dynamics or transcriptional responses (Tay et al, 2010; Hughey et al, 2015; Cheng et al, 2021). However, whether and how the Myddosome encodes both invariant and proportional outputs upstream of NF-kB has not been investigated.
Here, we address this problem using live-cell imaging to visualize Myddosome formation and downstream signal transduction in response to IL-1 stimulation. We observe that Myddosomes reorganize into clusters, or regions of the plasma membrane that contain a high density of complexes. Physically limiting Myddosome clustering with extracellular barriers diminishes NF-kB activation. We find that Myddosomes function as scaffolds that nucleate a signalosome containing K63/M1-Ub chains and markers of NF-kB activation. Single Myddosomes can nucleate the formation of this signalosome, suggesting it is an invariant or digital signaling output of the complex. However, this NF-kB signalosome preferentially formed at clusters, and the degree of Myddosome clustering proportionally increases the size of this signalosome. In particular, clustering amplifies the production of M1-Ub. Live-cell imaging revealed that the ubiquitin ligases TRAF6 and LUBAC are preferentially recruited to Myddosome clusters. Restricting clustering diminished TRAF6 recruitment and severely perturbed HOIL1 recruitment. We conclude that clustering is an important determinant of E3 ubiquitin ligase recruitment, and this dynamic encodes a signaling output that is proportional to the nanoscale density of complexes within the cluster. These results suggest a mechanism for how Myddosomes can encode both digital and analog responses upstream of NF-kB in IL-1R signaling.
Myddosomes dynamically reorganize into clusters
Understanding how the IL-1R and Myddosomes encode digital and analog outputs requires understanding where these differences arise within the signaling network. Therefore, to uncover the link between the spatial organization of Myddosomes and the production of downstream signaling outputs, we used a supported lipid bilayer (SLB) system functionalized with IL-1 (Deliz-Aguirre et al, 2021) to visualize the dynamics of IL-1R-Myddosome signal transduction. We pipetted EL4 cells expressing MyD88-GFP into chambers containing SLBs and imaged them as they landed on this surface and bound to IL-1. We found that MyD88-GFP assembles into puncta at the cell surface (Fig 1A). Initially, MyD88-GFP puncta are spatially segregated, but over time they move and coalesce, forming brighter puncta and eventually larger dense patch-like structures at the cell-SLB interface (Fig 1A; Movie EV1). While many Myddosome clusters were highly stable, in some cases we observed that Myddosome clustering was not unidirectional, with discrete Myddosomes undergoing merging and subsequent splitting (Fig EV1A). This unstable nature of a subset of clusters might be because they are composed of discrete Myddosomes confined to separate membrane protrusions and contact zones with the SLB (Fig EV1B), thus allowing them to split and move apart. Alternatively, more stable Myddosome clusters might be confined to a continuous contact zone with the SLB (Fig 1B). These clusters are similar to large Myddosome structures observed in Toll-like receptor 4 signaling that are associated with stronger NF-kB responses (Latty et al, 2018). We decided to investigate how these dynamic Myddosome clusters are involved in IL-1R signal transduction.
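Quantifying this coalescence from timelapse TIRF movies amounts to detecting diffraction-limited puncta in each frame and linking them into trajectories. A minimal sketch of such an analysis with the trackpy library is shown below; the synthetic movie, spot diameter, and thresholds are illustrative assumptions, not the parameters used in this study.

```python
import numpy as np
import trackpy as tp

# Synthetic stand-in for the MyD88-GFP TIRF movie, shape (T, H, W);
# in practice this would be the loaded image stack.
rng = np.random.default_rng(0)
frames = rng.poisson(10, size=(10, 64, 64)).astype(float)
frames[:, 30:34, 30:34] += 100  # one bright stationary punctum

features = tp.batch(frames, diameter=7, minmass=200)  # detect puncta per frame
tracks = tp.link(features, search_range=5, memory=3)  # link detections over time

# A decreasing number of distinct puncta per frame, with total intensity
# conserved, is a simple signature of puncta merging into clusters.
n_puncta = tracks.groupby("frame")["particle"].nunique()
print(n_puncta)
```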
The dynamic clustering of MyD88 puncta within the plane of the cell membrane suggests that Myddosomes are tethered to the inner leaflet of the plasma membrane. This tethering is likely due to heterotypic TIR domain interactions between MyD88 and the IL-1R/IL-1RAcP complex (Nimma et al, 2017). We predict that, if this model is correct, extracellular barriers that restrict the diffusion of SLB-tethered IL-1 would restrict Myddosome mobility and clustering (Fig 1B). To test this model, we used coverslips nano-printed with chromium barriers arranged into multiple 0.5 mm square grids. Within these square grids, chromium grid lines were printed into 1 or 2.5 µm square corrals (Fig EV1C). The chromium grid lines function as physical barriers and create an array of corralled SLBs with uniform dimensions. We confirmed SLB formation within these grids and that IL-1 ligands are freely mobile within corrals, but diffusion between corrals is restricted (Fig EV1D and E). In cells that landed on nanopatterned grids, MyD88 puncta were confined to individual corrals (Fig 1C). We imaged cells that straddled the boundary between the 2.5 µm grid and continuous coverslip, so we could analyze Myddosome dynamics on/off grids within the same cell (Fig 1D; Movie EV2). Kymograph analysis reveals that, in the same cell, only off-grid MyD88 puncta clustered. In contrast, MyD88 puncta on the 2.5 µm grid were confined to individual corrals and did not merge with puncta in adjacent corrals (Fig 1E). We conclude that Myddosomes are biochemically coupled to extracellular IL-1 via IL-1R, and extracellular barriers limit the diffusion of complexes within the plasma membrane.

Figure 1. Myddosomes are tethered to the cell surface and extracellular barriers inhibit Myddosome coalescence and diminish NF-kB activation.
A Timelapse TIRF microscopy images showing an EL4-MyD88-GFP cell interacting with an IL-1 functionalized SLB. MyD88-GFP assembles into puncta that cluster and coalesce at the cell:SLB interface. Scale bar, 2 µm.
B Schematic illustrating a working model for Myddosomes being tethered to the plasma membrane via interaction with IL-1R bound to IL-1. Based on this model, we predict physical barriers (on grid) would restrict the diffusion of IL-1 on the SLB and limit Myddosome clustering. With no external barriers present (off grid), MyD88 puncta can merge to form larger multicomplex assemblies.
C TIRF and bright-field microscopy images of EL4-MyD88-GFP cells incubated for 30 min with IL-1 functionalized SLBs formed off grid and on 1 and 2.5 µm grids. In the presence of 1 and 2.5 µm grids, Myddosomes only coalesce within individual corrals and do not form multicomplex clusters. Scale bar, 5 µm.
D Time series showing Myddosome formation in an EL4 cell interacting with both continuous and 2.5 µm gridded partitioned SLBs. t = 0 s denotes the start of cellular observation. Scale bar, 5 µm.
E Kymographs from panel (D) showing the coalescence of MyD88-GFP puncta off grid and the restricted movement of MyD88-GFP puncta on 2.5 µm grids. Scale bar, 1 µm.
F Quantification of MyD88-GFP puncta maximum fluorescence intensity normalized to GFP from cells stimulated off and on 2.5 or 1 µm grids, at a ligand density of 10 IL-1/µm². Violin plots show the distribution of average max puncta intensities from individual cells across replicates. Data points superimposed on the violin plots are the averages from independent experimental replicates. The average max MyD88 puncta intensity (mean ± SEM) for off grid is 7.5 ± 1.8 MyD88s.
To determine whether clustering regulates downstream signaling requires tools that can isolate a single Myddosome for comparative analysis against clustered Myddosomes. We analyzed the size distribution of MyD88 puncta on/off grids (Fig 1F). Off-grid MyD88 puncta had a broad size distribution and a mean MyD88 copy number of 7.5 ± 1.8 MyD88s (Fig 1F), suggesting a mix of clusters and single complexes. However, on 2.5 or 1 µm grids, the mean MyD88 copy number was 5.2 ± 1.7 or 2.3 ± 0.1 MyD88s, respectively (mean ± SEM, Fig 1F, also see Materials and Methods). Only 9.3 ± 1.6% of MyD88 puncta on 1 µm grids had an intensity consistent with ≥ 1 Myddosome complex, and 1.8 ± 0.6% of puncta had an intensity consistent with ≥ 2 Myddosome complexes (see Materials and Methods and Fig 1F). Thus, the majority of MyD88-GFP puncta on 1 µm grids are small transient MyD88 assemblies or single Myddosome complexes (Deliz-Aguirre et al, 2021), and nanopatterned IL-1 functionalized SLBs can spatially isolate single Myddosomes. In conclusion, nanopatterned coverslips are an effective tool to assay how the spatial organization of Myddosomes is functionally connected to digital and analog signaling outputs.
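The copy-number estimates above follow from normalizing puncta intensity to the brightness of a single GFP. A minimal sketch of this conversion, assuming a measured single-GFP calibration value and the 6x MyD88 stoichiometry of one Myddosome (Lin et al, 2010a); the example numbers are illustrative:

```python
import numpy as np

MYD88_PER_MYDDOSOME = 6  # MyD88 stoichiometry of one complex (Lin et al, 2010a)

def classify_puncta(puncta_intensities, single_gfp_intensity):
    """Convert puncta intensities to MyD88 copy numbers and classify them.

    puncta_intensities: background-subtracted max intensities of MyD88-GFP puncta
    single_gfp_intensity: mean intensity of one GFP (calibration; an assumed input)
    """
    copies = np.asarray(puncta_intensities) / single_gfp_intensity
    n_complexes = copies / MYD88_PER_MYDDOSOME
    labels = np.where(n_complexes >= 2, "cluster",
             np.where(n_complexes >= 1, "single Myddosome", "sub-Myddosome assembly"))
    return copies, labels

# Example: fraction of puncta consistent with >= 2 complexes (a "cluster")
copies, labels = classify_puncta([4.1, 7.2, 25.0, 60.3], single_gfp_intensity=1.0)
print(np.mean(labels == "cluster"))
```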
Inhibiting Myddosome clustering diminishes RelA translocation to the cell nucleus
We tested whether inhibition of clustering perturbed NF-kB signaling by measuring RelA translocation to the nucleus. In unstimulated cells (incubated with unfunctionalized SLBs), RelA staining is limited to the cytosol and depleted within the cell nucleus (Fig 1G). In cells incubated with IL-1 functionalized SLBs without grids, RelA translocates from the cell cytosol to the nucleus, resulting in stronger nuclear staining (Fig 1G). For cells on grids, a mixture of both events was observed (2.5 and 1 µm grids, Fig 1G). When we quantified RelA translocation, we found that the nucleus-to-cytoplasm ratio of RelA staining significantly decreased on 1 or 2.5 µm grids (normalized RelA nucleus-to-cytoplasm ratio of 0.45 ± 0.01 off grid versus 0.34 ± 0.02 and 0.32 ± 0.02 on 2.5 and 1 µm grids, respectively; mean ± SEM, Fig 1H). We conclude that the inhibition of Myddosome clustering impacts NF-kB activation and RelA translocation to the nucleus. The implication of these results is that Myddosome dynamics and spatial density at the cell surface are linked to the production of signaling outputs required for NF-kB activation.
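As a sketch of how such a nucleus-to-cytoplasm ratio can be computed per cell, assuming nuclear and whole-cell segmentation masks are already available (the function and variable names are illustrative, not from this study's analysis code):

```python
import numpy as np

def rela_nc_ratio(rela_img, nucleus_mask, cell_mask):
    """Nucleus-to-cytoplasm ratio of RelA immunofluorescence for one cell.

    rela_img: 2D image of the anti-RelA channel
    nucleus_mask, cell_mask: boolean masks for the nucleus and the whole cell
    """
    cytoplasm_mask = cell_mask & ~nucleus_mask
    return rela_img[nucleus_mask].mean() / rela_img[cytoplasm_mask].mean()

# Example with a synthetic image and masks; ratios above the unstimulated
# baseline indicate RelA translocation, as in the Fig 1H comparison.
img = np.ones((8, 8)); img[2:5, 2:5] = 3.0
nuc = np.zeros((8, 8), bool); nuc[2:5, 2:5] = True
cell = np.ones((8, 8), bool)
print(rela_nc_ratio(img, nuc, cell))
```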
Myddosomes colocalize with an NF-kB-activating signalosome composed of K63-Ub/M1-Ub polyubiquitin chains, phospho-IKK, and phospho-p65

Innate immune signaling complexes are proposed to function as signaling scaffolds that recruit and activate downstream effectors (Wu, 2013). We speculated that the spatial organization of a signaling complex could regulate its scaffolding function, and that this could be the basis for invariant or proportional signaling responses. This scaffolding model implies spatial colocalization between Myddosomes and biochemical signaling reactions. We examined the colocalization of Myddosomes with IL-1 signaling outputs such as K63-Ub, M1-Ub, the phosphorylated IkB kinase (pIKK) complex, and the phosphorylated RelA subunit p65 (pp65) using immunofluorescence and TIRF microscopy (Fig 2A; Appendix Fig S1A). We found these signaling outputs had a punctate staining pattern that colocalized with dense patches of clustered MyD88-GFP puncta (Fig 2A). Detailed analysis of these Myddosome patches shows that MyD88 was organized into heterogeneous puncta of different sizes and irregular shapes (Fig 2B). While the puncta of K63-Ub, M1-Ub, pIKK, and pp65 staining did not uniformly coat MyD88 patches, these structures were clearly associated with MyD88 clusters. These results confirm that downstream signaling outputs are generated at cell surface Myddosomes.
We used structured illumination microscopy (SIM) to image the spatial organization of pIKK and MyD88 puncta with higher resolution and within the entire cellular volume (Fig 2C; Appendix Fig S1B). Consistent with our TIRF studies, we found that pIKK punctate structures colocalized with MyD88-GFP puncta at the cell surface (Fig 2C). In some instances, SIM revealed that pIKK puncta partially overlapped or were adjacent to MyD88-GFP puncta (inset, Fig 2C). Z-stack analysis revealed that pIKK puncta localized to the cell-bilayer interface and were rarely found deeper within the cell volume.

Figure 2. Myddosomes colocalize with an NF-kB signalosome composed of K63-Ub and M1-Ub chains and phospho-IKK and phospho-p65.
A TIRF images of fixed EL4-MyD88-GFP cells off grids and stained with antibodies against K63-Ub, M1-Ub, pIKK, and pp65. Cells were activated on IL-1 functionalized SLBs for 30 min before fixation. Scale bar, 5 µm.
B A magnified view of the large patch-like Myddosome clusters from the highlighted region of interest in panel (A) (yellow box on merge images). Scale bar, 1 µm.
C Structured illumination microscopy images of Myddosome clusters stained with anti-pIKK. Top row right, insets show the detail of Myddosome staining with anti-pIKK. Insets taken from regions of interest overlaid on the merge image (yellow boxes 1 and 2). Bottom row, x-z view slice taken from the yellow line overlaid on the merge image (top row). Myddosome and pIKK staining localize to the cell-SLB interface. Blue dashed line defines the nucleus volume determined from the DAPI stain. Scale bar in main image and Z projection, 1 µm; scale bar inset, 0.5 µm.
D Schematic showing a working model for how Myddosome clustering could enhance the generation of K63/M1-Ub, pIKK, and pp65 and an NF-kB signalosome. We hypothesize that Myddosome clustering creates regions with a high density of complexes, and this will lead to enhanced production of signaling intermediates such as K63-Ub and M1-Ub chains, pIKK and pp65.
Source data are available online for this figure.

Therefore, the signaling output of Myddosomes increases proportionally with the degree of clustering: the higher the density of complexes within a Myddosome cluster, the greater the intensity of pIKK and pp65. We used SLBs formed on 1 and 2.5 µm grids to inhibit the formation of Myddosome clusters (Fig 1F) and assayed how this impacted pp65 and pIKK staining. We found that cells on grids still assembled MyD88 puncta that colocalized with pp65 and pIKK (Fig 3C and D; Appendix Fig S1C and D). Scatter plot analysis of the assembled MyD88 puncta revealed that, similar to off-grid (Fig 3A and B), there was a linear relationship between puncta intensity and associated pp65/pIKK staining (Fig 3E and F). As above (Fig 3A and B), these data suggest a linear relationship between the density of Myddosome complexes and pIKK and pp65 production. However, restricting the degree of clustering proportionally reduced pp65 and pIKK production.
We compared the mean pp65/pIKK intensity of puncta classified as single or clustered Myddosomes with puncta formed on 2.5 and 1 µm grids (Materials and Methods and Appendix Fig S1E and F). We found that Myddosome clusters had 5- and 10-fold greater mean pp65 and pIKK staining intensities compared with MyD88 puncta on 1 and 2.5 µm grids that were most likely single complexes (Fig 3G and H). Interestingly, puncta classified as single Myddosomes off grid had statistically greater mean pp65/pIKK intensity than single Myddosomes formed on grids. We noticed that, off grid, some of these single Myddosomes with high pp65/pIKK staining intensity were closely associated with Myddosome clusters; this suggested Myddosome clusters could enhance the signaling output of adjacent complexes. Alternatively, these off-grid single Myddosomes with high staining intensity could be linked or connected to Myddosome complexes deeper in the cell, which are not illuminated by the TIRF field, and this could also explain the greater staining intensity. The greater concentrations of pp65 and pIKK at Myddosome clusters suggest they are hot spots for NF-kB signaling, and that the localized production of these outputs is proportional to the degree of complex density within clusters.
We examined the relationship between MyD88-GFP puncta intensity and staining with antibodies against K63-Ub and M1-Ub chains. As above, we found a linear correlation between MyD88 puncta intensity and K63/M1-Ub staining intensity (R = 0.75 and 0.73 for K63-Ub and M1-Ub staining intensity; scatter plots, Fig 4A and B). MyD88 puncta that formed on 1 and 2.5 µm grids still colocalized with punctate K63/M1-Ub structures but overall had lower staining intensities (Fig 4C and D). However, similar to off-grid, there was a correlation between MyD88 puncta and K63/M1-Ub intensity on grids (Fig 4C-F). We found that Myddosome clusters had a 4-fold greater mean K63-Ub intensity and a 3-fold greater mean M1-Ub intensity than single Myddosomes and puncta on 1 and 2.5 µm grids (Fig 4G and H, Appendix Fig S1I and J). Similar to the pp65 and pIKK staining, single Myddosomes off grid showed greater staining intensity than puncta formed on grids. In summary, we found that signaling outputs such as pp65, pIKK, and K63/M1-Ub colocalize with single and clustered Myddosome complexes. This suggests that a single Myddosome can activate NF-kB signalosome formation and that this is an invariant signaling output encoded by Myddosome assembly. However, we found that Myddosome clusters colocalized with > 3-fold larger NF-kB signalosomes, defined by greater amounts of pp65, pIKK, and K63/M1-Ub. Furthermore, the degree of clustering led to a proportional increase in signalosome size and the robust incorporation of M1-Ub into this NF-kB signalosome. The impact of clustering on signal transduction is apparent when we calculate the K63/M1-Ub, pp65, and pIKK staining intensity per single Myddosome within clusters and compare that to isolated Myddosome complexes. We conclude that the signaling output of single Myddosome complexes increases when they are organized within clusters (Fig EV2A-D). This suggests that the spatial organization of the Myddosome encodes an analog signaling response.
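A correlation coefficient like the R values quoted above can be obtained from per-punctum measurements in a few lines. The sketch below uses synthetic paired arrays standing in for MyD88-GFP puncta intensities and their matched antibody staining intensities (the variable names and numbers are illustrative, not the study's data):

```python
import numpy as np
from scipy import stats

# Illustrative paired measurements: MyD88-GFP intensity per punctum and the
# matched K63-Ub (or M1-Ub, pIKK, pp65) staining intensity of the same punctum.
rng = np.random.default_rng(1)
myd88_int = rng.lognormal(mean=2.0, sigma=0.6, size=200)
stain_int = 1.5 * myd88_int + rng.normal(0, 2, size=200)

r, p_value = stats.pearsonr(myd88_int, stain_int)
# Slope of the linear fit: staining produced per unit of MyD88 intensity,
# i.e., an estimate of signaling output per unit of Myddosome density.
slope, intercept = np.polyfit(myd88_int, stain_int, deg=1)
print(f"R = {r:.2f} (p = {p_value:.1e}), slope = {slope:.2f}")
```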
Larger Myddosome clusters have enhanced TRAF6 and LUBAC recruitment
One limitation of the immunofluorescence analysis (Figs 3 and 4) is that the spatial-temporal relationship between Myddosome formation, clustering, and NF-kB signalosome formation cannot be resolved. Therefore, having shown that Myddosomes can generate invariant and proportional outputs, we examined how these outputs arise from the dynamics of Myddosome formation, clustering, and NF-kB signalosome assembly. Our data (Figs 2-4), along with published studies (Tarantino et al, 2014; Du et al, 2022), suggest that NF-kB activation occurs in condensate cellular compartments that contain K63-Ub/M1-Ub chains. We generated two CRISPR double knock-in EL4 cell lines that expressed MyD88-GFP and either the K63-Ub E3 ligase TRAF6 or the M1-Ub E3 ligase LUBAC subunit HOIL1 labeled with mScarlet (Appendix Figs S2A-C and S3A-D). When we imaged these cell lines, we found that a subset of MyD88-GFP puncta recruited mScarlet-TRAF6 (Fig 5A) or mScarlet-HOIL1 (Fig 5B). We found that TRAF6 or HOIL1 appeared after the formation of the MyD88 puncta (Fig 5A and B; Movies EV3 and EV4). Both MyD88-GFP and mScarlet-TRAF6 or mScarlet-HOIL1 puncta were initially dim and grew in intensity (Fig 5A and B). In some instances, we observed that TRAF6 was transiently recruited to Myddosomes, with this transient TRAF6 recruitment often preceding the stable association of TRAF6 with MyD88 (Fig EV3A). This dynamic suggests that the Myddosome serves as a scaffold for the nucleation of TRAF6 assemblies (Yin et al, 2009). In summary, we found that Myddosomes recruit and assemble punctate structures of the ubiquitin ligases TRAF6 and LUBAC. The molecular dynamics of MyD88 and the E3 ligases TRAF6 and LUBAC are consistent with Myddosomes functioning as an inducible scaffold and focal point for the activation of K63/M1-Ub generation.
We asked whether MyD88-GFP puncta clustering and lifetime enhanced TRAF6 recruitment. We observed that TRAF6-positive MyD88 puncta had an average size of 10.4x MyD88s (Fig EV3B and C). Based on structural studies reporting 6x MyD88s per Myddosome (Lin et al, 2010a), the average MyD88 copy number in TRAF6-positive puncta suggested they contain on average one or more Myddosome complexes. In total, 15.4 ± 2.4% of MyD88 puncta colocalized with TRAF6 (mean ± SEM, from six replicates). When we quantified multicomplex puncta containing > 1 or ≥ 2 Myddosomes, we found that the percentage of HOIL1-positive recruitment increased to 14.9 ± 1.9% and 25.9 ± 5.0%, respectively (mean ± SEM, Fig 5D).
Finally, we plotted the percentage of MyD88 puncta that colocalized with TRAF6 or HOIL1 as a function of the number of Myddosome complexes per punctum (Fig 5E). We found that the percentage of TRAF6- and HOIL1-colocalized puncta increased as the number of Myddosomes per MyD88-GFP punctum increased (Fig 5E). We observed a dramatic change from single Myddosomes to small clusters, estimated to contain 2-4 complexes, which increased the probability of TRAF6 and HOIL1 recruitment by 5- and 10-fold, respectively. The probability of TRAF6/HOIL1 recruitment continued to increase with increasing density of Myddosomes per punctum. Therefore, Myddosomes organized into clusters have a greater probability of recruiting the E3 ubiquitin ligases TRAF6 and HOIL1. We conclude that this enhanced recruitment is the mechanistic basis for why K63-Ub/M1-Ub responses scale with the density of Myddosome complexes within clusters (Fig 4).
Myddosome clustering triggers the sequential recruitment of TRAF6 and LUBAC
If clustering is a driver of TRAF6 and HOIL1 recruitment, we expect the formation of clusters to precede the recruitment of both ligases. Therefore, we asked whether the formation of Myddosome clusters occurs before or after the recruitment of TRAF6 and HOIL1. We analyzed the size of Myddosomes at the time point when TRAF6 and HOIL1 are recruited. We defined this time point as the TRAF6/HOIL1 landing size (Fig 5F) and quantified the number of Myddosomes per punctum at this time point. We found that the average landing size for TRAF6 was 1.4 ± 0.3 Myddosome complexes, and for HOIL1 the landing size was 6.2 ± 1.1 Myddosome complexes per punctum (Fig 5G). This suggests that, on average, the formation of Myddosome clusters precedes the recruitment of TRAF6 and LUBAC.
We analyzed the recruitment time of TRAF6 and HOIL1, which we defined as the time interval from the nucleation of a MyD88 punctum to the recruitment of mScarlet-TRAF6 or mScarlet-HOIL1 (Fig 5H). We found that the average recruitment time for TRAF6 was 67.5 ± 21.6 s (mean ± SD, Fig 5H). In contrast, HOIL1 had an average recruitment time of 118.8 ± 30.6 s (mean ± SD, Fig 5H). We conclude that TRAF6 and HOIL1 are recruited to clusters of Myddosomes and that the recruitment of these two ubiquitin ligases is staggered temporally: TRAF6 is recruited first, followed by HOIL1. In contrast to TRAF6, HOIL1 is recruited to puncta composed of a greater density of Myddosome complexes. In conclusion, Myddosome signaling outputs are kinetically controlled by spatial organization. Specifically, the analog production of signaling outputs is encoded by Myddosome density within clusters, as this regulates the probability of TRAF6 and HOIL1 recruitment.
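From two-color trajectories, the landing size and recruitment time defined above reduce to simple per-punctum arithmetic. A minimal sketch, assuming background-subtracted intensity traces for the MyD88-GFP and mScarlet channels of one tracked punctum (the threshold and calibration values are illustrative assumptions):

```python
import numpy as np

def landing_metrics(myd88_trace, ligase_trace, dt_s, single_gfp, thresh):
    """Recruitment time (s) and landing size (Myddosomes) for one punctum.

    myd88_trace, ligase_trace: intensity vs. frame for the two channels
    dt_s: frame interval in seconds; single_gfp: one-GFP calibration intensity
    thresh: detection threshold for the mScarlet ligase channel (assumed)
    """
    t_nucleation = int(np.argmax(np.asarray(myd88_trace) > 0))  # first detection
    ligase_frames = np.flatnonzero(np.asarray(ligase_trace) > thresh)
    if ligase_frames.size == 0:
        return None  # punctum never recruited the ligase
    t_landing = int(ligase_frames[0])
    recruitment_time = (t_landing - t_nucleation) * dt_s
    landing_size = myd88_trace[t_landing] / (single_gfp * 6)  # 6x MyD88/complex
    return recruitment_time, landing_size

# Example trace: the ligase appears three frames after MyD88 nucleation
myd88 = [0, 0, 2, 4, 8, 12, 12]; ligase = [0, 0, 0, 0, 1, 5, 6]
print(landing_metrics(myd88, ligase, dt_s=2.0, single_gfp=1.0, thresh=3))
```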
TRAF6 and LUBAC have enhanced recruitment and lifetime at Myddosome clusters
We set out to assay how the combination of nanopatterned grids and ligand density affected Myddosome clustering and TRAF6/HOIL1 recruitment. If clustering regulates the probability of TRAF6/LUBAC recruitment (Fig 5E-G) and this is the basis of digital and analog Myddosome signaling outputs, we reasoned that inhibiting clustering and isolating single complexes should reduce the recruitment of both E3 ligases. We predicted that increasing IL-1 density within individual 1 µm² corrals would restore TRAF6/HOIL1 recruitment, as single corrals would contain sufficient IL-1 to trigger the assembly of multiple Myddosomes that could merge into clusters.
We characterized the formation of Myddosome clusters in cells on and off 1 µm grids stimulated by SLBs with 1 and 10 IL-1/µm² (Fig EV4A and B). We confirmed that on 1 µm grids, Myddosome cluster formation was fourfold greater at the higher ligand density (4.7% versus 1.2% of puncta classified as clusters at 10 and 1 IL-1/µm²; Fig EV4A and B). As observed previously (Fig 1F), we found that MyD88 puncta in cells stimulated on 1 µm grids were smaller than those stimulated with SLBs off grid. Live-cell imaging and kymograph analysis at 1 IL-1/µm² showed that, in contrast to off-grid (Fig 6A), MyD88-GFP puncta on 1 µm grids did not coalesce and cluster (Fig 6B). However, a portion of these puncta still recruited mScarlet-TRAF6 (Fig 6B; Movie EV5). Analysis revealed a twofold difference off and on grids in the frequency of TRAF6 recruitment at 1 IL-1/µm² (16.2 ± 2.5% versus 6.1 ± 1.5% TRAF6-positive MyD88 puncta per cell off and on grids, respectively; mean ± SEM, Fig 6C and Appendix Fig S5A). At a higher ligand density of 10 IL-1/µm², the percentage of TRAF6-positive Myddosomes on 1 µm grids increased by a factor of 2 (12.7 ± 1.8% versus 6.1 ± 1.5% TRAF6-positive MyD88 puncta per cell at 10 and 1 IL-1/µm²; Fig 6 and Movie EV6). Thus, increasing the number of IL-1 per 1 µm² corral can rescue the perturbation of TRAF6 recruitment. These data reveal that single Myddosomes can trigger a TRAF6 response, consistent with an invariant signaling output. However, Myddosome clusters have an increased probability of TRAF6 recruitment, suggesting a mechanism for how clusters can generate a proportionally greater signaling output than single complexes.
To examine the role of Myddosome clusters in HOIL1 recruitment, we applied the same strategy of using 1 µm grids and high and low ligand densities to change the frequency of cluster formation (Fig EV5A and B). At a ligand density of 10 IL-1/µm², we observed the dynamic coalescence and clustering of MyD88 puncta (Fig 6G). We then examined the lifetime of TRAF6 and HOIL1 recruitment to single and clustered Myddosomes from the on- and off-grid data (Fig 7A and B). We found that a greater density and number of Myddosome complexes within clusters correlates with a greater lifetime of TRAF6 and HOIL1. We conclude that Myddosome clusters increase the stability of TRAF6 and HOIL1 at Myddosomes and that this is a possible basis for the increased signaling output of clusters.
Discussion
Here, we used high-resolution microscopy to visualize and quantify the signaling output of Myddosomes. We find that single Myddosomes can recruit TRAF6 and HOIL1 (Fig 5) and form a signalosome that promotes the localized production of K63/M1-Ub, pIKK, and pp65 (Figs 3 and 4). This suggests that Myddosomes function as a scaffold to stimulate the formation of an NF-kB-activating signalosome (Fig 2A and B). We conclude that NF-kB signalosome formation is a digital signaling response of Myddosomes (Fig 7C). However, the probability of activating this response at single complexes is low; we find that this signaling response can be amplified by increasing the local density of Myddosomes within cell surface clusters. We and others have observed that Myddosomes cluster (Latty et al, 2018). Here, we use extracellular nanoscale barriers to reveal that Myddosomes are tethered to the cell surface via direct interaction with the IL-1R bound to extracellular IL-1 (Fig 1B and C). Using this technology, we discover that the reorganization of Myddosomes into clusters has functional consequences: these Myddosome clusters increase the nucleation frequency and signaling output of this NF-kB signalosome (Fig 2). Myddosome clustering dramatically enhances the recruitment and incorporation of LUBAC into these signalosomes. These results suggest that the spatial organization of Myddosomes can encode responses proportional to the amount of IL-1 stimulation.
Previous studies have found that Myddosomes form large aggregate structures after TLR or IL-1 stimulation (Latz et al, 2002; Latty et al, 2018; Deliz-Aguirre et al, 2021). In macrophages stimulated with the TLR4 agonist LPS, the formation of large Myddosome clusters correlated with higher doses of LPS stimulation and enhanced NF-κB activation and gene expression (Latty et al, 2018). These results are consistent with our observations. Myddosome clusters may have other functional roles beyond enhancing NF-κB signalosome formation. Like previous studies in macrophages (Latty et al, 2018), we observe that Myddosomes tend to form a large central focal point (Figs 1A and 2A). As experiments with the nanopatterned grids demonstrate (Fig 7), Myddosome clusters can form on grids at higher IL-1 densities, and these clusters can recruit HOIL1 and TRAF6. Despite this rescue of TRAF6/HOIL1 recruitment, the grid still prevents the formation of this large patch-like structure. This structure observed in cells off grid (Fig 1A) possibly plays a role in other downstream signaling processes, such as the internalization of IL-1R-Myddosome complexes by endocytosis or the termination of signal transduction. How the spatial organization of Myddosomes regulates other signaling reactions and cellular processes is an avenue for future investigations.
How does clustering enhance TRAF6 and HOIL1 recruitment? The Myddosome has a fixed stoichiometry (Motshwene et al, 2009; Lin et al, 2010a), and with 4× IRAK1 monomers per complex, it has a maximum of 12× TRAF6-binding motifs per complex (Ye et al, 2002). Therefore, clustering might be a dynamic mechanism to increase the avidity of TRAF6-binding sites at a focal point on the plasma membrane. We show that single Myddosomes can still recruit TRAF6 to the cell surface (Fig 6A), although at a lower probability than clusters of Myddosomes. TRAF6 is predicted to form a 2D lattice (Yin et al, 2009), with the trimeric C terminus making contact with the Myddosome (Ye et al, 2002). Myddosome clustering might stabilize higher-order assemblies of TRAF6 that promote its ubiquitin ligase activity (Yin et al, 2009). The LUBAC component HOIP recognizes K63-Ub (Emmerich et al, 2013), and thus its recruitment depends on the amount of K63-Ub chains. We find less K63-Ub associated with single Myddosomes than with clustered Myddosomes (Fig 4). We also find that HOIL1 and M1-Ub are especially sensitive to Myddosome clustering (Figs 4 and 6). Therefore, Myddosome clustering might lead to larger TRAF6 assemblies, enhanced ubiquitin ligase activity, greater production of K63-Ub, and enhanced HOIL1 recruitment as well as formation of M1-Ub. Thus, the K63-Ub output of TRAF6 will scale proportionally with the density of Myddosomes within clusters.
In conclusion, Myddosomes function as a plasma membrane-associated scaffold that assembles an NF-κB activating signalosome. We show that the spatial density of the Myddosome regulates the assembly and size of this NF-κB activating compartment. This mechanism might explain how the IL-1 signaling pathway can create invariant and proportional NF-κB responses (DeFelice et al, 2019; Son et al, 2021). Other innate immune signaling pathways, such as inflammasomes and STING, use the clustering of signaling complexes to control the formation of specialized signaling compartments (Magupalli et al, 2020; Yu et al, 2021). It is possible that clustering is a unifying mechanism across innate immune signaling to transmit switch-like responses and analog information such as the amount and duration of a stimulus. An important future direction is quantifying the spatial organization of other innate immune signaling complexes and how this connects to digital versus analog signaling responses. The approach we establish here, which combines live-cell microscopy with technologies that enable spatial control of signaling complexes, provides a powerful strategy to study how the dynamics of signaling pathways shape signaling outputs.
Materials and Methods
Generation of CRISPR/Cas9 engineered cell lines

EL4.NOB1 cells were electroporated with a pX330 Cas9/gRNA expressing vector and the pMK vector encoding the HDR template with the Neon Transfection System. EL4 cells were electroporated with the following conditions: voltage (1,080 V), width (50 ms), and number of pulses (one). For double editing of the MyD88/TRAF6 or MyD88/HOIL1 gene loci, 1.5 μg of sgRNA-Cas9 and HDR template plasmids (in equal molar ratio) were electroporated simultaneously. After electroporation, cells were plated in RPMI culture medium without antibiotics for 24 h. For the selection of TRAF6 and HOIL1 edited alleles, 6 μg/ml blasticidin was added to the cell culture medium 24 h after electroporation. EL4 cells were selected in blasticidin for 48 h. Monoclonal cell lines were generated by fluorescence-activated cell sorting (FACS). Cells were sorted using a BD FACS Aria II at the Deutsches Rheuma-Forschungszentrum Berlin, Flow Cytometry Core Facility. To isolate gene-edited EL4 cells, we first performed a bulk sorting of double-positive cells. This population was expanded, and single cells were sorted into 96-well plates containing culture medium with 15% EL4.NOB-1 conditioned RPMI medium.
The gene-edited clonal cell lines were verified using PCR, sequencing, and western blot analysis. First, genomic DNA was isolated from selected monoclonal cell lines using QuickExtract DNA Extraction Solution (Epicentre). To test for gene editing and correct insertion of the mGFP/mScarlet-i cassette, PCR primers were designed to amplify a DNA fragment that contained the junctions between the mGFP/mScarlet-i open reading frame, the 3′ or 5′ homology arm, and the gene locus. To check whether single-cell clones were homozygous or heterozygous, we designed PCR primers that amplified a fragment containing the mGFP/mScarlet-i cassette, the entire 3′ or 5′ homology arms, and the junction between the homology arms and the gene locus (see Table EV1). PCR products were analyzed on a 0.8-1% agarose gel, gel extracted using Monarch Nucleic Acid Purification Kits (NEB), and submitted for Sanger sequencing. Analysis of EL4 HOIL1-mScarlet/MyD88-GFP genomic DNA showed heterozygous editing of the HOIL1 gene locus and homozygous editing of the MyD88 locus.
To confirm the presence of the mEGFP/mScarlet-i fusion proteins, the cell clones were analyzed by western blot using specific antibodies against MyD88, TRAF6, HOIL1, and GFP or mScarlet-i (RFP). Insertion of the fluorescent tags resulted in a 25 kDa increase in molecular weight in comparison with the nontagged protein. As expected from the sequencing result, the HOIL1 edited cell line expressed mScarlet-i-HOIL1 and nontagged HOIL1. The TRAF6 edited cell line expressed mScarlet-i-TRAF6 (Appendix Fig S2; full-length western blots shown in Appendix Fig S3). Finally, all cell clones were imaged by microscopy to check for correct localization of fluorescent signals.
Assay of IL-2 release in WT and gene-edited EL4 cells
To measure IL-2 release, we used the Mouse IL-2 DuoSet ELISA kit (R&D Systems; DY402-05) following the manufacturer's protocol. First, 10⁶ cells in 150 μl medium per well were seeded into a 48-well plate and allowed to settle for 30 min. Cells were then stimulated with IL-1β in 50 μl medium per well at a final concentration of 10 ng/10⁶ cells. For unstimulated controls, 50 μl medium only was added. After 24 h, plates were centrifuged (300 g for 5 min), and supernatants were transferred to a new plate. Supernatants were stored at −80°C until IL-2 ELISA analysis. Absorbance readings were acquired on a VersaMax Microplate Reader (Molecular Devices) at 450 nm. IL-2 release was assayed on three independent days in triplicate. The obtained results were normalized to the IL-2 release of EL4 WT cells (Appendix Fig S2C).
Chromium nanopatterned coverslips
Chromium nanopatterned coverslips with the design and specification described (see Fig EV1C-E) were produced by ThunderNIL Srl (Trieste, Italy). Coverslips were fabricated by the pulsed nanoimprint lithography method (Lin et al, 2010b) and printed with a master design that contained multiple nanopatterned chromium grids containing square corrals with 2.5 or 1 μm² dimensions. Chromium gridlines were 100 nm thick and 5 nm high and were printed on no. 1.5 coverslips with a diameter of 25 mm.
To prepare SLBs on 96-well glass bottom plates (Matrical), the plates were cleaned for 30 min with a 5% Hellmanex solution containing 10% isopropanol heated to 50°C, then incubated with 5% Hellmanex solution for 1 h at 50°C, followed by extensive washing with pure water. The 96-well plates were dried with nitrogen gas and sealed until needed. To prepare SLBs, individual wells were cut out and base etched for 15 min with 5 M KOH and then washed with PBS. To form SLBs, SUV suspension was deposited in each well or coverslip and allowed to form for 1 h at 45°C. After 1 h, wells were washed extensively with PBS. SLBs were incubated for 15 min with HEPES buffered saline (HBS: 20 mM HEPES, 135 mM NaCl, 4 mM KCl, 10 mM glucose, 1 mM CaCl₂, 0.5 mM MgCl₂) with 10 mM NiCl₂ to charge the DGS-NTA lipid with nickel. The SLBs were then washed in HBS containing 0.1% BSA to block the surface and minimize nonspecific protein adsorption. After blocking, the SLBs were functionalized by incubation for 1 h with His10-IL-1β. The labeling solution was then washed out, and each well was completely filled with HBS with 0.1% BSA. For SLBs set up on 96-well plates, the total well volume was 630 μl (manufacturer's specifications), and 530 μl was removed, leaving 100 μl of HBS 0.1% BSA in each well. Each SLB was functionalized with 100 μl His10-Halo-IL-1β at twofold the desired concentration for 1 h, and excess ligand was washed away with HBS.
To prepare SLBs on normal or nanopatterned coverslips, the coverslips were cleaned by bath sonication for 30 min in MilliQ H₂O. After sonication, coverslips were immersed in freshly prepared piranha solution (sulfuric acid:hydrogen peroxide, 3:1) for 15 min, rinsed in MilliQ water 20 times, and finally dried with nitrogen gas. To form SLBs, we sandwiched 30 μl of a SUV suspension between a petri dish and a coverslip. After a 5-min incubation, the petri dish was immersed in a MilliQ water bath. The coverslip was removed from the petri dish and washed in the MilliQ water to remove excess SUVs. Coverslips were assembled in an Attofluor Chamber (Thermo Fisher). The MilliQ water in each chamber was slowly replaced with PBS and incubated with 10 mM NiCl₂ for 15 min, followed by incubation with 0.1% BSA for 30 min. Finally, each SLB was functionalized with His10-Halo-IL-1β for 1 h, and excess ligand was washed away with 20 ml HBS.
Protein expression, purification, and labeling
To functionalize the SLBs with active mouse IL-1β, we expressed and purified a fusion protein of His10-Halo-IL-1β as previously described (Deliz-Aguirre et al, 2021). This protein was produced from two separate expression plasmids: pET28a-MmIL1b-Spytag and pET28a-His10-Halo-Tencon-SpycatcherV2. We expressed IL-1β-Spytag and His10-Halo-Tencon-SpycatcherV2 in BL21-DE3 Rosetta E. coli (Novagen) grown in Terrific Broth media. After an overnight induction with IPTG, the bacterial culture was pelleted and the cell pellets were resuspended in lysis buffer (50 mM TRIS pH 8.0, 250 mM NaCl, 5 mM imidazole with protease inhibitors, 100 μg/ml lysozyme) and lysed using sonication. To covalently couple His10-Halo-Tencon-Spycatcher to MmIL1b-Spytag, the cleared lysates were mixed and incubated with mild agitation for 1 h at 4°C. To ensure complete Spycatcher-Spytag conjugation, the lysates were mixed at a 2:1 ratio (vol:vol, based on starting bacterial culture volume) of MmIL1b-Spytag to His10-Halo-Tencon-Spycatcher. After the conjugation, the His10-Halo-Tencon-Spycatcher-IL-1β-Spytag was purified by Ni-NTA resin. Conjugation was monitored by mobility shift using SDS-PAGE. After elution, the protein was desalted with a HiTrap desalting column into 20 mM HEPES and subjected to anion exchange chromatography with a MonoQ column. This was followed by gel filtration over a Superdex 200 26/600 column into storage buffer (20 mM HEPES, 150 mM NaCl). In the text, this protein is referred to as His10-Halo-IL-1β. Following purification, the His10-Halo-Tencon-Spycatcher-IL-1β-Spytag protein was either snap-frozen with the addition of 20% glycerol in liquid nitrogen and placed at −80°C for long-term storage or directly used for HaloTag labeling. To label the HaloTag, a 2.5× molar excess of JF646-HaloLigand was mixed with the protein and incubated at room temperature for 1 h followed by an overnight incubation at 4°C. Postlabeling, the protein was gel filtered over a Superdex 200 26/600 column into storage buffer, snap-frozen with the addition of 20% glycerol in liquid nitrogen, and placed at −80°C for storage. The degree of labeling was calculated with a spectrophotometer by comparing 280 nm and 640 nm absorbance (usually 85-95% labeling efficiency was achieved).
For microscopy calibration of mScarlet single-molecule intensity, we used His10-mScarlet-IL-1β (previously described in Deliz-Aguirre et al, 2021). For mEGFP single-molecule intensity, His10-mEGFP was expressed from a pET28a vector and purified with Ni-NTA resin followed by gel filtration. Frozen aliquots of both proteins were stored at −80°C.
Immunofluorescence staining and widefield microscopy of RelA nuclear translocation
To analyze the nuclear translocation of RelA in IL-1β-stimulated EL4 cells with or without inhibition of Myddosome coalescence (Fig 1G), IL-1β-functionalized SLBs were prepared on coverslips without chromium grid lines (off grid) or on coverslips with 2.5 or 1 μm grid lines. Nonfunctionalized SLBs served as unstimulated controls. EL4 cells were incubated for 30 min with IL-1β-labeled SLBs functionalized with 100 IL-1β molecules/μm². Cells were then fixed with 3.5% (wt/vol) PFA containing 0.5% (wt/vol) Triton X-100 for 20 min at room temperature. Cells were washed with PBS and blocked with PBS 10% BSA (wt/vol) at 4°C overnight.
We acquired widefield microscopy images of RelA nuclear translocation on an inverted microscope (Nikon TiE) equipped with Lumencor Spectra-X illumination. Fluorescent images were acquired with a Nikon Plan Apo 40× 0.95 NA air objective lens and projected onto a Photometrics Prime 95 camera through a 1.5× magnification lens (calculated pixel size of 181.41 nm). Image acquisition was performed with NIS-Elements software.
Immunofluorescence staining of phospho-p65, phospho-IKK, K63-Ub, and M1-Ub

To analyze the colocalization of phospho-IKK, K63-Ub, and M1-Ub with MyD88-GFP (Fig 2), EL4 cells were stimulated with IL-1β-functionalized SLBs for 30 min and then fixed with 3.5% (wt/vol) PFA containing 0.5% (wt/vol) Triton X-100 for 20 min at room temperature. Staining was then performed with a traditional two-step staining method. After fixation, cells were washed with PBS and then blocked in PBS 10% (wt/vol) BSA containing 4% normal goat serum for 1 h at room temperature. Fixed cells were labeled with primary antibodies diluted in PBS 10% (wt/vol) BSA containing 0.1% Triton X-100 at 4°C overnight. The next day, cells were washed five times with PBS and labeled with secondary antibodies (goat anti-rabbit/human conjugated to Alexa Fluor 647; 1:1,000; Invitrogen, #A21246/A21445) and FluoTag-X4 anti-GFP conjugated to Atto488 (1:500, NanoTag Biotechnology, #N0304-At488-L) for 1 h at room temperature. Finally, cells were washed five times in PBS before imaging with TIRF or SIM microscopy. To analyze the colocalization of phospho-p65 with MyD88, we used a one-step staining protocol detailed below. To image pIKK with SIM (Fig 2C), coverslips were mounted in Prolong Glass Antifade Mountant (Thermo, #P36980).
TIRF microscopy data acquisition
Imaging of MyD88-GFP, mScarlet-TRAF6, and mScarlet-HOIL1 recruitment was performed on an inverted microscope (Nikon TiE) equipped with a Nikon fiber launch TIRF illuminator. Illumination was controlled with a laser combiner using the 488-, 561-, and 640-nm laser lines at ~0.35, ~0.25, and ~0.17 mW laser power, respectively (laser power measured after the objective). Fluorescence emission was collected through filters for GFP (525 ± 25 nm), RFP (595 ± 25 nm), and JF646 (700 ± 75 nm). All images were collected using a Nikon Plan Apo 100× 1.4 NA oil immersion objective that projected onto a Photometrics 95B Prime sCMOS camera with 2 × 2 binning (calculated pixel size of 150 nm) and a 1.5× magnifying lens. Image acquisition was performed using NIS-Elements software. All experiments were performed at 37°C. The microscope stage temperature was maintained using an OKO Labs heated microscope enclosure. Images were acquired at an interval of 4 s using exposure times of 60-100 ms.
Imaging EL4 cells endogenously expressing MyD88-GFP, mScarlet-TRAF6, or mScarlet-HOIL1 on IL-1β functionalized SLBs with TIRF microscopy

His10-Halo-JF646-IL-1β-functionalized SLBs were set up as described above. To quantify the density of IL-1β on the SLB, wells were prepared that were functionalized with identical labeling protein concentration and time, but with different ratios of labeled to unlabeled His10-Halo-IL-1β. Before application of cells, SLBs were analyzed by TIRF microscopy to check formation, mobility, and uniformity. Short time series were collected at wells containing a low ratio of labeled to unlabeled His10-Halo-IL-1β (e.g., < 1 His10-Halo-JF646-IL-1β molecule/μm²) to calculate ligand densities on the SLB based upon direct single-molecule counting. By controlling the concentration of His10-Halo-JF646-IL-1β in the labeling reaction, we could label SLBs with final IL-1β densities ranging from 1 to 200 molecules/μm².
Before each imaging experiment, we acquired calibration images using recombinant mEGFP and His10-mScarlet-IL-1β as previously described (Deliz-Aguirre et al, 2021). To image single GFP/mScarlet-i fluorophores, the recombinant purified proteins were diluted in HBS and adsorbed to KOH-cleaned glass. Single molecules of GFP/mScarlet-i were imaged using identical microscope acquisition settings to those used for cellular imaging. To image live cells, EL4 cells were pipetted onto supported lipid bilayers functionalized with His10-Halo-JF646-IL-1β. EL4 cells expressing MyD88-GFP, mScarlet-TRAF6, or mScarlet-HOIL1 were sequentially illuminated for 60-100 ms with the 488-nm and 561-nm laser lines at a frame interval of 4 s (Fig 5). Diffraction-limited punctate structures of MyD88-GFP, mScarlet-TRAF6, or mScarlet-HOIL1 were detected and tracked using the Fiji TrackMate plugin (Tinevez et al, 2017).
Structured illumination microscopy data acquisition
We acquired 3D structured illumination microscopy images of fixed EL4 cells on a Zeiss Elyra 7 microscope equipped with 405, 488, 561, and 642 nm laser lines for excitation. Image acquisition was performed with a 63×, NA 1.46 oil objective, and images were captured on a pco.edge 4.2 sCMOS camera. We acquired Z stacks of fixed cells using the 3D Leap acquisition plugin with a 200 nm Z-axis step size and 13 phases. We performed postprocessing image reconstruction in Zeiss Zen software.
Quantification and statistical analysis
All data are expressed as the mean ± standard deviation (SD) or mean ± standard error of the mean (SEM), as stated in the figure legends and results. The exact value of n and what n represents (e.g., number of cells, MyD88-GFP puncta, or experimental replicates) is stated in the figure legends and results. Means of experimental replicates were compared using an unpaired two-tailed Student's t-test implemented in RStudio. Data distribution was assumed to be normal based on density plots, but this was not formally tested. We performed no blinding of the data for any data analysis performed.
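As a minimal sketch of this comparison step (the replicate values below are invented placeholders, not data from this study), the replicate means of two groups can be compared in R as follows:

```r
# Unpaired two-tailed Student's t-test on replicate means, as used to
# compare experimental groups in this study. Values are hypothetical.
off_grid <- c(15.8, 17.1, 14.9, 16.5)  # e.g., % TRAF6-positive puncta per replicate
on_grid  <- c(6.3, 5.2, 7.0)

result <- t.test(off_grid, on_grid,
                 alternative = "two.sided",  # two-tailed
                 var.equal   = TRUE,         # classical Student's t-test
                 paired      = FALSE)        # unpaired groups
result$p.value  # compared against the 0.05 significance threshold
```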
Quantification of immunofluorescence staining of RelA nuclear localization
We quantified widefield microscopy images of RelA nuclear localization in an analysis pipeline implemented in Fiji and CellProfiler. First, we performed background subtraction from the MyD88-GFP and RelA (Cy3 channel) immunofluorescence staining micrographs in Fiji. Background was removed in two steps: first, we subtracted a dark-field image from each image; second, we estimated the cytosolic background by generating a median blur from each micrograph and then subtracted this median blur from the parent micrograph.
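The two subtraction steps can be sketched in base R as below, treating each image as a numeric matrix. This is a schematic stand-in for the Fiji operations: the square sliding-window median is an assumption in place of Fiji's circular median blur, and `img` and `dark` are hypothetical inputs.

```r
# Step 2 helper: local median over a square window of half-width r,
# approximating a median blur of radius r pixels.
median_blur <- function(img, r = 25) {
  out <- img
  nr <- nrow(img); nc <- ncol(img)
  for (i in seq_len(nr)) {
    for (j in seq_len(nc)) {
      rows <- max(1, i - r):min(nr, i + r)
      cols <- max(1, j - r):min(nc, j + r)
      out[i, j] <- median(img[rows, cols])
    }
  }
  out
}

subtract_background <- function(img, dark, r = 25) {
  corrected <- img - dark          # step 1: remove camera dark-field offset
  bg <- median_blur(corrected, r)  # step 2: estimate diffuse cytosolic background
  pmax(corrected - bg, 0)          # clip negative values after subtraction
}
```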
We then performed segmentation and quantification using a custom CellProfiler pipeline that allowed images to be processed in batch. We segmented the cell nucleus using the DAPI channel. Selected nuclei retained for analysis had to have a diameter between 30 and 60 pixels; this excluded small DAPI-stained objects that corresponded to cell fragments and apoptotic cells. Segmentation of the 488-phalloidin staining channel identified the total cell volume. Both segmentation steps were performed using an Otsu threshold. The volume corresponding to the cellular cytoplasm was identified by subtracting the nucleus volume from the total cell volume. The RelA staining intensity of the cell nucleus and cytoplasm was extracted, and the ratio calculated. RelA nucleus-to-cytoplasm ratios from images acquired on 2.5 and 1 μm grids and from unstimulated negative controls were normalized to the RelA nucleus-to-cytoplasm ratio from off-grid data. We normalized intensity using the following equation: Norm.Int = (Intensity − quantile(0.05)_off grid)/(quantile(0.95)_off grid − quantile(0.05)_off grid). Finally, we performed data visualization of the normalized RelA nucleus-to-cytoplasm ratio in ggplot2 (Fig 1H).
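A minimal R sketch of this normalization is shown below; the same form is reused later for puncta intensities with the 0.01/0.99 quantiles in place of 0.05/0.95. The input vectors are hypothetical placeholders, not data from this study.

```r
# Normalize measurements to the off-grid reference distribution using
# its lower and upper quantiles (defaults match the RelA analysis).
normalize_to_off_grid <- function(x, off_grid, lo = 0.05, hi = 0.95) {
  q <- unname(quantile(off_grid, c(lo, hi)))
  (x - q[1]) / (q[2] - q[1])
}

# Hypothetical usage: nucleus-to-cytoplasm ratios for one condition,
# normalized against the off-grid distribution.
off_grid_ratio <- rnorm(1000, mean = 0.45, sd = 0.05)  # placeholder values
grid_ratio     <- rnorm(1000, mean = 0.34, sd = 0.05)
norm_int <- normalize_to_off_grid(grid_ratio, off_grid_ratio)
```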
Quantification of immunofluorescence staining and analysis of phospho-p65, phospho-IKK, K63-Ub, and M1-Ub

We quantified TIRF microscopy images of pp65, pIKK, K63-Ub, and M1-Ub immunofluorescence staining in an analysis pipeline implemented in Fiji and CellProfiler (McQuin et al, 2018). First, we performed background subtraction from the MyD88-GFP and immunofluorescence staining TIRF micrographs in Fiji. Background was removed in two steps: first, we subtracted a dark-field image from each image; second, we estimated the cytosolic background by generating a median blur from each TIRF micrograph and then subtracted this median blur from the parent TIRF micrograph.
Next, we segmented MyD88-GFP puncta and quantified fluorescence intensity using a custom CellProfiler pipeline that allowed images to be processed in batch. We segmented MyD88-GFP puncta using an Otsu threshold. Only segmented MyD88-GFP puncta with a diameter between 3 and 30 pixels were retained. After image segmentation and object detection, the integrated intensity and mean intensity of the MyD88-GFP and immunofluorescence staining channels were extracted for each segmented puncta. We performed
manual inspection of the segmented images and objects to verify correct processing and remove incorrectly segmented puncta. Data normalization and visualization were performed using R. To compare MyD88-GFP puncta size and staining intensity across different replicates acquired on different days, we normalized puncta fluorescence intensities. Fluorescence intensities of MyD88-GFP and immunofluorescence staining from images acquired on 2.5 μm and 1 μm grids were normalized to the intensity of those from off-grid data. We normalized intensity using the following equation: Norm.Int = (Intensity − quantile(0.01)_off grid)/(quantile(0.99)_off grid − quantile(0.01)_off grid).
We used the following criteria to classify MyD88 puncta in fixed cells as single Myddosomes or clusters of Myddosomes (Figs 3G and H, and 4G and H). We observed that MyD88 puncta that formed on grids rarely had a fluorescence intensity greater than 0.5 (normalized integrated intensity, Figs 3E and F, and 4E and F). In contrast, we found that off grid, between 3 and 5% of MyD88 puncta were classified as clusters (Appendix Fig S1E, F, I and J). This was in agreement with our live-cell measurement of MyD88 puncta size (Fig 1F). Based on these observations and the previous measurement that nanopatterned grids disrupted cluster formation (Fig 1F), we defined MyD88 puncta with an intensity ≥ 0.5 as clusters and puncta below this threshold as single Myddosomes. To calculate the per-Myddosome staining intensity for clusters (Fig EV2), we divided the MyD88-GFP normalized integrated intensity by this value (0.5), thereby giving an estimate of the number of Myddosomes within a puncta. We then divided the puncta pp65/pIKK/K63-Ub/M1-Ub normalized integrated staining intensity by the number of Myddosomes per puncta, thereby calculating the per-Myddosome intensity for each complex within the cluster (Fig EV2). Finally, we performed data visualization of MyD88-GFP puncta size and immunofluorescence staining intensity in ggplot2 and GraphPad Prism (Figs 3 and 4).

Quantification and analysis of MyD88-GFP puncta and colocalization and recruitment of mScarlet-TRAF6/mScarlet-HOIL1

To quantify the dynamics of MyD88-GFP, mScarlet-TRAF6, and mScarlet-HOIL1, we used an image analysis pipeline described previously (Deliz-Aguirre et al, 2021) and briefly described here. First, images in each channel were processed in Fiji to remove background fluorescence. Background subtraction was performed in two steps. First, we subtracted a dark frame image (acquired with no light exposure to the camera, but identical exposure time to the experimental acquisition) to remove noise intrinsic to the camera. Then, we subtracted a median-filtered image (generated in Fiji from a median blurred image with a radius of 25 pixels) to remove the background associated with cytosolic fluorescence. Next, individual cells were segmented in Fiji according to a maximum projection of the MyD88-GFP fluorescence channel. After segmentation, we tracked MyD88-GFP and mScarlet-TRAF6/mScarlet-HOIL1 puncta in each cell using the Fiji TrackMate plugin (Tinevez et al, 2017).
Tracking coordinates generated by TrackMate were imported into MATLAB, and the fluorescence intensity of MyD88-GFP puncta was measured from a 3 × 3 pixel region. To quantify colocalization between MyD88-GFP and mScarlet-TRAF6/HOIL1 puncta, we used the tracking coordinates to identify puncta that colocalized for at least two or more consecutive frames. Colocalized puncta were defined as having centroids ≤ 0.25 μm apart at a given time point. By these criteria, tracked MyD88 puncta were classified as either positive or negative for TRAF6/HOIL1 colocalization.
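A schematic R re-implementation of this colocalization rule is sketched below (the original analysis was performed in MATLAB, so this is an illustration rather than the actual code). It assumes each track is a data frame with columns `frame`, `x`, and `y` in μm, with consecutive integer frame numbers.

```r
# Classify a MyD88 track as partner-positive if any partner puncta
# (TRAF6 or HOIL1) sits within d_max of its centroid for at least
# min_frames consecutive frames.
is_colocalized <- function(myd88, partner, d_max = 0.25, min_frames = 2) {
  frames <- sort(unique(myd88$frame))
  hits <- sapply(frames, function(f) {
    a <- myd88[myd88$frame == f, ]
    b <- partner[partner$frame == f, ]
    if (nrow(a) == 0 || nrow(b) == 0) return(FALSE)
    # pairwise centroid distances between the two channels at this frame
    d <- sqrt(outer(a$x, b$x, "-")^2 + outer(a$y, b$y, "-")^2)
    any(d <= d_max)
  })
  # require a run of at least min_frames consecutive colocalized frames
  r <- rle(hits)
  any(r$values & r$lengths >= min_frames)
}
```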
To estimate the size and number of MyD88 molecules in MyD88-GFP puncta, we acquired images of single mEGFP fluorophores (referred to simply as GFP) adsorbed to glass with identical imaging settings to those used in live-cell imaging. Images of single molecules of mEGFP were processed identically to live-cell imaging data, with background subtraction, tracking, and intensity measurement performed as described above. Once intensity measurements were obtained for single molecules of GFP, the fluorescence intensity of MyD88-GFP puncta was divided by this value to yield an estimate of MyD88 copy number. To normalize puncta by the number of Myddosome complexes, we divided the GFP-normalized intensity by 4.5 (i.e., the intensity of a single Myddosome, based on the broad fluorescence intensity distribution of a complex containing 6× MyD88-GFP (Lin et al, 2010a); see Fig EV3B and Deliz-Aguirre et al, 2021). Using these criteria, a MyD88 puncta is defined as a Myddosome complex if its fluorescence intensity is greater than or equal to 4.5× the mean intensity of GFP and as a cluster of Myddosomes (i.e., two or more complexes) if its fluorescence intensity is greater than or equal to 9× the mean intensity of GFP.
Finally, we performed data analysis and visualization in R. MyD88-GFP puncta with an intensity ≥ 4.5× the mean intensity of mEGFP were defined as fully assembled Myddosome complexes (see Deliz-Aguirre et al, 2021). Myddosome clusters (defined as MyD88-GFP puncta containing two or more Myddosome complexes) were defined as MyD88-GFP puncta with an intensity ≥ 9× the mean intensity of mEGFP (see Fig EV3B).
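These thresholds reduce to a short classification rule; a minimal R sketch is given below, where the puncta intensities and the mean single-GFP intensity are hypothetical values chosen for illustration.

```r
# Convert puncta intensity to estimated MyD88-GFP copy number using the
# mean single-GFP intensity, then classify using the thresholds above:
# >= 4.5x GFP = assembled Myddosome, >= 9x GFP = cluster (2+ complexes).
classify_puncta <- function(intensity, gfp_mean) {
  n_gfp <- intensity / gfp_mean  # estimated MyD88-GFP copies per puncta
  ifelse(n_gfp >= 9, "cluster",
         ifelse(n_gfp >= 4.5, "single Myddosome", "below threshold"))
}

# Hypothetical example: intensities in camera counts, mean GFP = 120 counts
classify_puncta(c(300, 700, 1500), gfp_mean = 120)
```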
Figure 3. Comparison of pIKK and pp65 antibody staining at Myddosomes assembled off and on nanopatterned grids.
A, B Top, TIRF images of fixed EL4-MyD88-GFP cells incubated with IL-1 functionalized SLBs for 30 min and stained with antibodies against pp65 (A) or pIKK (B). Scale bar, 5 μm. Region of interest (red box, merge image) shows an example of MyD88-GFP puncta that colocalize with pp65 (A) or pIKK (B) puncta. Bottom, 2D histograms of the distribution of MyD88 puncta intensity and associated pp65 (A) or pIKK (B) staining intensity. The linear fit is shown as a blue line superimposed on the 2D histograms (Pearson correlation coefficient, R, of the linear fit labeled on the 2D histograms). Blue-shaded regions on the scatter plot highlight MyD88 puncta classified as clustered Myddosomes. Bottom right, zoomed images of the region of interest (red box overlaid on the merge image, top) show the MyD88-GFP channel and the associated pp65 (A) and pIKK (B) channel (pp65/pIKK images are displayed with the Fire LUT). Red data points on the 2D histogram are from the indicated puncta in the MyD88-GFP image (numbered red arrows). Scale bar, 1 μm.
C, D TIRF images of fixed EL4-MyD88-GFP cells incubated with partitioned IL-1 functionalized SLBs (2.5 μm, top row, and 1 μm, bottom row) and stained with anti-pp65 (C) or anti-pIKK (D). Region of interest (red box overlaid on the merge image) shows examples of MyD88-GFP puncta that colocalize with pp65 (C) or pIKK (D) puncta. Scale bar, 5 μm. Far right, zoomed image of pp65 (C) or pIKK (D) puncta (from the region of interest overlaid on the merge image) displayed with the Fire LUT. Scale bar, 1 μm.
E, F 2D histograms of MyD88-GFP puncta intensity and associated pp65 (E) or pIKK (F) staining intensity on 2.5 and 1 μm grids. The linear fit is shown as a blue line superimposed on the 2D histograms (Pearson correlation coefficient, R, of the linear fit labeled on the 2D histograms).
G, H Quantification of mean pp65 (G) or pIKK (H) staining intensity for puncta classified as single or clustered Myddosomes, and MyD88 puncta formed on 2.5 and 1 μm grids. The normalized mean intensities for clusters, single Myddosomes, and MyD88 puncta on 2.5 and 1 μm grids are the following: for pp65, 0.318 ± 0.044, 0.163 ± 0.009, 0.059 ± 0.005, and 0.057 ± 0.008; for pIKK, 0.393 ± 0.051, 0.130 ± 0.004, 0.037 ± 0.006, and 0.035 ± 0.007 (a.u., mean ± SEM, mean values stated in the order they appear on the plot, left to right). Violin plots show the distribution of all segmented MyD88 puncta. Data points superimposed on the violin plots are the averages from independent experiments. P-values are * < 0.05, *** < 0.001, **** < 0.0001. Bars represent mean ± SEM (n = 3-4 biological replicates for pp65, with 10,273, 14,009, and 2,675 puncta off grid, on 2.5 μm and 1 μm grids measured in total across all replicates; n = 4-5 biological replicates for pIKK, with 2,375, 35,496, and 59,593 puncta off grid, on 2.5 μm and 1 μm grids measured in total across all replicates). Statistical significance was determined using an unpaired two-tailed Student's t-test.
(Fig 5C, All); however, when we normalized MyD88 puncta intensity to the number of Myddosomes per puncta (see Materials and Methods, and Fig EV3B), we found that 58.4 ± 5.9% of Myddosome clusters were TRAF6-positive (Fig 5C, ≥ 2). In comparison, the percentage of TRAF6-positive puncta was 6.1 ± 1.8% and 36.2 ± 5.2% for puncta containing ≤ 1 or > 1 Myddosome complexes (mean ± SEM, from 6 replicates, Fig 5C and Appendix Fig S4A).

Figure 4. Comparison of Myddosome K63-Ub and M1-Ub antibody staining on and off nanopatterned grids.
A, B Top, TIRF images of fixed EL4-MyD88-GFP cells incubated with IL-1 functionalized SLBs for 30 min and stained with anti-K63-Ub (A) or anti-M1-Ub (B). Scale bar, 5 μm. Region of interest (red box, merge image) shows an example of MyD88-GFP puncta that colocalize with K63-Ub (A) or M1-Ub (B) puncta. Bottom, 2D histograms of the distribution of MyD88 puncta intensity and associated K63-Ub (A) or M1-Ub (B) staining intensity. The linear fit is shown as a blue line superimposed on the 2D histograms (Pearson correlation coefficient, R, of the linear fit labeled on the 2D histograms). The blue-shaded region on the scatter plot highlights MyD88 puncta classified as clustered Myddosomes. Bottom right, zoomed images of the region of interest (red box overlaid on the merge image, top) show the MyD88-GFP channel and the associated K63-Ub (A) and M1-Ub (B) channel (K63/M1-Ub images are displayed with the Fire LUT). Red data points on the 2D histogram are from the indicated puncta in the MyD88-GFP image (numbered red arrows). Scale bar, 1 μm.
C, D TIRF images of fixed EL4-MyD88-GFP cells incubated with partitioned IL-1 functionalized SLBs (2.5 μm, top row, and 1 μm, bottom row) and stained with anti-K63-Ub (C) or anti-M1-Ub (D). Region of interest (red box overlaid on the merge image) shows an example of MyD88-GFP puncta that colocalize with K63-Ub (C) or M1-Ub (D) puncta. Scale bar, 5 μm. Far right, zoomed image of K63-Ub (C) or M1-Ub (D) puncta (from the region of interest overlaid on the merge image) displayed with the Fire LUT. Scale bar, 1 μm.
E, F 2D histograms of the distribution of MyD88 puncta intensity and associated K63-Ub (E) or M1-Ub (F) staining intensity on 2.5 and 1 μm grids. The linear fit is shown as a blue line superimposed on the 2D histograms (Pearson correlation coefficient of the linear fit labeled on the 2D histograms).
G, H Quantification of mean K63-Ub (G) or M1-Ub (H) staining intensity for puncta classified as single or clustered Myddosomes, and MyD88 puncta formed on 2.5 and 1 μm grids.
Figure 5. E3 ligases TRAF6 and HOIL1 are recruited to Myddosomes.
A, B Top: TIRF images of MyD88-GFP and mScarlet-TRAF6 (A) or mScarlet-HOIL1 (B). Region of interest (yellow box, merge image) shows an example of a MyD88-GFP puncta colocalized with mScarlet-TRAF6 (A) or mScarlet-HOIL1 (B). Bottom: Time-series TIRF images from the region of interest (left) and fluorescence intensity time series (right) of MyD88 and TRAF6 (A) or HOIL1 (B).
C, D Quantification of the percentage of MyD88-GFP puncta that colocalized with TRAF6 (C) or HOIL1 (D), grouped for all puncta and puncta containing ≤ 1×, > 1×, and ≥ 2× Myddosome complexes. The percentages for TRAF6 (C) in these groups are 15.4 ± 2.4%, 6.1 ± 1.8%, 36.2 ± 5.2%, and 58.4 ± 5.9%, respectively (mean ± SEM). The percentages for HOIL1 (D) in these groups are 8.6 ± 0.5%, 0.4 ± 0.2%, 14.9 ± 1.9%, and 25.9 ± 5.0%, respectively (mean ± SEM). Violin plots indicate the distribution of individual cell measurements. Colored dots superimposed on the violin plots are the averages from independent experiments. Bars represent mean ± SEM (n = 6 biological replicates for TRAF6, with 191 cells measured in total across all replicates, see also Appendix Fig S4A; n = 9 biological replicates for HOIL1, with 230 cells measured in total across all replicates, see also Appendix Fig S4B).
E Quantification of the percentage of TRAF6- or HOIL1-positive MyD88 puncta on single Myddosomes and clusters containing 2-4, 5-7, 8-10, 11-13, and 14-16 Myddosome complexes. With greater Myddosome numbers per puncta, the percentage of MyD88 colocalized with TRAF6 or HOIL1 increases. The data points represent the average across replicates, and bars represent mean ± SEM (n = 6 biological replicates for TRAF6, with 191 cells measured in total across all replicates, see also Appendix Fig S4A; n = 9 biological replicates for HOIL1, with 230 cells measured in total across all replicates, see also Appendix Fig S4B).
F Analysis of TRAF6 and HOIL1 landing size and recruitment time to MyD88 puncta. Left, time-series TIRF images showing MyD88-GFP puncta nucleation and the appearance of TRAF6. Right, the associated fluorescence intensity time trace for the time series shown. Recruitment time is defined as the time interval from Myddosome nucleation (e.g., time = 0 s when the MyD88-GFP puncta appears) to the appearance of a TRAF6 or HOIL1 puncta. Landing size is defined as the fluorescence intensity of the MyD88 puncta at the time when TRAF6 or HOIL1 appears (indicated on the fluorescence intensity trace with an arrow).
G Histogram of the landing size of MyD88 puncta, expressed as the number of Myddosome complexes per puncta, for TRAF6 (top, n = 6,015 recruitment events, technical replicates, from 183 cells pooled from six biological replicates) and HOIL1 (bottom, n = 5,562 recruitment events, technical replicates, from 212 cells pooled from nine biological replicates) recruitment. The histogram is overlaid with a density plot of the distribution. Black horizontal lines on the histograms denote the average landing size (mean ± SEM).
H Histogram of the recruitment time of TRAF6 (top, n = 94 recruitment events, technical replicates, from four cells pooled from three biological replicates) and HOIL1 (bottom, n = 69 recruitment events, technical replicates, from four cells pooled from four biological replicates) overlaid with a density plot of the distribution. Black horizontal lines on the histograms denote the average recruitment time (mean ± SD).
Source data are available online for this figure.
Our results show that Myddosome clustering, followed by the sequential recruitment of TRAF6 and then HOIL1 (Fig 5H), is a potential mechanism to generate signaling outputs proportional to the stimulation level (Fig 7C).
Figure 7. Clustering increases TRAF6 and HOIL1 lifetime at Myddosomes.
A, B Histograms showing the lifetime of TRAF6 and HOIL1 recruitment to single Myddosomes, and clusters containing 2-4 and 5-7 Myddosome complexes, off (blue) and on (red) 1 μm grids.
C Model showing how a single Myddosome has a digital signaling output. However, the amplitude of this output increases proportionally as the density of Myddosome complexes increases within clusters. The increased amplitude of the M1/K63-Ub, pIKK, and pp65 signaling output is likely due to the increase in stability of LUBAC and TRAF6 at larger clusters. As Myddosomes are biochemically coupled to extracellular IL-1, this mechanism explains how IL-1 signaling can generate both digital and analog signaling responses that are proportional to the stimulating dose of IL-1.
Source data are available online for this figure.
5 ± 1.8 GFPs, n = 8 biological replicates, with 88,304 MyD88-GFP puncta from 161 cells measured in total across all replicates; for 2.5 μm grids, 5.2 ± 1.7 GFPs, n = 3 biological replicates, with 13,164 MyD88-GFP puncta from 31 cells measured in total across all replicates; for 1 μm grids, 2.3 ± 0.1 GFPs, n = 8 biological replicates, with 126,600 MyD88-GFP puncta from 254 cells measured in total across all replicates. Bars represent mean ± SEM.
G Widefield images showing RelA localization in unstimulated EL4 cells and EL4 cells stimulated by SLBs formed on and off grids. EL4 cells were fixed 30 min after addition to IL-1-functionalized SLBs and stained for RelA (magenta); DAPI-stained nuclei (blue). Scale bar, 10 μm.
H Quantification of the RelA nucleus-to-cytoplasm ratio. Violin plots show the distribution of measurements from individual cells. Data points superimposed on the violin plots are the averages from independent experiments. The RelA nucleus-to-cytoplasm ratio of single cells marked with X in panel (G) is superimposed on the violin plot. The RelA nucleus-to-cytoplasm ratios off grids, on 2.5 and 1 μm grids, and in unstimulated conditions are 0.45 ± 0.01, 0.34 ± 0.02, 0.32 ± 0.02, and 0.21 ± 0.03 (mean ± SEM), respectively. P-values are * = 0.0133 and ** = 0.0027. Bars represent mean ± SEM (n = 3-5 biological replicates, with a total of 18,370, 3,627, 2,988, and 965 cells measured off grids, on 2.5 and 1 μm grids, and in unstimulated conditions, respectively). Statistical significance was determined using an unpaired two-tailed Student's t-test. Source data are available online for this figure.
The normalized mean intensities for clustered Myddosomes, single Myddosomes, and MyD88 puncta on 2.5 and 1 μm grids are the following: for K63-Ub, 0.444 ± 0.030, 0.257 ± 0.025, 0.113 ± 0.015, and 0.104 ± 0.008; for M1-Ub, 0.520 ± 0.020, 0.183 ± 0.008, 0.174 ± 0.022, and 0.153 ± 0.016 (a.u., mean ± SEM, mean values stated in the order they appear on the plot, left to right). Violin plots show the distribution of all segmented MyD88 puncta. Data points superimposed on the violin plots are the averages from independent experiments. P-values are ** < 0.01, *** < 0.001. Bars represent mean ± SEM (n = 3-4 biological replicates for K63, with 14,571, 27,494, and 24,026 puncta off grid, on 2.5 and 1 μm grids measured in total across all replicates; n = 3-4 biological replicates for M1, with 3,114, 6,091, and 1,844 puncta off grid, on 2.5 and 1 μm grids measured in total across all replicates). Statistical significance was determined using an unpaired two-tailed Student's t-test. Source data are available online for this figure.

We analyzed the relationship between lifetime and TRAF6 recruitment. Using a threshold of 50 s to define long-lived MyD88 puncta, we found that 34.6 ± 3.5% of MyD88 puncta with lifetimes ≥ 50 s colocalized with TRAF6 versus 7.1 ± 1.3% of puncta with lifetimes < 50 s (mean ± SEM, from six replicates, Fig EV3D). In summary, stable Myddosome clusters are more likely to recruit TRAF6. We applied the same analysis to investigate HOIL1 recruitment. We found that HOIL1-positive MyD88 puncta were greater in size than noncolocalized puncta (47.4 MyD88s for positive puncta versus 11.6 MyD88s for negative puncta, Fig EV3E). 19.3 ± 1.3% of MyD88 puncta with a lifetime ≥ 50 s colocalized with HOIL1. In comparison, only 2.5 ± 0.2% of MyD88 puncta with lifetimes < 50 s colocalized with HOIL1 (Fig EV3F). In summary, as observed for TRAF6, MyD88 puncta that recruited HOIL1 were more likely to be larger puncta with longer lifetimes. We investigated the role of Myddosome clustering in HOIL1 recruitment. We found that on average 8.6 ± 0.5% of all MyD88-GFP puncta colocalized with HOIL1 (mean ± SEM, from 9 replicates, Fig 5D and Appendix Fig S4B).
Figure 6.
C Quantification of the percentage of MyD88-GFP puncta that colocalized with TRAF6 off grids and on 1 μm grids at a ligand density of 1 IL-1/μm²; the percentages are 16.2 ± 2.5% and 6.1 ± 1.5%, respectively (mean ± SEM). Violin plots indicate the distribution of individual cell measurements. Colored dots superimposed on the violin plots are the averages from independent experiments. Bars represent mean ± SEM (n = 4 experimental replicates off grids, with a total of 24,315 MyD88 puncta from 91 cells; n = 3 biological replicates on 1 μm grids, with a total of 23,161 MyD88 puncta from 70 cells). Statistical significance was determined using an unpaired two-tailed Student's t-test.
D, E TIRF images of EL4 cells expressing MyD88-GFP and mScarlet-TRAF6 stimulated on IL-1 functionalized SLBs at a ligand density of 10 IL-1/μm² off grids (D) or on 1 μm grids (E). Kymographs derived from the dashed lines overlaid on the TIRF images (left panel). Scale bars, 5 μm.
F Quantification of the percentage of MyD88-GFP puncta that colocalized with TRAF6 off grids and on 1 μm grids at a ligand density of 10 IL-1/μm²; the percentages are 25.0 ± 1.6% and 12.7 ± 1.8%, respectively (mean ± SEM). Violin plots indicate the distribution of individual cell measurements. Colored dots superimposed on the violin plots are the averages from independent experiments. Bars represent mean ± SEM (n = 4 biological replicates off grids, with a total of 34,452 MyD88 puncta from 87 cells; n = 4 biological replicates on 1 μm grids, with a total of 71,525 MyD88 puncta from 100 cells). Statistical significance was determined using an unpaired two-tailed Student's t-test.
G, H TIRF images of EL4 cells expressing MyD88-GFP and mScarlet-HOIL1 stimulated on IL-1 functionalized SLBs at a ligand density of 10 IL-1/μm² off grids (G) or on 1 μm grids (H). Kymographs derived from the dashed lines overlaid on the TIRF images (left panel). Scale bars, 5 μm.
I Quantification of the percentage of MyD88-GFP puncta that colocalized with HOIL1 off grids and on 1 μm grids at a ligand density of 10 IL-1/μm²; the percentages are 7.0 ± 0.8% and 1.7 ± 0.7%, respectively (mean ± SEM). Violin plots indicate the distribution of individual cell measurements. Colored dots superimposed on the violin plots are the averages from independent experiments. Bars represent mean ± SEM (n = 4 biological replicates off grids, with a total of 53,852 MyD88 puncta from 74 cells; n = 4 biological replicates on 1 μm grids, with a total of 55,075 MyD88 puncta from 154 cells). Statistical significance was determined using an unpaired two-tailed Student's t-test.
J, K TIRF images of EL4 cells expressing MyD88-GFP and mScarlet-HOIL1 stimulated on IL-1 functionalized SLBs at a ligand density of 32 IL-1/μm² off grids (J) or on 1 μm grids (K). Kymographs derived from the dashed lines overlaid on the TIRF images (left panel). Scale bars, 5 μm.
L Quantification of the percentage of MyD88-GFP puncta that colocalized with HOIL1 off grids and on 1 μm grids at a ligand density of 32 IL-1/μm²; the percentages are 8.6 ± 0.5% and 4.2 ± 0.5%, respectively (mean ± SEM). Violin plots indicate the distribution of individual cell measurements. Colored dots superimposed on the violin plots are the averages from independent experiments. Bars represent mean ± SEM (n = 9 biological replicates off grids, with a total of 118,354 MyD88 puncta from 230 cells; n = 4 biological replicates on 1 μm grids, with a total of 68,819 MyD88 puncta from 138 cells). Statistical significance was determined using an unpaired two-tailed Student's t-test. Source data are available online for this figure. | 2023-01-09T14:10:45.762Z | 2023-02-17T00:00:00.000 | {
"year": 2023,
"sha1": "84a0589fe6dec680cf5aa50f5a5871c6ce116f04",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.15252/embr.202357233",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "60fd5fc25d501645865c98789583803152198182",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
52142799 | pes2o/s2orc | v3-fos-license | Effect of Orally Administered Atractylodes macrocephala Koidz Water Extract on Macrophage and T Cell Inflammatory Response in Mice
The rhizome of Atractylodes macrocephala Koidz (AM) is a constituent of various Qi booster compound prescriptions. We evaluated inflammatory responses in macrophages and T cells isolated from mice following oral administration of AM water extract (AME). Peritoneal exudate cells were isolated from thioglycollate-injected mice and alterations in scavenger receptors were examined. Peritoneal macrophages were stimulated with lipopolysaccharide (LPS). Serum cytokine responses to intraperitoneal LPS injection were also evaluated. Splenocytes were isolated and their composition and functional responses were measured. The content of atractylenolide I and atractylenolide III, known anti-inflammatory ingredients, in AME was 0.0338 mg/g extract and 0.565 mg/g extract, respectively. AME increased the number of SRA(+)CD11b(+) cells in response to thioglycollate. Peritoneal macrophages isolated from the AME group showed no changes in inflammatory markers such as tumor necrosis factor- (TNF-) α, interleukin- (IL-) 6, inducible nitric oxide synthase, and cyclooxygenase-2 but exhibited a decrease in CD86 expression. Interestingly, AME decreased the serum levels of TNF-α and IL-6 upon intraperitoneal injection of LPS. Regarding the adaptive immune system, AME increased the CD4(+) T cell population and major histocompatibility complex class II molecule expression in the spleen, and cultured splenocytes from the AME group showed increased production of IL-4 concurrent with decreased interferon-γ production during T cell activation. AME promoted the replenishment of peritoneal macrophages during the inflammatory response but its anti-inflammatory activity did not appear to be mediated by the modulation of macrophage activity. AME also altered the immune status of CD4 T cells, promoting the Th2 response.
Introduction
Inflammation is a protective response to eliminate harmful stimuli, and immune cells are the major participants in this process. Depending on the modality of antigen recognition and the capacity to generate a memory response, immune cells are divided into the innate immune system and the adaptive immune system [1]. Innate immune cells such as macrophages and dendritic cells react instantly to antigen with limited receptor specificity [1]. Adaptive immune cells, consisting of T cells and B cells, are antigen-specific, initiate a response to antigen that has entered the peripheral lymphoid tissue, and generate a memory response [1]. The innate immune cells are principal players in the early stages of inflammation, but over time, adaptive immune cells take over.
Tissue-resident macrophages play a key role in immunity and tissue integrity [2]. Most tissue macrophages are derived from embryonic precursors [3]. Under steady-state conditions their populations are maintained through their longevity and by local proliferation, and some macrophages are replenished by blood monocyte-derived cells [3]. During inflammation, bone marrow-derived monocytes are recruited to the site and differentiate into macrophages [3]. Macrophages eliminate pathogens and antigens through phagocytosis and induce inflammatory responses by producing cytokines and enzymes such as tumor necrosis factor- (TNF-) α, interleukin- (IL-) 6, inducible nitric oxide synthase (iNOS), and cyclooxygenase- (COX-) 2. In addition, macrophages are one type of professional antigen presenting cells (APCs) that present antigens to T cells [4,5].
T cells, which mainly consist of CD4 T cells and CD8 T cells, are activated when T cell receptors (TCRs) contact antigenic peptides bound by major histocompatibility complex (MHC) molecules on APCs [6]. CD4 T cells, which account for more than two-thirds of T cells, can be differentiated into various effector T helper (Th) cells such as Th1, Th2, Th17, T follicular helper, and T regulatory cells [7]. Among these subsets, Th1 and Th2 cells were the first types to be defined. Th1 cells secrete high levels of interferon- (IFN-) γ and are efficient in the defense against intracellular pathogens by activating macrophages, whereas Th2 cells secrete interleukin- (IL-) 4, IL-5, and IL-13 and protect the host from helminth infection by recruiting eosinophils and mast cells [7]. Although these T helper cells are important for host defense, chronic activation of any Th cell type can cause immune-mediated disorders. Th1 cells play a critical role in organ-specific autoimmunity and chronic inflammatory disorders, and Th2 cells are responsible for allergic inflammation [7].
The rhizome of Atractylodes macrocephala Koidz (AM), belonging to the Compositae, has been used for the treatment of functional defects in the digestive system such as loss of appetite, abdominal distention, and diarrhea. According to traditional Chinese medicine, AM invigorates Qi by resolving abnormal retention of fluid in the gastrointestinal tract. AM is a constituent of various Qi booster compound prescriptions. In traditional Chinese medicine, one of the essential functions of Qi is defense. For this reason, Qi boosting herbs are thought to enhance the immune system. Since Qi boosting herbs are taken on a preventive basis to improve the immune status of individuals without overt defects, it is necessary to evaluate how the immune system may be altered in normal individuals following the administration of AM. Despite its frequent use, there have been few studies to explore the effects of AM on the immune system. AM contains several bioactive sesquiterpenoids such as atractylenolide I, atractylenolide II, and atractylenolide III, as well as polyacetylenes [8]. In vitro treatment of macrophages with atractylenolide I, atractylenolide III, and some polyacetylenic compounds inhibited lipopolysaccharide- (LPS-) induced TNF-α and iNOS expression [9,10]. Oral administration of these lipid-soluble components showed anti-inflammatory activity in mice [11,12]. However, the majority of traditional herbal preparations are water-based decoctions, which results in a low yield of pharmacologically active lipid-soluble components. Furthermore, polyacetylenes can be easily destroyed in boiling water. Therefore, we wanted to address whether anti-inflammatory responses occur in macrophages isolated from mice given AM extracted in boiling water (AME). We also examined the effect of AME on the serum inflammatory response. Finally, we examined the composition and functional response of splenocytes for any alteration in the adaptive immune system after AME supplementation.
Materials and Methods
2.1. Preparation of Sample. AM originating from Eusung (South Korea) was purchased from E-Pulip Co., Ltd. (Lot. EPL1356-4) (Seoul, South Korea). A voucher specimen (# 2013-AM) was deposited in the Laboratory of Herbal Immunology, Kyung Hee University. Briefly, 100 g of sample was ground, extracted with 1 L of deionized water (DW) in a reflux apparatus and heating mantle for 2 h at 95°C, and filtered through Whatman number 2 filter paper (Whatman International, Kent, England). The extract was concentrated using a rotary evaporator and freeze-dried under vacuum. The yield of AME was 37.7%. For high-performance liquid chromatography (HPLC) analysis, 0.4 g of AME was dissolved in 10 ml of DW and sonicated for 5 min at 25°C. The extract was added to ethyl acetate, shaken to mix, and allowed to stand for 1 min. The upper layer of ethyl acetate was transferred, and this procedure was repeated three times. The final ethyl acetate layer was concentrated and freeze-dried.
Animals.
Seven-week-old male Balb/c mice were obtained from SamTaco (Osan, South Korea) and housed in a temperature- and humidity-controlled pathogen-free animal facility with a 12-h light-dark cycle. All animals underwent 1 week of acclimatization prior to experiments. Doses were determined using a calculation extrapolated from the difference in body surface area between a mouse and a human [13]. The recommended dose of AM for a 60 kg adult human is 8-24 g of raw plant per day or 3-9 g of extract per day (based on the extraction yield in this study). The dose for a mouse can be determined as follows: a human equivalent dose of 50-150 mg/kg × 12.3 (the conversion coefficient) = a mouse dose of 615-1,845 mg/kg. Based on this dose range, we chose doses of 500 mg/kg and 2,500 mg/kg for this study. Animals were randomly allocated to experimental groups.
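The body-surface-area scaling above reduces to a one-line calculation; a minimal R sketch reproducing it is shown below.

```r
# Human-to-mouse dose conversion by body surface area, as described above.
human_dose <- c(low = 50, high = 150)  # mg/kg, human equivalent dose range
km <- 12.3                             # conversion coefficient from ref. [13]
mouse_dose <- human_dose * km          # yields 615 and 1,845 mg/kg
mouse_dose
```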
AME was given via oral gavage once daily for 10 days. There were no differences in body weight among groups during the experimental period. The animal protocol was approved by the Institutional Animal Care and Use Committee at Kyung Hee University (KHUASP(SE)-15-012), and mice were cared for according to the specifications of the US National Research Council Guide for the Care and Use of Laboratory Animals (1996).
Macrophage Preparation.
For macrophage isolation, mice were injected intraperitoneally with 2 ml of 3.5% sterile thioglycollate (BD, Sparks, MD, USA) 4 days before sacrifice. At the end of the experiment, mice were sacrificed by cervical dislocation and peritoneal exudate cells were aseptically isolated by peritoneal lavage with cold DMEM (Hyclone, Logan, UT, USA) containing 10% fetal bovine serum (FBS; Hyclone) and 1% penicillin-streptomycin. After centrifugation, cells were resuspended and counted using a TC20 Cell Counter (Bio-Rad Laboratories, Hercules, CA, USA).
Splenocyte Preparation.
For splenocyte isolation, spleens were aseptically obtained at the end of the experiment. After disrupting the spleen between glass slides in RPMI 1640 (Hyclone) with 1% FBS and 1% penicillin-streptomycin, the cells were filtered through a 70-μm cell strainer. After centrifugation, red blood cells were lysed using BD PharmLyse lysing buffer (BD Biosciences, San Diego, CA, USA). Cells were resuspended in RPMI 1640 with 10% FBS and 1% penicillin-streptomycin and counted using a TC20 Cell Counter.
Intraperitoneal Injection of LPS.
Mice were intraperitoneally injected with 1.3 mg/kg LPS (serotype 055:B5, Sigma) at the end of the experiment. After 1 h, mice were anesthetized with ether and blood was collected by cardiac puncture. Serum was obtained and stored at −20°C until analysis.
Cell Culture.
Peritoneal exudate cells were plated in 6-well plates or 60-mm dishes and incubated overnight at 37°C. After removal of nonadherent cells, attached cells were stimulated with 100 ng/ml LPS for 24 h. Supernatant and cells were collected for subsequent assays. Splenocytes were plated in 24-well plates and stimulated with 2 μg/ml anti-CD3 antibody (BD Biosciences) for 48 h. Supernatant was collected for cytokine analysis.
Cytokine Analysis.
The levels of TNF-α, IL-6, IFN-γ, and IL-4 in supernatants and sera were determined using BD OptEIA mouse ELISA sets (BD Biosciences) according to the manufacturer's protocol.
RNA Isolation and Real-Time PCR.
Total RNA was isolated using a FavorPrep Total RNA Purification Kit (Favorgen Biotech, Pingtung, Taiwan), and cDNA was reverse-transcribed using a High Capacity RNA-to-cDNA kit (Applied Biosystems, Foster City, CA, USA). Diluted cDNA was mixed with Power SYBR Green PCR Master mix (Applied Biosystems) and 2 pmol of primers specific for iNOS, COX-2, or GAPDH. Amplification of cDNA was performed using a StepOnePlus real-time PCR system (Applied Biosystems). After initial heat denaturation at 95°C for 10 min, PCR conditions were set at 95°C for 15 sec and 60°C for 1 min for 40 cycles. For each PCR, a corresponding mRNA sample without reverse transcription was included as a negative control. Quantification of cDNA copy number was achieved using a standard curve.
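For readers less familiar with absolute quantification, a minimal sketch of the standard-curve step follows; the Ct values, copy numbers, and use of numpy are illustrative assumptions, not data from this study.

```python
# Illustrative standard-curve quantification; the Ct values and copy numbers
# below are invented for the sketch and are not data from this study.
import numpy as np

std_copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])   # known template copies
std_ct = np.array([30.1, 26.8, 23.4, 20.1, 16.7])  # measured Ct values

# Fit Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1              # amplification efficiency

def copies_from_ct(ct: float) -> float:
    """Invert the standard curve to estimate copy number in an unknown."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"unknown at Ct 24.9 ~ {copies_from_ct(24.9):.2e} copies")
```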
Statistical Analysis. Data were presented as mean ± standard error of the mean (SEM). Two-sided Student's t-test or two-way analysis of variance was applied to compare differences between groups. If the statistical analysis showed that differences between multiple groups were significant, Tukey's post hoc test was used for further comparison. All statistical analyses were performed with IBM SPSS software version 22.0 (IBM, Chicago, IL, USA). P-values less than 0.05 were considered significant.
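As an illustration of the comparisons described above, a minimal sketch follows; scipy/statsmodels stand in for SPSS here, all values are hypothetical, and a one-way ANOVA is shown for brevity where the study also used two-way ANOVA.

```python
# Hypothetical data illustrating the group comparisons described above.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control = rng.normal(100, 15, 8)
low = rng.normal(110, 15, 8)      # 500 mg/kg group
high = rng.normal(125, 15, 8)     # 2,500 mg/kg group

# Two-sided Student's t-test between two groups
t, p = stats.ttest_ind(control, high)
print(f"t = {t:.2f}, P = {p:.3f}")

# ANOVA across all groups, followed by Tukey's post hoc test if significant
f, p = stats.f_oneway(control, low, high)
if p < 0.05:
    values = np.concatenate([control, low, high])
    groups = ["control"] * 8 + ["500"] * 8 + ["2500"] * 8
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```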
Content of Atractylenolide I and Atractylenolide III in AME.
Among the known quality control markers, atractylenolide I and atractylenolide III are verified anti-inflammatory compounds in vitro [9]. Atractylenolide I and III in the ethyl acetate fraction from AME were tentatively identified by spiking with authentic standards and comparing retention times and UV-visible spectral patterns. The HPLC chromatograms are shown in Figure 1. The content of atractylenolide I and atractylenolide III in AME was 0.0338 mg/g extract and 0.565 mg/g extract, respectively.
Effect of Oral Administration of AME on Scavenger Receptor Expression in Mouse Peritoneal Exudate Cells.
Intraperitoneal injection of thioglycollate is commonly used to induce sterile peritonitis and enrich peritoneal macrophages from mice in laboratories [14]. The majority of peritoneal macrophages are derived from blood monocytes [15]. We collected peritoneal exudate cells from AME-treated mice using this method. CD11b was used as a marker for macrophages. Scavenger receptors such as SRA, CD36, and LOX-1 are upregulated during monocyte-to-macrophage differentiation [16-18]; therefore we examined the expression of these proteins. SRA, CD36, and LOX-1 were almost exclusively expressed in CD11b(+) cells (Figures 2(a)-2(c)). The percentage of SRA(+)CD11b(+) cells in the control group was 66%, and treatment with 500 mg/kg and 2,500 mg/kg AME significantly increased this population to 69% and 76%, respectively. The frequencies of CD36(+)CD11b(+) and LOX-1(+)CD11b(+) cell populations in the control group were 95% and 14%, respectively, and AME induced no significant changes in either population. The increase in the SRA(+)CD11b(+) cell population indicates that AME can stimulate the differentiation of blood monocytes into macrophages in response to thioglycollate.
Effect of Oral Administration of AME on Surface CD86 Expression in LPS-Stimulated Macrophages.
Costimulatory molecules such as CD86 on macrophages are required to strengthen the crosstalk between macrophages and Th cells [6]. Peritoneal macrophages isolated from AME-treated mice were stimulated with LPS for 24 h and the membrane expression of CD86 was measured using flow cytometry. Stimulation with LPS increased the mean fluorescence intensity of CD86 from 5.24 to 11.24 (Figure 3). The mean fluorescence intensity of CD86 in the 500 and 2,500 mg/kg groups significantly decreased to 10.14 and 10.59, respectively. These results indicate that AME may affect the interaction between macrophages and Th cells.
Effects of Oral Administration of AME on the Inflammatory Cytokine Response in Macrophages and Serum.
We first examined whether oral administration of AME affects the inflammatory response of macrophages. Peritoneal macrophages from the control or high-dose AME group were stimulated with LPS for 24 h and production of TNF-α and IL-6 in the supernatant was measured. There was no difference in the level of TNF-α secretion between control and AME groups, but the level of IL-6 was increased in the AME group (Figure 4(a)). We also found that AME did not induce any alterations in iNOS and COX-2 gene expression in cells stimulated with LPS (Figure 4(b)). Next, we examined the systemic response of AME-treated mice to intraperitoneal LPS stimulation. AME decreased serum levels of TNF-α and IL-6 by 20% and 47%, respectively (Figure 5). These findings indicate that the anti-inflammatory activity of AME may occur independently of the modulation of macrophages.
Effects of Oral Administration of AME on Splenic T Cell and B Cell Populations and MHC II Expression.
To determine whether oral administration of AME alters adaptive immune cells, we analyzed the percentages of splenic CD4 and CD8 T cells and B cells in the control and AME groups. The CD4(+) T cell population significantly increased from 23.4% to 27.2% and 26.9% in the 500 and 2,500 mg/kg groups, respectively (Figures 6(a) and 6(d)). No differences were observed in CD8 T cell and B cell populations (Figures 6(a), 6(b), and 6(d)). MHC class II molecules are required for the presentation of antigens to CD4 T cells. We analyzed the splenic expression of the mouse MHC class II molecules IA/IE and found that the mean fluorescence intensity of MHC II molecules significantly increased from 65.3 to 68.9 in the 2,500 mg/kg AME group (Figures 6(c) and 6(e)). These findings suggest that AME induces alterations in the adaptive immune system.
Effects of Oral Administration of AME on T Cell Proliferation and Th1/Th2 Cytokine Response in Splenocytes.
We investigated the function of splenic T cells following AME treatment. Splenocytes isolated from control or AME groups were stimulated with anti-CD3 antibody, a mitogen that activates the whole population of T cells irrespective of antigen receptor specificity. Treatment with anti-CD3 antibody for 48 h increased optical density 2.6-fold as measured by MTS assay. There was no difference in proliferation induced by anti-CD3 antibody between control and AME groups (Figure 7(a)). IFN-γ and IL-4 are representative cytokines for Th1 and Th2 cells, respectively. We evaluated the secretion of IFN-γ and IL-4 in splenocytes stimulated with anti-CD3 antibody. A significant reduction in IFN-γ secretion was observed in the 500 mg/kg AME group, whereas IL-4 secretion was significantly increased in the 2,500 mg/kg group (Figure 7(b)). Although no dose-dependent effect was observed, AME tended to promote the Th2 response.
Discussion
In traditional Chinese medicine, Qi boosting herbs are expected to enhance the immune system. In this study, we specifically focused on the inflammatory responses of macrophages and T cells isolated from mice that were orally given AME.
Thioglycollate-induced sterile peritonitis was first introduced in 1964 by Gallily et al. and since then has been the most commonly used method for the isolation of primary macrophages [19]. On day 4 after intraperitoneal injection of thioglycollate, the total number of peritoneal exudate cells increases approximately 5-fold [15]. Among these cells, macrophages are the predominant cell type, followed by eosinophils [15]. The source of the increased number of peritoneal macrophages in thioglycollate-injected mice is bone marrow-derived blood monocytes [15]. Upregulation of scavenger receptors occurs during the process of monocyte-to-macrophage differentiation [16-18]. Scavenger receptors, one type of macrophage innate receptor, are responsible for phagocytosis and specifically recognize polyanionic ligands [20]. We used CD11b and several scavenger receptor markers to identify monocyte-derived macrophages in peritoneal exudate cells and found that the CD11b(+)SRA(+) cell population was significantly increased in the AME group. This suggests that administration of AME promotes recruitment and differentiation of blood monocytes to macrophages in response to thioglycollate.
LPS is recognized by the toll-like receptor (TLR)-4/MD-2 complex. TLR4 induces inflammatory responses through two adaptor molecules, MyD88 and TRIF [21]. The MyD88-dependent signaling pathway activates NF-κB and mitogen-activated protein kinase (MAPK) to induce inflammatory genes such as TNF-α and IL-6 [22]. The TRIF-dependent signaling pathway activates interferon regulatory factor-3 to produce IFN-β, which is required for the upregulation of costimulatory molecules [23,24]. The TRIF signaling pathway also participates in the activation of NF-κB and MAPK but in a delayed manner relative to the MyD88-dependent pathway [22]. Upregulation of costimulatory molecules is solely TRIF-dependent, while inflammatory responses are co-dependent on MyD88 and TRIF [24]. There was no inhibitory effect on the inflammatory markers tested in macrophages from the AME group. Instead, CD86 expression was decreased. CD86 on macrophages binds CD28 on Th cells to strengthen the activity of Th cells [1]. Our results indicate that oral administration of AME does not affect NF-κB- and MAPK-dependent inflammatory responses in macrophages but specifically interferes with the TRIF-dependent pathway that leads to CD86 expression only. Further studies are needed to evaluate whether AME causes alterations in a pathologic situation where macrophages and Th cells predominate.
The LPS-stimulated macrophage system is a very common in vitro model for evaluating the anti-inflammatory activity of natural products or drug candidates. Using this model, it is easy to obtain the desired result with lipid-soluble components because they can easily penetrate the cell membrane. Our data showed that peritoneal macrophages isolated from mice that were orally given AME did not show anti-inflammatory effects ex vivo, contradicting previously reported in vitro results [9,10]. In contrast, anti-inflammatory activity of AME was observed in the serum response of TNF-α and IL-6 upon intraperitoneal injection of LPS. This systemic anti-inflammatory activity is least likely to be mediated by the modulation of macrophages. One of the differences between the in vivo and in vitro conditions is that LPS is carried in the circulation by several lipoproteins and then cleared by hepatocytes in vivo, whereas this event cannot be mimicked in vitro [25,26]. LPS clearance can prevent overstimulation of the liver macrophages [26]. Whether the systemic anti-inflammatory activity of AME is related to LPS clearance in the liver remains to be determined. A similar result was obtained in peritoneal macrophages isolated from mice given oral Astragalus membranaceus water extract (unpublished data). Astragalus membranaceus and AM belong to the same Qi-tonifying herb category. At this time, we do not know whether in vivo anti-inflammatory activity that does not involve macrophage modulation is unique to these medicinal plants or a common property inducible by Qi-tonifying medicinal plants, and we need to accumulate more data to draw any conclusions.
In addition, Li et al. reported that atractylenolide I and 14-acetoxy-12-senecioyloxytetradeca-2E,8E,10E-trien-4,6-diyn-1-ol, a type of polyacetylenic compound isolated from AM, have molecular structures that interact with the membrane-bound glucocorticoid receptor [11]. According to their study, 300 mg/kg of oral atractylenolide I and 30 mg/kg of oral polyacetylene were the minimum doses required to show anti-inflammatory effects [11]. The amount of atractylenolide I in 2,500 mg/kg AME is merely 0.097 mg/kg, a dose far below the minimum required.
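The dose arithmetic in this paragraph can be checked against the HPLC content reported earlier; the back-of-envelope sketch below reproduces the order of magnitude, with the small difference from the quoted 0.097 mg/kg presumably reflecting the exact content value used.

```python
# Back-of-envelope check of the atractylenolide I intake at the high AME dose,
# using the HPLC content reported above (0.0338 mg/g extract).
content_mg_per_g = 0.0338        # atractylenolide I per g of AME
ame_dose_g_per_kg = 2.5          # 2,500 mg/kg AME

intake_mg_per_kg = content_mg_per_g * ame_dose_g_per_kg
print(f"~{intake_mg_per_kg:.3f} mg/kg atractylenolide I")
# ~0.085 mg/kg: the same order of magnitude as the ~0.097 mg/kg quoted above,
# and roughly 3,000-fold below the 300 mg/kg minimum effective dose in [11]
```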
Moreover, loss of polyacetylenes must have occurred during AME preparation. It is possible that AME contains unidentified glucocorticoid-like compounds that contribute to its systemic anti-inflammatory activity.
A sufficient number of T cells is required to maintain a proper immune response. Under normal conditions, the total T cell number is maintained by the generation of naïve T cells in the thymus and the turnover of peripheral naïve T cells and memory T cells. Mice and humans undergo thymus atrophy with age, and accordingly naïve T cell output declines in both species [27,28]. However, in terms of naïve T cell maintenance, mice produce naïve T cells throughout their lifetime, whereas adult humans maintain this population by peripheral naïve T cell division [29]. In addition, the lifespan of mouse naïve T cells is 40-fold shorter than that of their human counterparts [30]. Memory T cells are maintained by intermittent division [31]. The precise survival and homeostatic proliferation mechanism of naïve and memory T cells is not completely defined but involves signals from the TCR/MHC complex and cytokines such as IL-7 and IL-15 [31,32]. The prolonged effect of vaccines depends on memory T cells, whereas treatment of lymphopenic conditions requires naïve T cells. We did not determine whether the splenic CD4 T cell population that increased upon AME treatment consisted of naïve CD4 T cells or memory CD4 T cells. A detailed characterization of the cell fraction that responds to AME will help to specify which situation is better suited for the application of AME.
Of note, concurrent upregulation of MHC class II molecules in the spleen occurred in the AME group. MHC class II molecules are necessary to present antigens to CD4 T cells. We routinely found that the majority of MHC class II-expressing cells in the spleen are B cells and the remaining cells are macrophages and dendritic cells. We did not clarify which types of cells showed upregulation of MHC class II molecules after AME administration. Nonetheless, increases in both CD4 T cell number and MHC class II molecule expression in the spleen indicate that supplementation of AME contributes to the systemic maintenance of CD4 T cells. The role of IL-4 under physiological conditions is to enhance the antibody response by promoting the survival and proliferation of B cells and to provide defense against helminth infection [33-35]. Splenocytes from the AME groups showed increased IL-4 production during T cell activation ex vivo concurrent with decreased IFN-γ production. These results suggest that under normal conditions AME promotes the Th2 response. In contrast, oral administration of AM-derived glycoprotein promotes the Th1 response while decreasing the Th2 response in an allergic model [36]. It is not clear whether this compound represents the entire activity of AM. Further study is required to determine whether AME prevents or aggravates pathologic Th2 responses.
Conclusion
In this study, we observed changes in the responses of macrophages and T cells in normal mice following oral administration of AME. AME enhanced thioglycollate-induced monocyte differentiation in the peritoneum and suppressed LPS-induced TNF-α and IL-6 levels in serum. Unlike these systemic anti-inflammatory effects, anti-inflammatory effects were not evident in macrophages isolated from the AME group, except for alterations in the expression of costimulatory molecules. AME also influenced the adaptive immune system by increasing the number of CD4 T cells and the expression of MHC class II molecules and promoting the Th2 response over the Th1 response.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2018-09-16T02:44:23.320Z | 2018-08-07T00:00:00.000 | {
"year": 2018,
"sha1": "59afc1ca19e7cfc270adca71414ed09bf38c732d",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ecam/2018/4041873.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f98581210b01ef8c98f350848f2086b7a58e973a",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
128363173 | pes2o/s2orc | v3-fos-license | Impact of Parkinson’s disease on the efficiency of masticatory cycles: Electromyographic analysis
Background This study evaluated the efficiency of masticatory cycles by means of the linear envelope of the electromyographic signal of the masseter and temporalis muscles in individuals with Parkinson's disease. Material and Methods Twenty-four individuals were assigned to two groups: with Parkinson's disease, average ± SD 66.1 ± 3.3 years (n = 12), and without the disease, average ± SD 65.8 ± 3.0 years (n = 12). The MyoSystem-I P84 electromyograph was used to analyze the activity of masticatory cycles through the linear envelope integral in habitual mastication of peanuts and raisins and non-habitual mastication of Parafilm M®. Results There was a statistically significant difference (P ≤ 0.05) between individuals with Parkinson's disease and without the disease in non-habitual mastication of Parafilm M®, in the right temporal muscle (P = 0.01); habitual mastication of peanuts, in the right temporal muscle (P = 0.02), left temporal muscle (P = 0.03), and right masseter muscle (P = 0.01); and habitual mastication of raisins, in the right temporal muscle (P = 0.001), left temporal muscle (P = 0.001), right masseter muscle (P = 0.001), and left masseter muscle (P = 0.03). Conclusions These results suggest that Parkinson's disease interferes with the electromyographic activity of the masticatory cycles by reducing muscular efficiency. Key words: Parkinson's disease, electromyography, masticatory efficiency, masseter muscle, temporal muscle.
Introduction
Parkinson's disease is a chronic degenerative and progressive disease that produces changes in the central nervous system. These changes involve the basal nuclei, specifically the striatum, which is composed of the caudate nucleus and putamen. In addition, this disease leads to the death of dopaminergic neurons in the substantia nigra (1). The elderly population has increased considerably due to an increase in life expectancy. Therefore, chronic-degenerative diseases have become more common, thus forming a new epidemiological profile (2). Parkinson's disease commonly affects individuals older than 50 years of age, although it can be diagnosed in young adults and adolescents (3). Approximately 2% of the world's population over 65 years of age is affected by the disease, which is considered the second most common senile disease. In addition, Parkinson's disease is equally prevalent across ethnic groups and social classes and has a higher prevalence in males (4). The physiological changes that Parkinson's disease triggers can compromise functions and balance and promote alterations in the stomatognathic system (5). This complex anatomical system has structures specialized for specific functions, and any alteration due to degenerative diseases can produce a functional imbalance (6). Neurodegenerative diseases cause motor alterations that affect the musculoskeletal system (7). Previous studies have reported that over 50% of the individuals diagnosed with Parkinson's disease exhibit eating disorders and dysfunction in the masticatory process (8); however, there is little information in the literature about the impact of this disease on the function of the masticatory muscles. This study is necessary to better understand the functional alterations of the stomatognathic system and observe the impact of the disease on the masticatory system. The hypothesis of the study is that Parkinson's disease negatively influences the performance of the masticatory muscles. The aim was to evaluate the efficiency of masticatory cycles during the chewing of soft and hard foods of Parkinson's patients compared to individuals without the disease.
Material and Methods
-Sample This research was approved by the Committee of Ethics in Research with Humans at the Claretian University Center of Batatais, São Paulo, Brazil (protocol # 61113916.6.0000.5381). All participants signed a free and informed consent form, in accordance with Resolution 466/2012 of the National Health Council. Individuals with Parkinson's disease were diagnosed by a neurologist and were recruited from the Department of Neurology, Claretian University Center, Batatais, São Paulo, Brazil. The Hoehn and Yahr scale was used to determine the degree of impairment in individuals with Parkinson's disease (9), and the Mini Mental State Examination (MMSE) was employed to evaluate cognitive function (10); the mean score was 28.08 points. A trained professional administered these examinations. A post hoc sample size calculation was conducted considering a level of α = 0.05, a power of 100% for the main outcome Parafilm M® chewing (mean of the right temporal muscle, PG = 1.90 [0.34] and CG = 0.98 [0.11]), and an effect size of 3.64. The minimal sample size obtained was 24 volunteers (12 for each group). Sample size calculation was performed with the G*Power 3.0.10 software. A total of 54 individuals with Parkinson's disease, between 50 and 70 years of age, were evaluated in this study. Following the exclusion criteria, 12 individuals with Parkinson's disease (average ± SD 66.1 ± 3.3 years), Angle Class I, with a contact pattern in maximum intercuspal position with tooth-to-two-tooth occlusion and presence of all permanent teeth (except third molars), were selected (grades I to III of the Hoehn and Yahr scale). The disease-free group (n = 12; average ± SD 65.8 ± 3.0 years) was composed of dentate individuals, without temporomandibular dysfunction (RDC/TMD), who were age-, gender-, weight-, and height-matched with individuals in the Parkinson's disease group. There were no statistically significant differences between the groups in age (P = 0.80), weight in kg (with Parkinson's disease: 69.08 ± 3.87; disease-free: 67.75 ± 2.70, P = 0.34), or height in cm (with Parkinson's disease: 166 ± 0.08; disease-free: 168 ± 0.08, P = 0.61). The exclusion criteria involved temporomandibular dysfunction (RDC/TMD, n = 08); absence of complete dentition (n = 9); presence of ulcers and cutaneous hypersensitivity; a cognitive deficit (MMSE score below 24, n = 03); neurological and systemic (decompensated) pathologies associated with the disease; stages IV and V of the Hoehn and Yahr scale (n = 05); inadequate occlusal conditions (i.e., teeth with periodontal mobility, n = 10); and use of anti-inflammatories, analgesics, and muscle relaxants that could interfere with neuromuscular physiology (n = 7). In addition, it was required that individuals with Parkinson's disease used the drug levodopa to control their symptoms (12).
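The reported effect size of 3.64 can be reproduced by treating the bracketed values in the sample-size calculation as standard deviations; a minimal sketch:

```python
# Reproduces the effect size used in the post hoc sample-size calculation,
# interpreting PG = 1.90 [0.34] and CG = 0.98 [0.11] as mean [SD].
import math

m_pg, sd_pg = 1.90, 0.34
m_cg, sd_cg = 0.98, 0.11

pooled_sd = math.sqrt((sd_pg ** 2 + sd_cg ** 2) / 2)
cohens_d = (m_pg - m_cg) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")   # ~3.64, as reported in the text
```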
-Electromyographic analysis - masticatory efficiency The electromyographic signals of the masticatory cycles were collected using the MyoSystem-I P84 portable electromyograph (DataHominis, Uberlandia, Minas Gerais, Brazil), with analog bandpass filters for a cutoff frequency of 10-1000 Hz, a sampling frequency of 4 kHz, and 12-bit resolution. Silver/silver chloride bipolar surface electrodes (DataHominis Ltda., Model DHT-EASD) with a diameter and inter-electrode distance of 10 mm were used. To reduce impedance, the skin was cleaned with alcohol a few minutes before the surface electrodes were positioned (13). A rectangular stainless steel electrode (3 × 4 cm) (Bio-logic Systems Corp., Mundelein, IL, USA) was also used as a reference electrode to reduce noise acquisition, fixed on the right wrist of the individual. Surface electrodes were positioned according to the recommendations of Surface EMG for Non-Invasive Assessment of Muscles (SENIAM) (14). A quiet environment was maintained while recording the electromyographic data of the masticatory cycles through the ensemble average analysis, which consists of using the integrated amplitude values of the linear envelope of the masticatory cycles. The electromyographic signals were acquired in the clinical condition of free habitual mastication of food with hard consistency (5 g of peanuts) and soft consistency (5 g of raisins). The non-habitual mastication was obtained with chewing of an inert material consisting of a folded sheet of paraffin (Parafilm M®; 18 × 17 × 4 mm, weight 245 mg) placed on both sides of the dental arches. During the non-habitual mastication, subjects were asked to make a short opening movement so as to reduce the effects of the change in length × tension in the muscle, typical of dynamic records. The data of all the masticatory cycles were collected over 10 s (15). The individuals remained seated, feet resting on the ground and palms on the thighs, with an erect neck in order to keep the Frankfurt plane parallel to the ground. The individuals were instructed to remain calm and to keep the inspiratory and expiratory movements well paused (16). At the beginning of the masticatory process, the initial cycles showed a variation in the pattern of the mandibular movement. Therefore, to calculate the results obtained from the integral of the linear envelope of the masticatory cycles, the initial masticatory cycles were eliminated while the central cycles of the electromyographic record were maintained. Three initial masticatory cycles were excluded since, in the initial phase of the masticatory process, the first cycles vary considerably during mandibular movements (15). -Method Error The method error of the habitual and non-habitual masticatory efficiency measurements was calculated with the Dahlberg formula using the records of five individuals from two different sessions, with a seven-day intersession interval. There was a small variation in the measurements between the first and second sessions for electromyography (3.74%). Intra-rater reliability was analyzed using the intraclass correlation coefficient (ICC). Reliability for electromyographic activity was considered good (ICC = 0.936).
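For illustration, a minimal numerical sketch of the linear-envelope integral and the Dahlberg method error is given below; the 4 kHz sampling rate and 10-1000 Hz band-pass follow the text, while the filter order and the 5 Hz envelope low-pass cutoff are assumptions made for the sketch.

```python
# Sketch of the linear-envelope integral used as the efficiency index.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 4000  # Hz, sampling frequency of the electromyograph

def linear_envelope_integral(emg: np.ndarray, fs: int = FS) -> float:
    b, a = butter(4, [10, 1000], btype="band", fs=fs)
    band_passed = filtfilt(b, a, emg)          # band-pass the raw EMG
    rectified = np.abs(band_passed)            # full-wave rectification
    b, a = butter(4, 5, btype="low", fs=fs)    # assumed 5 Hz envelope cutoff
    envelope = filtfilt(b, a, rectified)       # linear envelope
    return float(np.sum(envelope) / fs)        # discrete time integral

def dahlberg_error(session1: np.ndarray, session2: np.ndarray) -> float:
    """Dahlberg method error between two measurement sessions."""
    d = session1 - session2
    return float(np.sqrt(np.sum(d ** 2) / (2 * len(d))))
```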
-Statistical Analysis After obtaining the masticatory efficiency data, a normality test was run, and the data were considered normally distributed. The efficiency of the masticatory cycle was analyzed after calculating the integral of the linear envelope of the normalized electromyographic signal. The data were normalized by dental clenching in maximum voluntary contraction and were then statistically analyzed (Statistical Package for the Social Sciences, version 22.0 for Windows, IBM Inc.; Chicago, IL, USA). A descriptive analysis was run to obtain the mean and standard error for each variable. A Student's t-test (independent samples), with a significance level of 5% and a 95% confidence interval, was used to determine if there were significant differences between the groups.
Results
Table 1 shows the normalized electromyographic data for habitual mastication (peanuts and raisins) and non-habitual mastication (Parafilm M®) for the groups. Normalized electromyographic means for the masticatory muscles were higher in the individuals with Parkinson's disease compared to the individuals without the disease. Specifically, there was a statistically significant difference (P ≤ 0.05) for the right temporal muscle (P = 0.01) in the mastication of Parafilm M®; the right temporal muscle (P = 0.02), the left temporal muscle (P = 0.03), and the right masseter muscle (P = 0.01) in the mastication of peanuts; and the right temporal muscle (P = 0.001), the left temporal muscle (P = 0.001), the right masseter muscle (P = 0.001), and the left masseter muscle (P = 0.03) in the mastication of raisins.
Discussion
This study showed that individuals with Parkinson's disease demonstrated significant changes in masticatory efficiency. The lower masticatory efficiency in people with Parkinson's disease is relevant, because most of these individuals suffer from considerable involuntary weight loss, masticatory difficulties, and malnutrition, mainly due to inadequate food intake (8). These changes were observed in the non-habitual mastication of Parafilm M® and habitual mastication of soft and consistent food, and in the larger normalized electromyographic means of the masticatory cycles in the individuals with Parkinson's disease group versus the individuals without the disease group. These data are characteristic of dysfunction and a lack of efficiency, and we can conclude that, to perform the same function, the energy expenditure was higher in the individuals with Parkinson's disease group compared to the individuals without the disease group. The dynamic short-excursion movement of the buccal opening, in order to reduce the effects of changing length and muscular tension, was used to identify non-habitual mastication (17). We observed that the individuals with Parkinson's disease group demonstrated increased electromyographic activity of the masticatory cycles relative to the individuals without the disease group for all the muscles evaluated.
[Table 1. Mean (standard error) and statistical significance (P ≤ 0.05)* of the normalized electromyographic data of the right masseter (RM), left masseter (LM), right temporal (RT), and left temporal (LT) muscles for the Parkinson's disease (PG) and without the disease (CG) groups in habitual and non-habitual chewing.]
To perform the masticatory movement, it is necessary to perceive different textures of food, and proprioception is greater when chewing soft foods compared to consistent foods (18). The results of this research suggest that, to perform masticatory movements, individuals with Parkinson's disease recruited a greater number of muscle fibers compared to individuals without the disease. This led to an increased energy expenditure and suggests that there is a functional impairment in the individuals with Parkinson's disease (5).
For dynamic movement to have adequate muscle efficiency, force production is necessary but with less activation of muscle fibers (19). The alteration of body posture in Parkinson's disease changes the cervical and mandibular biomechanical relationships in the static and dynamic positions of the stomatognathic system (6). All the individuals with Parkinson's disease in this study used levodopa, a drug that can directly influence musculoskeletal change. This medication is used for long-term control of symptoms, but with continuous use motor deterioration occurs. This deterioration leads to the development of fluctuations, making it difficult to maintain dopaminergic presynaptic terminals, thus reducing the capacity of the skeletal striated musculature to store dopamine (20,21). As the disease progresses and dopamine storage capacity decreases, the effect of the drug is impaired. Short-term responses begin to appear and slow the motor response (22). This event may explain the musculoskeletal changes that occur in these individuals, and the increased recruitment of motor end plates to perform the masticatory process. One of the side effects of levodopa is dyskinesia (23), which may alter muscle activity and possibly the masticatory pattern. Accordingly, differences between patients and controls may be due to side effects of the treatment, not the disease. It is known that over the years the drug loses its systemic effect and new doses must be adjusted. Further studies should be performed to verify over time the effect of levodopa use on the skeletal muscle system, including chewing. Another important factor is that individuals with Parkinson's disease demonstrate postural alterations that can produce bodily imbalance by modifying the position of the head, which consequently changes the mandibular position (24). This alteration can affect the masticatory pattern, leading to muscular compensation and a decrease in masticatory efficiency (25). In this study, the posture of individuals with Parkinson's disease was not evaluated. Patients with a tremor in the face show a deviation in the movement of the mandible, which is a result of the level of dopamine present in the brainstem (26). This situation may also determine muscular compensations and, therefore, functional alterations in the masticatory process. Among the changes caused by Parkinson's disease, weight loss is common, often associated with lack of appetite related to the side effects of medications, which contribute to low food intake. These effects may also be associated with lower masticatory efficiency. Therefore, analyzing the results obtained in this research made it possible to observe functional changes in the performance of the stomatognathic system, specifically in the efficiency of the masticatory cycles in individuals with the disease. Healthcare professionals should take great care when proposing rehabilitative treatments, especially in relation to food intake and nutritional status of individuals with Parkinson's disease. In addition, these treatments should include multidisciplinary follow-up, involving a nutritionist, speech therapist, physiotherapist, dental surgeon, and physician. As the number of individuals with Parkinson's disease is small, further studies should be performed with a larger sample.
Conclusions
Based on the results of this study, we suggest that the masticatory cycles, as assessed by the electromyographic signal of the masseter and temporal muscles during habitual mastication of soft and consistent food, are less efficient in individuals with Parkinson's disease when compared to individuals without the disease. | 2019-04-24T13:03:28.611Z | 2019-04-24T00:00:00.000 | {
"year": 2019,
"sha1": "ca54a9e03300c14f56cb853ba6d63601ef332843",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4317/medoral.22841",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ca54a9e03300c14f56cb853ba6d63601ef332843",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25140458 | pes2o/s2orc | v3-fos-license | Impaired Trafficking of Connexins in Androgen-independent Human Prostate Cancer Cell Lines and Its Mitigation by α-Catenin*
Gap junctions, composed of connexins, provide a pathway of direct intercellular communication for the diffusion of small molecules between cells. Evidence suggests that connexins act as tumor suppressors. We showed previously that expression of connexin-43 and connexin-32 in an indolent prostate cancer cell line, LNCaP, resulted in gap junction formation and growth inhibition. To elucidate the role of connexins in the progression of prostate cancer from a hormone-dependent to -independent state, we introduced connexin-43 and connexin-32 into an invasive, androgen-independent cell line, PC-3. Expression of these proteins in PC-3 cells resulted in intracellular accumulation. Western blot analysis revealed a lack of Triton-insoluble, plaque-assembled connexins. In contrast to LNCaP cells, connexins could not be cell surface-biotinylated and did not reside in the cell surface derived endocytic vesicles, in PC-3 cells, suggesting impaired trafficking to the cell surface. Intracellular accumulation of connexins was observed in several androgen-independent prostate cancer cell lines. Transient expression of α-catenin facilitated the trafficking of both connexins to the cell surface and induced gap junction assembly. Our results suggest that impaired trafficking, and not the inability to form gap junctions, is the major cause of communication deficiency in human prostate cancer cell lines.
Cell-cell and cell-matrix adhesion is involved not only in maintaining the structural integrity of cells in tissues but also in governing a wide array of cell behavior (1-3). Cell-cell and cell-matrix adhesion molecules frequently cluster at specific contact areas to form cell structures, such as adherens junctions, tight junctions, desmosomes, and focal adhesion plaques (1-5). Loss of these junctions has profound consequences on cellular growth, differentiation, and apoptosis during neoplastic development in several tumor model systems (1). Recent studies have shown that expression of cell-cell and cell-matrix adhesion molecules is decreased in prostate cancer (PCA) cell lines and that impairment or loss in the expression of these molecules is associated with the malignant potential of prostate epithelial cells (6-12). Direct support for the role of cell adhesion molecules in controlling the invasive behavior of PCA cells has come from studies showing that forced expression of α-catenin, an E-cadherin-associated protein, and C-CAM (7-10) in human PCA cell lines mitigates their malignant phenotype. These studies suggest that direct cell contact-dependent interactions among epithelial cells in prostate tumors are likely to play an important role in PCA progression.
In addition to cell-cell and cell-matrix adhesion junctions, epithelial cells also form a highly specialized class of cell junctions called gap junctions, which are membrane appositions that are traversed by clusters of channels through which molecules up to 1 kDa can directly pass between adjoining cells (13). The channels are bicellular structures formed by the members of a family of about 20 related but distinct proteins named connexins (Cxs). Connexins first assemble into hexamers to form connexons that align and join with connexons in adjacent cells to form cell-cell channels, which get clustered to form gap junctions (14-16). In addition to its well documented role in the maintenance of tissue homeostasis and synchronization of cellular behavior, it has been proposed that altered gap junctional communication and/or impaired expression of Cxs may be one of the genetic or epigenetic changes involved in the initiation and progression of neoplasia (13, 17-19). This notion has been well supported by several independent studies (20-24) showing that forced expression of Cx genes in several Cx-deficient tumor cell lines attenuates their malignant phenotype. A recent study (25) showed that transgenic mice deficient in Cx32, a Cx abundantly expressed in liver, developed a higher incidence of age-related liver tumors and were more susceptible to the tumor-promoting effect of liver-specific chemical carcinogens.
Although a number of tumor suppressor genes and oncogenes have been implicated in the development of PCA, no consistent genetic or epigenetic changes are known to be associated with its initiation and progression. What is clear, however, is that the incidence of PCA increases with age and is characterized by the progression from an indolent, slow-growing, and hormone (androgen)-dependent state to an invasive, hormone-independent state (10,11,26). Thus, identification of cellular and molecular events that play formative roles in driving the expansion and clonal selection of incipient PCA cells from an androgen-dependent state to an androgen-independent state is essential for understanding PCA progression and designing strategies for its intervention (11,26). Our previous studies showed that, compared with normal prostate epithelial cells, gap junctional communication in PCA cell lines was either absent or reduced (27) and that forced expression of Cx32 and Cx43, the two Cxs expressed by the well differentiated epithelial cells of the prostate, into an indolent, androgen-dependent and Cx-deficient human PCA cell line, LNCaP, inhibited growth, retarded tumorigenicity, and induced differentiation (20). These studies also showed that Cxs were localized at cell-cell contact areas in epithelial cells of well differentiated prostate tumors, and they began to accumulate intracellularly as the tumors progressed to more invasive and undifferentiated stages with an eventual loss of expression in advanced stages (20).
Prostate epithelial cells from the most invasive forms of human androgen-independent prostate carcinomas show frequent impairment and/or deletion of cadherins and their associated proteins, such as α-, β-, and γ-catenins (10,11,26). Because bi-directional signaling between cell adhesion molecules and Cxs may be important in initiating the formation of gap junctions, we investigated if forced expression of Cx43 and Cx32 into an invasive, androgen-independent PCA cell line PC-3, with deficient cadherin-mediated adhesion due to the deletion of the α-catenin gene (7), would abrogate its malignant phenotype in a manner similar to that of LNCaP cells, which have functional cadherin-mediated adhesion. Our findings showed that, in contrast to androgen-dependent PCA cell lines, expression of Cx43 and Cx32 in PC-3, and several other androgen-independent cell lines, resulted in the intracellular accumulation of Cxs due to defective trafficking and that transient expression of α-catenin, a cadherin-associated protein, triggered trafficking and assembly of Cxs into gap junctions.
EXPERIMENTAL PROCEDURES
Materials-Cell culture media were obtained from Invitrogen. Defined fetal bovine and dialyzed fetal calf sera were from HyClone Laboratories (Logan, UT). Tissue culture plasticware was from Nalge Nunc International (Rochester, NY). Seakem GTG-agarose was from FMC BioProducts (Rockland, ME). TRIzol reagent, geneticin (G418), RNA molecular weight markers, and FuGENE 6 transfection reagent were from Invitrogen. Yeast tRNA, poly(A), poly(C), and herring sperm DNA were from Roche Molecular Biochemicals. Fluorochrome-conjugated secondary antibodies were from Jackson ImmunoResearch (West Grove, PA). Ultrapure formamide was from Clontech. Lucifer Yellow (LY, lithium salt), rhodamine- and Alexa 594-conjugated dextrans (Mr 10,000, lysine-fixable), and Alexa 488- and Alexa 594-conjugated mouse and rabbit secondary antibodies were from Molecular Probes (Eugene, OR). The SuperSignal chemiluminescent substrate was from Pierce (Rockford, IL). Enhanced chemiluminescent kit (ECL plus) was from Amersham Biosciences. GeneScreen Plus nylon membranes and [32P]dCTP were from PerkinElmer Life Sciences. Tran35S-label was from ICN Biomedical (Irvine, CA). Restriction enzymes and pre-stained protein molecular weight markers were from New England Biolabs (Beverly, MA). BCA reagent for protein determination was from Pierce.
Cell Culture-Human PCA cell lines PC-3 (ATCC CRL 1435) and LNCaP (ATCC CRL 1740) were grown in RPMI containing 7.5% defined fetal bovine serum in an atmosphere of 5% CO₂, 95% air. Stock cultures were maintained in 12 ml of RPMI in 75-cm² flasks and sub-cultured weekly at 1.5 × 10⁵ cells/flask with a medium change at 3- or 4-day intervals as described previously (27). LNCaP clones stably expressing Cx43 and Cx32 were isolated and grown in culture medium supplemented with 200 μg/ml G418 (active) as described (20). The growth characteristics and the hormonal dependence of these cell lines/clones have been described previously (20,27). The retroviral packaging cell lines PA317 (ATCC CRL 9078) and PG13 (ATCC CRL 10686) were grown in RPMI containing 10% defined fetal bovine serum as described previously (20,28).
Cells were immunostained after fixing with methanol/acetone, paraformaldehyde, or Histochoice (depending on the antibody) as described previously (20,27,29). Briefly, 5 × 10⁴ cells were seeded in 6-well clusters containing glass coverslips and allowed to grow to confluence. They were washed 3 times with PBS, fixed for 10 min, and immunostained at room temperature with various antibodies. Secondary antibodies (rabbit or mouse) conjugated with fluorescein, CY2, CY3, Texas Red, Alexa 488, and Alexa 594 were used as appropriate. Images of immunostained cells were acquired with a Leica DMRIE microscope (Leica Microsystems, Wetzlar, Germany) equipped with a Hamamatsu ORCA-ER CCD camera (Hamamatsu City, Japan). For colocalization studies, serial z sections (0.5 μm) were collected and analyzed using image processing software (Openlab 3.01; Improvision, Inc., Lexington, MA).
Retroviral Vectors and Plasmids-Plasmids containing cDNAs for various Cxs were obtained from several sources as described previously (20, 27-29). Retroviral vector LXSN (33) was a generous gift of Dr. Dusty A. Miller (Fred Hutchinson Cancer Center, Seattle, WA). Retroviral vectors containing rat Cx43 and Cx32 in sense orientation were constructed as described (20) and designated LXSNCx43S and LXSNCx32S, respectively. The retroviral 5′ long terminal repeat and SV40 virus promoter drive the expression of Cx cDNAs and neomycin phosphotransferase, respectively (33). Plasmids pECFP-N1, pEGFP-N1, pEGFP-N3, and pEYFP-N1 were purchased from Clontech (Palo Alto, CA). Chimeras of these fluorescent proteins fused to the carboxyl termini of Cx43 and Cx32 were constructed according to standard molecular biology methods. The details of these constructs and their function and assembly into functional gap junctions will be described elsewhere. Plasmid pcDNA3-α-catenin (chicken) was constructed by cloning a chicken 3.5-kb HindIII to XbaI fragment, encompassing the coding region of chicken α-catenin cDNA from the pUC21 vector, into the HindIII and XbaI site of pcDNA3.
Retrovirus Production and Infection of Cells-Control and recombinant retroviruses harboring Cx cDNAs were produced in the amphotropic packaging cell line PA317 and, to improve the efficiency of retroviral-mediated gene transfer into human cells, in the gibbon ape leukemia virus envelope-based packaging cell line PG13 as described previously (20). The titers of recombinant retroviruses produced from the most stable and the best producing PA317 and PG13 clones were 2 × 10⁶ and 4 × 10⁵ G418-resistant colony-forming units/ml, respectively, as measured in rat Morris hepatoma cells (20,28). PC-3 cells were infected with an equivalent titer (4 × 10⁵ colony-forming units/ml) of recombinant retroviruses LXSN (Neo control), LXSNCx32S, and LXSNCx43S, and selected in G418 (400 μg/ml, active) for 2-3 weeks as described previously (20,28). Glass cylinders were used to isolate individual G418-resistant clones, which were expanded, frozen, and maintained in G418 (200 μg/ml).
Isolation of RNA and Northern Blot Analysis-Total RNA was extracted from two 10-cm dishes of confluent cells using TRIzol reagent as described previously (29). Ten to 20 μg of RNA was analyzed on 1% agarose/formaldehyde gels, transferred to nylon filters, pre-hybridized, and hybridized with 32P-labeled DNA probes for various Cxs or glyceraldehyde-3-phosphate dehydrogenase, washed in 0.1× SSC at 65°C for 1-2 h, and the membranes exposed to Fuji-RX x-ray film for 1-24 h. The labeled DNA probes were prepared using gel-purified fragments (100 ng) and a random priming kit (Roche Molecular Biochemicals). The probes were labeled to a specific activity of 10⁸-10⁹ cpm/μg DNA and used at 10⁶ cpm/ml hybridization buffer.
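As a small worked example of the probe dilution arithmetic described above (all values assumed within the reported ranges):

```python
# Probe dilution arithmetic for the hybridization step; the specific activity
# and buffer volume are assumed values within the ranges reported above.
spec_activity_cpm_per_ug = 5e8    # within the 1e8-1e9 cpm/ug range
target_cpm_per_ml = 1e6
buffer_ml = 10.0

probe_ug = target_cpm_per_ml * buffer_ml / spec_activity_cpm_per_ug
print(f"{probe_ug * 1000:.0f} ng of labeled probe for {buffer_ml:.0f} ml buffer")
# 20 ng of probe for 10 ml of hybridization buffer at 5e8 cpm/ug
```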
Western Blot Analysis-Triton X-100 solubility/insolubility of Cxs was assayed essentially as described by Musil and Goodenough (34,35). Preparation of cell lysates and Western blot analysis of Cxs was as described previously (20,27) except for the following modifications. After centrifugation at 50,000 × g for 50 min on a tabletop Beckman ultracentrifuge (model TL-100) to separate Triton X-100-insoluble and -soluble fractions, the Triton X-100-insoluble pellet was dissolved in 500 μl of solubilization buffer (70 mM Tris/HCl, pH 6.8, 8 M urea, 2.5% SDS, and 0.1 M dithiothreitol). Total Triton X-100-soluble and -insoluble fractions were mixed with 4× Laemmli buffer to a final concentration of 1× and boiled at 100°C for 5 min (Cx43) or incubated at room temperature for 60 min (Cx32) before separating by SDS-PAGE.
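For illustration, the Triton X-100-insoluble (plaque-assembled) fraction can be expressed from densitometric band intensities as follows; the helper function and signal values are hypothetical, not measurements from this study.

```python
# Hypothetical densitometry helper: fraction of connexin recovered in the
# Triton X-100-insoluble (plaque-assembled) pool; signal values are invented.
def insoluble_fraction(soluble_signal: float, insoluble_signal: float) -> float:
    return insoluble_signal / (soluble_signal + insoluble_signal)

print(f"PC-3:  {insoluble_fraction(90, 10):.0%} insoluble")
print(f"LNCaP: {insoluble_fraction(30, 70):.0%} insoluble")
```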
Detergent (Triton X-100) Extraction of Connexin-43 and Connexin-32 in Situ-Cells were seeded in 6-well clusters containing glass coverslips as described above (see "Antibodies and Immunostaining"), allowed to grow to confluence, and were washed once with PBS at room temperature. Half of the coverslips were extracted with 2 ml of 1% Triton X-100 (weight/weight) in solution B (30 mM HEPES, 140 mM NaCl, 1 mM MgCl₂, 1 mM CaCl₂, 3 mM glucose) containing a mixture of protease inhibitors for 30 min at 4°C with occasional gentle shaking. Control cells were treated identically except for the omission of 1% Triton X-100. Cells were immunostained with the antibodies against Cx43 and Cx32 as described above (see "Antibodies and Immunostaining").
Metabolic Labeling and Cell Surface Biotinylation-PC-3 and LNCaP cells (2.5 × 10⁵) were seeded on 6-cm dishes and grown to 80-90% confluence. Cells were incubated for 30 min at 37°C in methionine- and cysteine-free DMEM (Invitrogen) containing 2 mM L-glutamine and 5% dialyzed fetal calf serum (pulse medium), and labeled with 0.15 mCi/ml Tran35S-label (ICN Biomedicals, Irvine, CA) for 30 min at 37°C (2.5 ml per dish). Cells were chased in normal cell culture medium supplemented with 0.5 mM methionine and 0.5 mM cysteine (chase medium) at 37°C. Arrival of newly synthesized Cxs at the cell surface at various time intervals was assayed by incubating cells in freshly prepared EZ-Link™ Sulfo-NHS-SS-Biotin reagent (Pierce) in PBS at 0.5 mg/ml for 30 min at 4°C and quenched with 15 mM glycine. Lysis of monolayers, immunoprecipitation of Cxs, and the recovery of biotinylated Cxs were done essentially as described by VanSlyke and Musil (36) with the following modification. After cell lysis, 1/5th of the total immunoprecipitate was used for detecting the total and 4/5th for detecting the biotinylated fraction of Cxs at various time intervals. The samples were boiled for 5 min in 1× SDS-PAGE sample buffer and resolved by SDS-PAGE. Quantitation was done by PhosphorImaging (STORM 840, Amersham Biosciences) using ImageQuant software.
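Because unequal portions of the immunoprecipitate were loaded (1/5 for total connexin, 4/5 for the biotinylated fraction), the PhosphorImager signals must be rescaled before computing the cell-surface fraction; a hypothetical sketch with illustrative signal values:

```python
# Hypothetical sketch of the cell-surface fraction calculation, rescaling for
# the unequal split of the immunoprecipitate (1/5 loaded for total connexin,
# 4/5 for the biotinylated fraction). Signal values are illustrative.
def surface_fraction(total_signal: float, biotin_signal: float,
                     total_split: float = 0.2, biotin_split: float = 0.8) -> float:
    total_full = total_signal / total_split      # rescale to whole lysate
    biotin_full = biotin_signal / biotin_split
    return biotin_full / total_full

print(f"{surface_fraction(total_signal=1000, biotin_signal=400):.0%} at surface")
# 400/0.8 = 500 and 1000/0.2 = 5000, i.e., 10% of labeled connexin at the surface
```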
Dextran Uptake and Endosome Labeling-PC-3 and LNCaP cells were seeded on glass coverslips as described above (see "Antibodies and Immunostaining") and grown to 60-70% confluence. Endocytosis of dextrans was achieved by incubating cells with 10 mg/ml of Alexa Fluor 594-Dextran (Mr 10,000, lysine-fixable, Molecular Probes) in DMEM at 37°C for 30 min. Cells were then rinsed briefly with PBS and incubated with DMEM without dextrans for 30 min at 37°C, rinsed again three times with PBS before fixing (with 2% paraformaldehyde), and immunostained for connexins as described (see "Antibodies and Immunostaining").
Transient Transfection of PC-3 Cells-FuGENE 6 transfection reagent was used to transfect cells with the various plasmids mentioned above according to the manufacturer's instructions. Briefly, cells (5 × 10⁴) were seeded on glass coverslips in 6-well culture plates (for immunostaining) or 100-mm dishes (for Western blotting) containing 2 and 12 ml of complete medium, respectively, and incubated at 37°C. After 16 h, cells were transfected with various plasmids using a 3 μl of FuGENE 6 reagent : 1 μg of DNA complex in 100 μl of serum-free medium. The transfection complex was preincubated for 15 min at room temperature before transfection. The medium was replaced by fresh medium 5 h after transfection, and cells were fixed for immunostaining or lysed for Western blotting (see above) after 24-48 h.
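A hypothetical helper for scaling the reagent : DNA ratio used above is sketched below; proportional scaling of the serum-free medium volume is an assumption, not stated in the text.

```python
# Hypothetical helper that scales the FuGENE 6 : DNA ratio used above
# (3 ul reagent per 1 ug DNA). Proportional scaling of the serum-free
# medium volume (100 ul per 1 ug DNA) is an assumption for this sketch.
def transfection_mix(dna_ug: float) -> tuple[float, float]:
    """Return (FuGENE 6 volume in ul, serum-free medium volume in ul)."""
    return dna_ug * 3.0, dna_ug * 100.0

fugene_ul, medium_ul = transfection_mix(4.0)   # e.g., scaling up to 4 ug DNA
print(f"{fugene_ul:.0f} ul FuGENE 6 in {medium_ul:.0f} ul serum-free medium")
```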
Communication Assays-Gap junctional communication was assayed either by microinjecting the fluorescent tracer Lucifer Yellow (443 Da, 5% aqueous solution) as described previously (28,38) or by the scrape-loading method (39). Briefly, LY was microinjected into test cells by Eppendorf InjectMan and FemtoJet microinjection systems (models 5271 and 5242, Brinkmann Instruments) mounted on a Leica DMIRE2 microscope. The microinjected cells were viewed with the aid of a Sony 3-CCD color video camera (Sony Corp., Japan), and the number of fluorescent cells (excluding the injected one) scored approximately 10 min after injection served as an index of junctional transfer. For scrape loading, cell culture medium from freshly confluent 6-cm dishes was removed and replaced with 2.5 ml of medium containing rhodamine-conjugated fluorescent dextrans (10 kDa, 1 mg/ml; fixable) and LY (0.05%).
RESULTS
Overexpression of Connexin-43 and Connexin-32 in PC-3 Cells-We chose PC-3 cells for these studies because they are well characterized, highly invasive, and androgen-independent cells (10,11,26). Moreover, our previous study (27) showed that they communicated poorly, formed few gap junctions, expressed a low level of Cx43 mRNA and protein, and expressed no other Cxs. Connexin-43 and Cx32 were expressed in PC-3 cells by infecting with control (LXSN) and Cx-harboring (LXSNCx43S and LXSNCx32S) recombinant retroviruses. Expression of Cxs was confirmed in several randomly isolated G418-resistant clones by Northern and Western blot analysis, and the data from one representative clone are shown in Fig. 1. Northern blot analysis of total RNA isolated from Cx43- and Cx32-expressing clones showed abundant expression of retrovirally transcribed 4.4-kb Cx43 (Fig. 1A, labeled R-Cx43; lane PC-43-1) and Cx32 (Fig. 1B, labeled R-Cx32; lane PC-32-1) mRNAs that were not expressed by the parental (lane PC-WT) and control clones (lanes PC-NEO-4 and PC-NEO-1). In addition, parental PC-3 cells and all retrovirally transduced clones expressed a low level of endogenous 3-kb human Cx43 mRNA (Fig. 1A, labeled E-Cx43), which was detected only upon overexposure of blots (data not shown). Western blot analysis (Fig. 1, C and D) showed that parental PC-3 cells (lane labeled PC-WT) and the PC-3 clone isolated after infection with LXSN (lane labeled PC-NEO) expressed neither Cx32 nor Cx43, whereas clones isolated after infection with LXSNCx43S and LXSNCx32S expressed abundant Cx43 (Fig. 1C, lane labeled PC-43-1) and Cx32 (Fig. 1D, lane labeled PC-32-1). Taken together, these data show that retroviral transduction of Cx32 and Cx43 in PC-3 cells results in abundant expression of both Cxs at the mRNA and protein level.
Intracellular Accumulation of Connexin-43 and Connexin-32 in PC-3 Cells-To determine whether Cx43-and Cx32-expressing PC-3 clones formed gap junctions, we immunostained cells from several clones with Cx-specific antibodies. Fig. 2 shows the typical immunostaining pattern for Cx43 and Cx32 in one such clone. The results showed that a major portion of both Cxs remained localized in the intracellular compartments (Fig. 2, 1st and 3rd rows), and punctate dots characteristic of gap junctional plaques were rarely observed at cell-cell contact areas (Fig. 2, white arrows in the middle and right panel of top row). Moreover, in many cells intense intracellular immunostaining was observed not only in the perinuclear areas but also throughout the cytoplasm (Fig. 2, yellow arrows, middle panels, 1st and 3rd rows) and near the cell surface membrane (not shown). In contrast to PC-3 cells, both Cxs were assembled into gap junctions in indolent, Cx-expressing LNCaP clones, and very little intracellular accumulation was observed (Fig. 2, 2nd and 4th rows, junctions indicated by arrows). Because similar results were obtained with all other independently isolated PC-3 clones, we chose only 1 clone for each Cx subtype for further study.
Detergent Insolubility of Intracellular Connexin-32 and Connexin-43 in PC-3 Cells-A widely accepted biochemical attribute of Cxs upon assembly into gap junctional plaques is their insolubility in Triton X-100 (34-36). To corroborate the immunocytochemical data, and to rule out the possibility that intracellular accumulation was due to the formation of gap junctions in the intracellular stores, we extracted Cx32- and Cx43-expressing PC-3 cells in situ with 1% Triton X-100 for 30 min (see "Experimental Procedures") before immunostaining with Cx-specific antibodies. We also analyzed the Triton X-100 solubility of Cxs by Western blot analysis. Only a small fraction of total Cx43 (Fig. 3A) and Cx32 (Fig. 3B) was converted into a Triton X-100-insoluble form in invasive PC-3 cells, whereas a major fraction of both Cxs was Triton X-100-insoluble in LNCaP cells and in RL-CL9 cells (29), which form abundant gap junctions composed of Cx43 (compare lanes labeled T, S, and I under PC-3, LNCaP, and RL-CL9). In PC-3 cells, nearly all intracellular Cx32- and Cx43-specific immunostaining was lost upon in situ extraction (Fig. 3C, compare Control and Extracted in 1st and 3rd rows), whereas in LNCaP cells there was no effect (compare Control and Extracted in 2nd and 4th rows).
Intracellular accumulation of Cx43 and Cx32 in PC-3 cells was not an artifact of overexpression, because androgen-dependent Cx-expressing LNCaP clones seemed to express more Cxs compared with PC-3 clones (see Fig. 3) even though an equal amount of total protein was analyzed by Western blot analysis (see also "Discussion"). The results shown in Fig. 3 suggest that in contrast to LNCaP cells, both Cx32 and Cx43 accumulate intracellularly in PC-3 cells and that intracellularly accumulated Cxs were not assembled into gap junctions ectopically based on the assumption that Triton X-100 solubility of gap junctions formed intracellularly is not significantly different from those formed at the cell surface. Moreover, these data also agree with our previous in vivo studies, which showed intracellular accumulation of Cx43 and Cx32 and/or loss of formation of gap junctions in epithelial cells of aggressive prostate carcinomas (20).
Communication in Connexin-expressing PC-3 Clones-To examine whether formation of only a few immunocytochemically detectable gap junctions was sufficient to promote gap junctional communication in Cx-expressing PC-3 clones, we studied the junctional transfer of the 443-Da fluorescent tracer, LY, by microinjection and scrape loading. Fig. 4 shows representative photographs of junctional transfer of LY in control and Cx-expressing PC-3 cells. There was no significant difference in the junctional transfer of LY in the Cx-expressing PC-3 cells compared with the control cells (see Fig. 4 legend for details). The data obtained with the microinjection were independently corroborated by the scrape-loading method (39), which measures the communication capacity of several hundred cells simultaneously (Fig. 4 legend). Similar data were obtained with three other control and Cx43- and Cx32-expressing PC-3 clones (data not shown). Taken together, the data suggest that, in contrast to Cx-expressing LNCaP clones and normal RL-CL9 cells (20), reintroduction of Cx43 and Cx32 into PC-3 cells does not significantly enhance communication.
Cell Surface Biotinylation, Dextran Uptake, and Trafficking of Connexins-Besides intense perinuclear staining, we also observed punctate immunostaining scattered throughout the cytoplasm and near the cell surface (see Fig. 2, cells with yellow arrows). These observations prompted us to investigate whether intracellular accumulation of Cxs was due to their inability to traffic from intracellular stores to the cell surface or due to internalization and recycling back into the cytoplasmic stores after arrival at the cell surface because of lack of cell-cell contacts conducive for the formation of gap junctions. To test this notion, Cx-expressing PC-3 and LNCaP cells were cell surface-biotinylated and immunoprecipitated after metabolic labeling with Tran35S-label for detecting the total and biotinylated fraction of connexins at various time intervals (see "Experimental Procedures"). Fig. 5 shows that Cx43 and Cx32 were readily biotinylated in LNCaP cells in three independent experiments, whereas no biotinylated fraction of connexins could be detected in PC-3 cells (see figure legends for the efficiency of biotinylation of connexins). The failure to detect a significant amount of the biotinylated form of Cx32 and Cx43 in PC-3 cells was not due to inefficient cell surface biotinylation of proteins as judged visually by immunofluorescence microscopy using Alexa fluor-conjugated streptavidin (data not shown). These data suggest that trafficking of Cxs from the cytoplasm to the cell surface is impaired in PC-3 cells.
Because of the inefficient biotinylation of membrane proteins at 4°C, particularly of Cxs after their assembly into gap junctions (36), and to dispel the possibility that intracellular accumulation was caused by endocytosis of Cxs after their cell surface arrival and not by defective trafficking, we performed the following experiment. Dextrans were allowed to accumulate in the endocytic vesicles in PC-3 and LNCaP cells at 37°C for 30 min (shorter time intervals were not investigated), and colocalization of dextrans with Cxs was studied by fluorescence microscopy as described under "Experimental Procedures." The results of Fig. 6 show that in PC-3 cells, Cxs did not colocalize with the cell surface-derived endosomes, as there was a clear demarcation between endocytosed dextrans and Cx immunostaining. On the other hand, in LNCaP cells, appreciable cytoplasmic immunostaining for Cxs was colocalized with the endocytosed dextrans (Fig. 6, higher magnification), indicating normal Cx trafficking to the cell surface followed by their endocytosis. These data suggest that in PC-3 cells, intracellular accumulation of Cxs is caused by impaired trafficking of Cxs to the cell surface and not by endocytosis. Previous studies (37, 40) have shown degradation of Cxs by both proteasomal and lysosomal pathways. Therefore, we next investigated whether intracellular accumulation of Cx43 and Cx32 was due to their resistance to degradation via these pathways. Fig. 7 shows that treatment with leupeptin, an inhibitor of lysosomal function, and ALLN, an inhibitor of the proteasomal pathway (40, 41), further increased intracellular accumulation of Cx32 and Cx43 as judged by immunocytochemical (Fig. 7A) and by Western blot analyses (Fig. 7, B and C). Similar data were obtained with lactacystin (not shown), which is a more specific inhibitor of the proteasomal pathway (41). Although we did not measure the half-life of intracellular Cx32 and Cx43 in PC-3 cells by pulse-chase analysis, our data suggest that intracellular Cxs are constantly degraded by lysosomal and proteasomal pathways regardless of whether or not they traffic to the cell surface.
Trafficking and Assembly of Connexins into Gap Junctions Induced by α-Catenin-The Triton X-100 solubility and poor cell surface biotinylation of Cxs in PC-3 cells suggested that their trafficking to the cell surface was impaired. Moreover, the data in Fig. 7 further ruled out the possibility that intracellular accumulation of Cxs was caused by their aggregation into plaques resistant to degradation via proteasomal and lysosomal pathways. Because cadherin-mediated adhesion is nonfunctional in PC-3 cells (42), due to a deletion of the α-catenin gene (7), we asked if restoring cell-cell adhesion, which is conducive to the formation of gap junctions, would trigger the trafficking of Cxs from intracellular stores to the cell surface. Chicken α-catenin, which shows 90% homology to human α-catenin (43), was introduced transiently into Cx-expressing PC-3 clones using Lipofectin. In addition, yellow fluorescent protein chimeras of Cx43 and Cx32 were introduced into wild type PC-3 cells together with chicken α-catenin. Because of the interplay between junctional complexes, we examined expression of E- and N-cadherin after transient expression of α-catenin. Fig. 8 shows that α-catenin expression not only triggered the trafficking of Cxs from the intracellular stores to the cell surface but also induced their assembly into gap junctions as judged by immunocytochemical analysis (Fig. 8A) and by Triton X-100 insolubility assay of Cx32 (Fig. 8B) and Cx43 (Fig. 8C). We do not as yet know whether gap junctions formed after transient transfection of α-catenin into PC-3 cells are functional or nonfunctional. Our data also show that PC-3 cells expressed N-cadherin (and not E-cadherin), and its expression level did not seem to change after transient expression of α-catenin (data not shown). Furthermore, we found that transient expression of α-catenin also increased the expression of ZO-1 at the areas of cell-cell contact (data not shown).
FIG. 2. Immunolocalization of Cx32 and Cx43 in invasive PC-3 and indolent LNCaP prostate cancer cell lines. Cells were immunostained with polyclonal anti-Cx43 and monoclonal anti-Cx32 antibodies as described under "Experimental Procedures." Note that most immunostaining in invasive PC-3 cells that express Cx43 (first row) and Cx32 (third row) is intracellular, with only a few punctate dots characteristic of gap junctional plaques seen at the cell-cell contact areas (first row, arrows). In indolent LNCaP cells that express Cx43 (second row) and Cx32 (fourth row), no intracellular immunostaining is observed, and most punctate dots are localized at the cell-cell contact areas (second and fourth rows, arrows). Note also that in some PC-3 cells (yellow arrows) scattered punctate dots are observed throughout the cytoplasm. No intracellular immunostaining was observed in non-permeabilized cells or when primary antibodies were omitted.
Intracellular Accumulation of Connexins Is a Common Feature of Androgen-independent Prostate Cancer Cell Lines-We next investigated whether impaired trafficking of Cxs from intracellular stores to the cell surface was observed in other androgen-independent human PCA cell lines. Chimeras of Cx32 and Cx43 fused to yellow fluorescent proteins were introduced transiently into several androgen-independent PCA cell lines, using the androgen-dependent LNCaP cells as a positive control (Fig. 9 and Table I). The site of localization of these chimeras as well as formation of gap junctions were examined at 24 and 48 h post-transfection. Fig. 9 shows that transient transfection of Cx32-YFP and Cx43-YFP into ALVA-31, an androgen-independent PCA cell line, resulted in intracellular localization, whereas transfection into LNCaP cells resulted in localization of Cxs at cell-cell contact areas, indicative of the formation of gap junctions (indicated by arrows in bottom row). The data from several such experiments are summarized in Table I (see legends) and corroborate the data of Fig. 9.
DISCUSSION
The study reported here was motivated by our previous findings. First, we had observed that epithelial cells in well differentiated prostate tumors assembled Cxs into gap junctions whereas those in invasive and poorly differentiated prostate tumors did not and, instead, contained Cxs that were localized intracellularly (20). Second, we had shown that the forced expression of Cx32 and Cx43, the two Cxs expressed by the well differentiated epithelial cells of the normal prostate (44, 45), into an indolent, androgen-dependent, but Cx-deficient PCA cell line, LNCaP, inhibited growth, induced differentiation, and retarded tumorigenicity (20). The main findings of our present study show that, in contrast to indolent LNCaP cells, forced expression of Cx43 and Cx32 into an invasive cell line, PC-3, and several other androgen-independent PCA cell lines, resulted in intracellular accumulation. The accumulation of Cxs was probably not caused by the formation of ectopic gap junctions in the intracellular compartments, their aggregation into plaques resistant to degradation via proteasomal and lysosomal pathways, or their endocytosis upon arrival at the cell surface, but by impaired trafficking to the cell surface. Most significantly, transient expression of α-catenin, a cadherin-associated protein that links cadherins to the cytoskeleton elements (46), induced trafficking of Cx32 and Cx43 from intracellular stores to the cell surface.
In contrast to LNCaP cells, we failed to detect significant amounts of cell surface as well as endocytosed Cx32 and Cx43 in PC-3 cells, suggesting that intracellular accumulation was caused by impaired or inefficient trafficking and not by endocytosis and internalization. Moreover, the following evidence substantiates our notion that intracellular accumulation was not caused by an artifact of overexpression of Cx32 and Cx43 or by mutations in the Cxs that might impede trafficking (49-52) but rather by the pathological state of the PC-3 cells themselves. First, transient transfection of α-catenin abrogated impaired trafficking of Cx32 and Cx43 and induced their assembly into gap junctions (Fig. 8). Second, neither Cx32 nor Cx43 accumulated significantly in LNCaP cells, which expressed more Cxs than PC-3 cells when an equal amount of total protein was analyzed by Western blot analysis (Fig. 3). Third, intracellular Cx32 and Cx43 continued to be degraded via proteasomal and lysosomal pathways, indicating that they had not aggregated into degradation-resistant plaques.
FIG. 3. Triton X-100 insolubility of Cx32 and Cx43 in invasive PC-3 and indolent LNCaP prostate cancer cell lines. Triton X-100-soluble and -insoluble extracts from PC-3 and LNCaP cells were analyzed by Western blot analysis as described under "Experimental Procedures." Note that only a small fraction of total Cx43 (A) and Cx32 (B) was not soluble in Triton X-100 in invasive PC-3 cells, whereas a major fraction of total Cx43 and Cx32 was Triton X-100-insoluble in LNCaP cells and in RL-CL9 cells, which are derived from rat liver and form abundant junctions composed of Cx43 only (29). C, Triton X-100 insolubility of Cx32 and Cx43 in PC-3 and LNCaP cells in situ. PC-3 and LNCaP cells were extracted in situ with 1% Triton X-100 and immunostained with anti-Cx43 and anti-Cx32 antibodies. Note that in Cx43- and Cx32-expressing PC-3 cells, nearly all intracellular Cxs are lost upon in situ extraction compared with Cx-expressing LNCaP cells, which form abundant gap junctions. T, total; S, Triton X-100-soluble fraction; and I, Triton X-100-insoluble fraction.
The intrinsic and extrinsic determinants crucial for regulating gating, degradation, trafficking, and assembly of Cxs into gap junctions are poorly understood (19, 40, 53-55). Impaired trafficking of wild type Cx43 and Cx32, leading to their intracellular accumulation, in PC-3 cells and its abrogation by α-catenin raises several intriguing questions about the molecular mechanisms involved in the trafficking of Cxs and their assembly into gap junctions. It has been proposed that bidirectional signaling between cell adhesion molecules and Cxs may be important in initiating the formation of gap junctions (34, 56). Consistent with this notion, several studies have shown that restoration of cadherin-based cell-cell adhesion induces the assembly of Cxs into gap junctions (56), and conversely, its abolition impedes that assembly (57, 58). Because previous studies (7) showed that transient expression of α-catenin in PC-3 cells triggered the recruitment of E-cadherin from the cytoplasm to the cell surface and restored cell-cell adhesion, we reasoned that intracellular accumulation of Cx32 and Cx43 might be caused by deficient E-cadherin-mediated cell-cell adhesion. Therefore, we expressed α-catenin in PC-3 cells. We chose to express α-catenin transiently because previous attempts to express it stably in PC-3 cells had failed.3 Although our data are consistent with the possibility that transient expression of α-catenin triggered trafficking of both Cx32 and Cx43 and induced gap junction formation, the mechanism involved is likely to be complex, and restoration of E-cadherin-based cell-cell adhesion alone may not be the cause.
First, in agreement with other studies, we found that both the parental and Cx-expressing PC-3 cells expressed N-cadherin and not E-cadherin at cell-cell contacts, suggesting that N-cadherin mediates cell-cell adhesion in these cells (7, 59, 60). Moreover, N-cadherin levels did not change significantly upon transient expression of α-catenin as assessed immunocytochemically and by Western blot analysis (data not shown). The reasons for the discrepancy between our data and those of others regarding E-cadherin expression in PC-3 cells are not understood. Second, we also found that α-catenin expression not only triggered the trafficking and assembly of Cxs into gap junctions but also recruited ZO-1, a tight junction and adherens junction-associated protein (72), to the cell surface. Third, ZO-1, Cx32, and Cx43 were not only co-localized with α-catenin at the areas of cell-cell contact but were also in the cytoplasm (data not shown). The significance of this finding is not understood at present and remains to be explored in the future (61, 62). Fourth, the trafficking of Cx32 and Cx43 from intracellular stores to the cell surface as well as formation of gap junctions could be induced significantly upon increasing intracellular cAMP levels,4 suggesting that alternative pathways exist, independent of α-catenin. α-Catenin has been shown to interact with vinculin, ZO-1, α-actinin, and actin, in addition to binding to E- or N-cadherin, and has also been proposed to be one of the key regulators of the structural integrity of several junctional complexes (63-65). Previous studies implicating cadherin-mediated cell-cell adhesion in facilitating the assembly of Cxs into gap junctions utilized cell lines in which Cxs trafficked normally, did not accumulate intracellularly, and in which only the capacity to assemble Cxs into gap junctions was defective (56, 68, 69). In addition, the role of adhesion in facilitating the formation of gap junctions is further complicated by studies showing inhibition of gap junction formation upon restoration of cell-cell adhesion mediated by N-cadherin (70) or vice versa (57, 58). Our data clearly show that N-cadherin is expressed at the regions of cell-cell contact (data not shown). Previous studies have shown that α-catenin controls the strength of cell-cell adhesion through its interaction with α-actinin and the cytoskeleton (see Refs. 63 and 72 for discussion).

FIG. 6. Cells were seeded on glass coverslips and allowed to grow to 60% confluence. They were then allowed to uptake dextrans by incubating with the culture medium containing Alexa fluor 594-conjugated dextrans for 30 min. After rinsing once with PBS, cells were incubated with the culture medium for 30 min, and the medium was removed. Cells were rinsed with PBS three times, fixed with 2% paraformaldehyde, and immunostained for Cxs as described under "Experimental Procedures." Note a clear demarcation between endocytosed dextrans and Cx43 immunostaining in PC-3 cells and detectable colocalization of Cx43 with the dextrans in LNCaP cells. The cells in right-most panels represent higher magnification of those marked by white arrows in the third panels. Similar data were obtained with Cx32-expressing PC-3 and LNCaP cells.
Because the adhesive force of the extracellular domain of N-cadherin is sufficient for increasing cell-cell adhesion and there was no difference in the strength of cell-cell adhesion between control and α-catenin-transfected cells as assessed by cell aggregation assay,5 it is possible that α-catenin induces the trafficking of Cxs and their assembly into gap junctions by modulating the state of cell-cell adhesion.
Recent findings (1, 5, 63-67) have shown that cell-cell or cell-matrix adhesions are a versatile and complex array of interactions, modulations, and signaling events rather than just adhesion and that there is extensive cross-talk between various junctional complexes formed as a result of these adhesions. Therefore, it is possible that α-catenin may induce trafficking of Cxs and their assembly into gap junctions by activating cellular signaling pathways in addition to modulating cell-cell adhesion (72). For example, conditional ablation of α-catenin in keratinocytes has been shown to increase proliferation by activating mitogen-activated protein kinase, independent of effects on cell-cell adhesion (73). Several signal transduction pathways have been shown to regulate the formation and dissolution of gap junctions (13-19). In this regard we note that mitogen-activated protein kinase has been shown to be constitutively activated not only in PC-3 cells but also in several other androgen-independent PCA cell lines in which α-catenin has been found to be deleted (74). Impaired trafficking of Cxs has been observed previously (50-52) in a number of other diseases, such as X-linked Charcot-Marie-Tooth disease, sensorineural hearing loss, erythrokeratoderma variabilis, visceroatrial heterotaxy, and cataract, but the impairment in most cases has been causally linked to mutations in the Cxs themselves and not to the pathological state of the cells. In several studies intracellular accumulation of Cxs was attributed to an artifact of overexpression and/or internalization and ectopic formation of gap junctions (47, 48). Therefore, the role of cell-cell adhesion in enhancing the formation of gap junctions indirectly via its effect on trafficking or via activation of multiple signal transduction pathways may have been overlooked. Our data implicate impaired trafficking of Cxs as an additional cause of communication deficiency in tumor cells and support the notion that regulating transport of Cxs by physiological effectors may be a mechanism to control the assembly of gap junctions, as suggested by others (71).
The incidence of PCA increases with age and is characterized by progression from an indolent, slow-growing, androgen-dependent state to an invasive, androgen-independent state (75-77). Because intracellular accumulation of Cx43 and Cx32 was observed only in epithelial cells from invasive prostate carcinomas (20) and in androgen-independent cell lines (present study), but not in an indolent and androgen-dependent cell line, LNCaP (20), it is tempting to speculate that the pathways governing the trafficking of Cxs and their subsequent assembly into gap junctions become altered during the progression of PCA from an androgen-dependent to -independent state. Our Cx43- and Cx32-expressing androgen-independent PC-3 cells, in which α-catenin is deleted, and androgen-dependent LNCaP cells, in which the cadherin-based adhesion system has remained intact (20), offer two in vitro systems that mimic the behavior of Cxs in vivo and should prove useful in elucidating the molecular mechanisms by which α-catenin controls the trafficking of Cx43 and Cx32 and their assembly into gap junctions.

5 P. P. Mehta and R. Govindarajan, unpublished observations.

FIG. 8. Transient transfection of α-catenin induces trafficking of connexins and formation of gap junctions. A, parental PC-3 cells were grown on glass coverslips and transiently transfected with pcDNA3-α-catenin and pCx43-YFP and pCx32-CFP. After 48 h, cells were fixed in 2% paraformaldehyde, permeabilized, and immunostained with antibody against α-catenin (red). The colocalization of Cxs (green) with α-catenin was studied after acquiring z section images and deconvolution. Note the formation of puncta (gap junctions) at the cell-cell contact areas in cells expressing α-catenin (arrows). The last row is a higher magnification of the third row. Similar data were obtained with connexin-32- and connexin-43-expressing PC-3 clones. Nuclei, stained with DAPI, are blue. B and C, connexin-32- and connexin-43-expressing PC-3 clones were seeded in 10-cm dishes and allowed to grow to 50% confluence for 3 days. Cells were transiently transfected with pcDNA3-α-catenin, and the assembly of gap junctions was studied by Triton X-100 insolubility 48 h after transfection. Control, untransfected cells; Transfected, cells transfected with pcDNA3-α-catenin. Note the appearance of Triton X-100-insoluble bands in cells transfected with pcDNA3-α-catenin. T, total; S, Triton X-100-soluble fraction; and I, Triton X-100-insoluble fraction; TRF, transfected; CNT, controls.

FIG. 9. Impaired trafficking of connexin-43 and connexin-32 in other human prostate cancer cell lines. Several androgen-independent human PCA cell lines (see Table I) were grown on glass coverslips and transiently transfected with pCx43-YFP and pCx32-CFP. Androgen-dependent LNCaP cells were used as a positive control. Cells were fixed in 2% paraformaldehyde 48 h post-transfection, and localization of connexin-43 and connexin-32 in the intracellular stores and at the cell-cell contact areas was observed under a fluorescence microscope. Note intracellular localization of Cx43 and Cx32 chimeras fused to fluorescent proteins in an androgen-independent PCA cell line (arrows) and formation of gap junctions at the cell-cell contact areas (arrows, bottom row) in androgen-dependent LNCaP cells. The nuclei (blue) were stained with DAPI.

| 2018-04-03T01:04:14.240Z | 2002-12-20T00:00:00.000 | {
"year": 2002,
"sha1": "94e9d803defa75828bb78385f963339117380582",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/277/51/50087.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "f4722f1721abc0ab1830356298d322e7b02f7577",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine",
"Physics"
]
} |
12715973 | pes2o/s2orc | v3-fos-license | The Effects of Group Play Therapy on Self-Concept Among 7 to 11 Year-Old Children Suffering From Thalassemia Major
Background Children suffering from thalassemia have higher levels of depression and lower levels of self-concept. Objectives The aim of this study was to determine if group play therapy could significantly increase self-concept among children with thalassemia major ages 7 to 11 years old in teaching hospitals of Golestan province, Iran, in 2012. Patients and Methods In this randomized, controlled clinical trial, 60 children with thalassemia major were randomly assigned to intervention (30 children) and control (30 children) groups. The intervention included eight 45 to 60 minute sessions during four weeks, during which the intervention group received group play therapy. The control group received no interventions. Self-concept was measured three times using the Piers-Harris children’s self-concept scale: before, immediately after, and a month after the intervention. Results For the intervention group, results showed that the mean self-concept score was significantly higher at the second point in time compared to the baseline (P < 0.001), going from 60.539 to 69.908. Likewise, comparing the first and third time points, the mean score significantly increased and reached 70.611 (P < 0.001). Furthermore, changes in the mean score from the second to the third time point, though non-significant (P = 0.509), followed the trend, going from 69.908 to 70.611. For the control group, comparing the first, second, and third time points did not result in any significant change in the mean score (P > 0.05). Conclusions The results showed that group play therapy improves self-concept in children suffering from thalassemia major.
Background
Thalassemia major is the most common genetic hemoglobinopathy and is accompanied by severe anemia. Hence, people suffering from this disease require frequent blood transfusions to survive, and, as age advances, psychological aspects regarding their quality of life become increasingly noteworthy (1). Prevalence of thalassemia major has reached 1 out of 100,000 people worldwide and 1 out of 10,000 in the European Union (2). The disease has spread across a vast geographical strip from the eastern Mediterranean to the Middle East, India, and southeast Asia and is growing outside of these regions due to mounting rates of immigration (3). In 1994, the World Health Organization declared that disease carriers will comprise a minimum of 8% of the world's population in the twenty-first century (4). In Iran, it is most prevalent alongside the Caspian Sea and the Persian Gulf (10%) (5), and it is found at rates of about 4% to 8% in other regions (6). Today, thalassemia major is no longer a childhood disease, as life expectancy has risen with the advancement of medical treatments. Now, patients face new concerns in life, such as settling down, becoming educated, and finding employment. Unfortunately, they constantly struggle with new complications of the disease and a higher risk of mental illnesses. If the patients fail to adapt to the disease and treatment strategies throughout childhood, they will encounter severe complications that will significantly influence their lives (7). This illness is an externally-imposed source of stress, of which complication and recurrence are major crises for the patient and his or her family (8). Similar to other chronic diseases, thalassemia has an important psychological aspect that can disturb the child's natural development (2). Very few studies have addressed the relationship between quality of life and the psychological state of children suffering from thalassemia major (1). Recent studies in developing countries such as India have reported a lack of attention to patients' psychosocial wellbeing as the main cause of their deaths. Thalassemia challenges the patients physically and mentally and disrupts their quality of life (9). It is not hard to believe that these patients undergo a great amount of psychological pressure and struggle with lack of self-confidence and self-concept. Research shows that children with chronic diseases who lack social support have low self-concept and a distorted body image. In general, in chronic diseases such as thalassemia, especially in children, the psychological aspect is highly important, because as children grow they need to confront psychopathological disorders, such as negative body image, anxiety, depression, and failure to control aggressive impulses (1). In 1997, Aidin et al. reported that 80% of the children with thalassemia major have at least one mental disorder that influences their self-concept (10). Studies show that children suffering from chronic diseases will adapt better to the disease if they have positive self-concept (11). In a study by Pradhan et al. (2003), results revealed that children with thalassemia have higher levels of depression and low levels of self-concept (10). Shaligram et al. (2007) also consider psychological problems as a significant deteriorating factor for quality of life among children suffering from thalassemia major (12). Therefore, identifying and managing the psychological problems of these patients can enhance their medical outcomes and quality of life.
Nurses understand the medical, psychological, and social needs of the children and have contact with their families. As such, they can play a crucial role in the children's adaptation to the disease (13). To effectively help a child with an unpleasant body image, the nurse must identify the resources that support the child's adaptation. Encouraging children to use their skills can help them to adapt to environmental changes and increase their self-esteem (11). One of the most important psychological and physical needs of children is to play games, which can enhance their intelligence, character, and social development. By playing games, children reveal their tendencies and relieve their anxieties (14). From a psychological point of view, playing games is also a proper method to facilitate children's adaptation when faced with new tensions and recurrent admissions to hospitals. This method is well-accepted among children, parents, and nurses, since it is fun and pain-free (15). Playing games generates a nurse-child relationship based on trust, as it is a familiar and secure activity in an unfamiliar environment. In fact, natural games serve to reduce fear in order to aid in recovery following illness or injury and can be helpful in achieving medical objectives (16). It seems that play therapy is a sensitive and evolved technique, specifically geared for children, that prevents the development of future social and mental problems and helps children to grow properly (17). Success in play helps children to develop self-esteem, which they can experience through group play; even very small successes may affect the child's self-concept. A positive self-image helps them feel that they are loved, valued, and able to do valuable work, and these feelings will result in self-respect, self-confidence, and a general sense of happiness and satisfaction (18). On the other hand, group play in the hospital is a well-recognized, valuable element in child health services, which can be provided at different levels of health care to accelerate the treatment process and reduce the need for medical interventions (19). In a clinical trial, William et al. assessed the effects of preoperative play therapy on school-age children and showed that play therapy could reduce anxiety and negative feelings considerably (20). Likewise, Zareapour et al. demonstrated that group play therapy reduces depression symptoms in children with cancer (17).
Innate games, such as playing with mud, are known to be the most fundamental and fun activities for children. Since 1959, various forms of game play have been common in Britain's hospitals, and, to some extent, have decreased the negative effects of mother-child separation and compensated for the educational shortcomings of doctors and nurses dealing with the psychological and emotional aspects of illness in children.
Objectives
To the best of our knowledge, no previous studies have addressed the effects of group play therapy on self-concept among children suffering from thalassemia major in Iran, and we assume that play therapy could improve self-concept and protect them against psychosocial problems. This study aims to verify the effectiveness of group play therapy to increase self-concept among children ages 7 to 11 years old with thalassemia major.
Study Population
This was a randomized, controlled clinical trial adopting a group play therapy intervention for 60 children with thalassemia major. The children were enrolled and randomly assigned to intervention (30 children) and control (30 children) groups. The research was conducted over nine months in 2012-2013 in the thalassemia units of two teaching, governmental, and referral hospitals of the Golestan University of Medical Sciences. Located in Gorgan and Gonbad, these hospitals have the maximum daily counts of admissions for thalassemia major in Golestan Province, northern Iran.
Sample Size
To estimate the sample size, we presumed a 95% confidence level and 80% statistical power. Considering a 7-point increase on the self-concept scale following intervention to be statistically significant, the required sample size according to the standard formula for comparing two means was 30 children per group. Inclusion criteria were: displaying willingness to participate in the study, falling within the ages of 7 to 11 years, having no acute or chronic diseases other than thalassemia major, visiting the hospital regularly for blood injections, and being free of any known impairment, disabilities, and cognitive-mental disorders. Exclusion criteria were absence from two or more consecutive sessions and the occurrence during the study of any incident that could affect a child's self-concept. Group allocations were completely at random, based on hospital patient lists and with 15 children coming from each hospital. The type of randomization was simple. In the control group, after the intervention, two patients were not willing to fill in the self-concept questionnaire and they were excluded. Figure 1 displays a detailed flow chart of the RCT for the intervention and control groups.
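The sample-size formula itself did not survive extraction. As a minimal sketch, the standard two-sample comparison-of-means calculation reproduces the reported 30 children per group if one assumes a standard deviation of roughly 9.7 points for the self-concept score; that value is our inference, not a figure stated by the authors:

```python
from scipy.stats import norm

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """n per group for detecting a mean difference `delta` between two groups."""
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 for a 95% confidence level
    z_beta = norm.ppf(power)           # ~0.84 for 80% power
    return 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2

print(round(n_per_group(sigma=9.7, delta=7)))  # -> 30
```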
Subjects were enrolled on weekdays between 8 to 10 AM at the time of blood transfusion. The group play therapy intervention was held between 5 to 7 PM outside of the hospital. In the intervention group, parents were asked to bring their children to group play therapy twice a week (on Mondays and Wednesdays) for one month. Furthermore, precautions were taken based on the hospital's visiting hours so that members of the two groups had no exchange of information during the study period.
Data Collection Tools
Data collection instruments included the child's demographic questionnaire (age, gender, order of birth, education, physical activities, relationships, date of the first blood transfusion), the family's demographic questionnaire (age, education, occupation, number of children, financial status), and the Piers-Harris children's self-concept scale (21). The Piers-Harris scale is designed to evaluate self-concept among children and adolescents ages 7 to 18 years old. The scale is comprised of 80 items, each requiring either a "yes" or "no" response. These yes/no questions are grouped into six categories: behavior (16 questions), cognition and mental state (17 questions), physical appearance (11 questions), anxiety (14 questions), prosocial popularity (12 questions), happiness and satisfaction (10 questions). Each item is scored 1/0 for yes/no in positive items and the reverse for the negative items. Thus, the total score is 80, with a higher score indicating a more positive self-rating. This scale has been adopted in various studies in Iran (22, 23) and other countries (21, 24-26), and it has acceptable reliability and validity. Alaei Karhrudy et al. reported a content validity index of 90% and a Cronbach's alpha of 0.91 for the instrument (23). It is worth mentioning that self-concept was measured through face-to-face interviews with children at three time points: before, immediately after, and one month after the intervention.
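The 1/0 keying described above is easy to get wrong in an analysis script, so a small sketch may help. Which items are positively keyed comes from the scale's manual and is not reproduced in the paper, so the `positively_keyed` set below is a placeholder assumption:

```python
def piers_harris_total(answers, positively_keyed):
    """Total score out of 80; higher means a more positive self-rating.

    answers: dict mapping item number (1..80) to True ('yes') or False ('no')
    positively_keyed: set of item numbers where 'yes' indicates positive self-concept
    """
    # A positive item scores 1 for 'yes'; a negative item scores 1 for 'no'.
    return sum(int(yes == (item in positively_keyed)) for item, yes in answers.items())

# Toy example with two items, item 1 positively keyed: both responses score 1.
print(piers_harris_total({1: True, 2: False}, positively_keyed={1}))  # -> 2
```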
Intervention
Before the intervention, the children's and families' demographic features, along with children's levels of self-concept, were recorded by questionnaires. Children in the intervention group were then divided into groups of seven to eight. They took part in eight sessions of group games, each lasting for 45 to 60 minutes. The activities undertaken during the sessions are listed in section 3.3. According to the research goals, these games were designed based on translations of Eric Berne (27), the play therapy techniques of Kaduson and Schaefer (28) and Reddy et al. (29), and Landreth's games (30).
The General Design of Group Games
First session: children met with one another and the researcher, who explained the activities, defined rules and tasks for each individual, and encouraged the children to play with play dough, with all materials for play having been prepared in advance.
Second session: the group continued to play with play dough, along with coloring patterns. The final works were then photographed, and the photographs were stored by the researcher.
Third session: instructions were provided, and the required facilities were prepared for playing with mud to encourage the children to generate diverse and desirable works of their own.

Fourth session: the processes of playing with mud in parallel and one-by-one methods and using colorful materials to color patterns continued while the researcher photographed and stored the resulting works.
Fifth session: the session consisted of providing explanations about the clay roundabout game and encouraging the children to maintain interactions and exchanges among group members.
Sixth session: the group games of clay roundabout and pattern coloring continued, and the researcher photographed and stored the resulting works.
Seventh session: group mud play continued, and the children were encouraged to tell stories using their own created works of art.
Eighth session: having wrapped up the story-telling process, the children's artwork was returned to them. Finally, a group photo of the participants and their artwork was taken, and awards were distributed.
At the end of the eighth session and right after the end of group play, the level of self-concept was recorded via the Piers-Harris scale (21) in both groups. To ensure long-term effects, self-concept was assessed in both groups a month after the intervention.
Ethical Considerations
In carrying out their research, the authors thought deeply about and did their utmost to avoid certain ethical issues, including plagiarism, lack of informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, and redundancy. The purpose of the study and the study methods were clearly explained to the hospital authorities, head nurses, and parents. Written informed consent forms were received from parents, and oral consent was received from the children. The ethical committee of the research council of Tehran University of Medical Sciences approved the study procedure, assigning it the project number 90-04123-16112, under the code IRCT: 201112221788N.
Data Analysis
The Kolmogorov-Smirnov test confirmed that the self-concept scale was normally distributed, but age did not fit the normal distribution. Data were summarized using descriptive statistics such as mean and standard deviation. Repeated-measures analysis of variance was performed to assess the effects of response variables through time (three time points) in the intervention and control groups. The sphericity assumption (that the variances of the differences between all possible pairs are equal) was tested and confirmed (P > 0.05) in advance. We compared mean self-concept scores from three time points and between the two groups. In addition, other factors, such as gender and doing sports (yes/no), were added to the model as covariates of no interest. Analysis was completed using SPSS 20.0, and a P value below 0.05 was considered significant.
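The authors worked in SPSS; for readers preferring an open-source route, a minimal sketch of an equivalent within-subject analysis with statsmodels' `AnovaRM` is below. The file and column names are illustrative, not from the paper, and `AnovaRM` assumes sphericity rather than testing it, so Mauchly's test would have to be run separately:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per child per time point.
df = pd.read_csv("self_concept_long.csv")  # columns: child_id, group, time, score

# Within-subject effect of time, run separately for each group,
# mirroring the paper's per-group repeated-measures comparisons.
for group_name, sub in df.groupby("group"):
    result = AnovaRM(sub, depvar="score", subject="child_id", within=["time"]).fit()
    print(group_name)
    print(result)
```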
Results
The results of the study showed that the average age was 9.50 ± 1.74 in the intervention group and 9.29 ± 1.56 in the control group. As shown in Table 1, the majority of the cases under study in both groups were first- or second-born children, and their first blood transfusion was before the age of one in 70% of the cases in the intervention group and in 68% of the cases in the control group. Both groups were identical in terms of demographic features.
Repeated-measures ANOVA results revealed that the mean self-concept score differed significantly across the three time points (P = 0.006).
For the intervention group, the trend was increasing (Figure 2). Post-hoc analysis showed that the mean self-concept score was significantly higher at the second time point compared to the first (P < 0.001), rising from 60.539 to 69.908. Likewise, comparing the first and third time points, the mean score increased significantly and reached 70.611 (P < 0.001). Furthermore, the change in mean score from the second to the third time point, though non-significant (P = 0.509), followed the trend, rising from 69.908 to 70.611 (Table 2). For the control group, the trend was first increasing and then decreasing, but there were no significant alterations (Figure 2). According to the post-hoc analysis results, the mean self-concept score increased from 57.139 at time point one to 58.533 at time point two (P = 0.623) and decreased to 55.955 at the third time point (P = 0.655). Comparing the second and third time points did not indicate any significant change in mean score either (P = 0.117) (Table 2).
Additionally, the mean self-concept scores of the two groups were compared at each time point. We observed no difference at the first time point (P = 0.313), but the self-concept score was significantly higher (P < 0.001) for the intervention group compared to the control group at both the second and third time points.
Discussion
The present research studies the effects of group games on the self-concept of children ages 7 to 11 years old suffering from thalassemia major. In confirmation of our hypothesis, the data analysis via statistical tests indicated that group games positively affect self-concept among children suffering from thalassemia major.
Playing games is a proper means of therapy and education for children. Play therapy is an appropriate developmental intervention for children (31). The healing power behind playing games can be used in various ways in order to teach adaptive behaviors for children with deficient social and emotional skills (29). For children, the need to play games is a deep and fundamental necessity that requires special attention in the same way as physiological needs. Playing games improves self-concept and might also decrease or eliminate social and behavioral problems in children (32).
Various studies confirm the effects of playing games on different social, emotional, behavioral, and physical aspects of life and also on tension-triggering stimuli, such as disease, recurrent admissions to hospitals, divorce, death, domestic violence, and physical and sexual abuse (29). A few examples include: teaching visual games with motor-perceptual interventions to children in hospitals with emotional problems (33), comparing group games and individual play in children who were sexually abused (34), and comparing children's individual play and play with siblings among children who were victims of domestic violence (35).
In his research on the effects of group games on self-concept among children with leukemia, Salmani (2002) stated that self-concept scores in the intervention group increased significantly after some sessions of group games (36).
In a study on the effects of cognitive-behavioral group therapy on enhancing self-esteem and reducing despair in adolescents with thalassemia in 2010 (11), Kiani stated that the self-confidence level in the intervention group improved post-intervention (evaluated via the Coopersmith scale) compared to that found in the control group (8). Kouvava et al. (2011) conducted research on 11 boys and nine girls ages 7 to 8 years old who had weak interpersonal and social skills. The results stated that musical and role-playing games are beneficial in improving the children's social skills and self-concept (37), which is consistent with the results of the study at hand. However, the questionnaire used in this study was different, leading to the conclusion that even with different means of evaluation, the effects of playing are confirmed.
In research carried out on children ages 7 to 9 years old with behavior disorders, Zare and Ahmadi (2007) also observed a decrease in the average score among children with behavioral disorders such as aggression, depression, anxiety, social maladjustment, and attention deficit disorder after play interventions via the cognitive-behavioral method, as displayed in Rutter's Behavior Questionnaire. The statistical tests also indicated a significant difference in the average scores pre- and post-intervention (38). The results of that study are consistent with the results of our study. However, a different questionnaire was used, which again confirms the effects of playing, even with a different means of evaluation. Furthermore, in a study conducted with 18 young female felons in Tehran, Albadi and Ronaghi (2002) declared that the self-concept scores in the intervention group had improved remarkably post-intervention compared to the control group. Coloring mandala patterns had a role in reducing their anxiety and improving their self-concept (39).
Additionally, in a study of 60 children from the kindergartens of Shiraz in the school year 2005-2006, Emami Rizi et al. (2011) found that children's creativity scores had increased following group play in the intervention group. Playing games in groups had affected the children's verbal creativity. Moreover, holding group play sessions helped to develop the children's ability to interpret images (40).
In a study conducted with 720 children ages 3 to 12 years old, Aghajani Hashtchin (2011) found a significant change in the dependent variables of social skills. After comparing their results with those of similar studies around the world, they stated that playing games results in stronger social skills (32).
In a study in 2003, Salmani observed a significant difference between the scores of self-concept pre- and post-intervention in the intervention group (36). In his research in 2002, Mirbagheri stated that the average self-concept score in the intervention group changed significantly from before to immediately after the intervention (41).
In a 2010 study on the effects of play therapy in Ethiopia, Nigussie notes a significant difference in the intervention group's scores for life skills, concentration level, and self-confidence prior to and after the intervention (42).
The results of the study by Goymour et al. (2000), carried out in the pediatric hospital of Sydney, Australia, showed that post-intervention, the level of anxiety in the intervention group was significantly lower compared to pre-intervention. Furthermore, self-concept did not change in the control group, but was found to rise to a higher level in the intervention group a month after the intervention (15). In contrast, the results of a study by Jalali et al. (2008) showed no significant differences between the levels of fear in the intervention group pre- and post-intervention (43). In a study on the effects of cognitive-behavior group play therapy on social phobia in children ages 5 to 11 years old, Jalali et al. (2011) stated that play therapy in the follow-up stage (two months post-intervention) was effective in decreasing phobia in the intervention group (43).
The results of that study, like those of the study at hand, confirm the enhancement of self-concept in the intervention groups. It appears that no other study was conducted concerning the long-term effects of play therapy on selfconcept among children; thus, the present study is the only reliable source of data.
The benefits of group therapy sessions in enhancing the self-image of children suffering from thalassemia major are vividly displayed in the present study. Furthermore, our findings are consistent with those of the studies conducted by Salmani (36). In the present research, the therapy took place in eight play sessions, two times a week, each lasting 45 to 60 minutes. Games using play dough and mud were each played during four sessions. The intervention in the study conducted by Jalali (43), however, took place once a week (six sessions, 60 minutes each) and used dolls and stuffed animals in most sessions, playing with play dough only in the final session.
The current study showed that group play improved self-concept in the intervention group consisting of girls and boys ages 7 to 11 years old. Schmidt and Cagran (2008) suggest that a child's self-concept may be affected by influential people, such as teachers, peers, parents, and their comments, as well as by self-comparisons with others in the society (47).
Thus, one of the most noteworthy factors affecting a child's self-concept is lack of communication with friends and peers, which can be eliminated to some extent when they take part in group game sessions with peers.
Considering the findings of the present study, it is recommended that nurses prepare a suitable base for improving interpersonal relationships by setting proper examples for the relationships between children suffering from thalassemia and the treatment staff. In addition, we suggest that those preparing the essential facilities and proper environment for treatment employ this method for children in all hospital units. Similar studies should be conducted for children with other chronic diseases, considering the great effects of group play therapy on self-concept in children with thalassemia. There are no similar studies with which to compare our results, which is a clear limitation. | 2016-08-09T08:50:54.084Z | 2016-04-01T00:00:00.000 | {
"year": 2016,
"sha1": "9be036159fb1aeaf8de229d31b5a7a9492806749",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc4893425?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "9be036159fb1aeaf8de229d31b5a7a9492806749",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
80997001 | pes2o/s2orc | v3-fos-license | The Insulin-Exercise Connection
Book Review

The late twentieth century Western world has achieved the most sedentary lifestyle for the mass of humanity in all human history. Our sedentary modern world also provides a glutton's feast of cheap sugar- and starch-rich breads, chips, pastas, cakes, cookies, candy, etc. so abundantly available that even those on welfare can afford to feast on these hyperinsulinemia-promoting carbo-riches. It is perhaps no coincidence that in order to rapidly (and cheaply) fatten cattle and hogs before slaughter, they are confined in crowded feed-lots where the animals have virtually no room to move, while being fed all the CHO-rich grain they can eat.

Modern obese humans routinely suffer from the unique twentieth century "disease" hypokinesis, i.e. too little bodily movement. The late twentieth century Western epidemic of obesity is as much due to widespread chronic hypokinesis as it is to the CHO/caloric excess typical of modern humans. Thus Thompson and colleagues note: "Body fat is significantly affected by a program of prescribed exercise in both sexes at all age levels.... Exercise has been shown to produce body fat loss without caloric restriction in both animals... and humans..., although the loss is usually more pronounced with caloric restriction.

In fact, reductions in activity level are strongly correlated with body fat increases, even if caloric intake is significantly reduced.... Studies done in the 1970's with both men and women found that significant body fat loss could be produced simply through a regular (i.e. at least four days/week) long-term walking program, without any dieting. "Vigorous regular walking has resulted in reduced body fat stores, reduced... insulin requirements (a 36% decrease in the ratio of insulin/glucose concentration occurred), and [spontaneously] reduced food intake." A key feature of the essentiality of moderate aerobic exercise, i.e. walking (the primary "natural" form of "exercise" engaged in of necessity by virtually all of humanity prior to the twentieth century), to preventing/reducing obesity is that "exercise increases insulin sensitivity and decreases insulin resistance".

The reason for this is quite simple. Actively exercising muscles may take in up to 30 times more blood sugar than they do when at rest, and this cellular uptake of glucose occurs without insulin! Thus walking provides the body with an alternative method to remove excess glucose from the bloodstream without the usual need for insulin secretion. Taking a brisk long walk 30-60 minutes after a large meal may help blunt the otherwise inevitable massive insulin surge large (CHO-rich) meals normally induce.

The anti-insulin program

a. Seriously reduce (better yet, eliminate) from the diet all processed, refined, junk food, high sugar (sucrose, fructose, glucose), high white flour "foods": bread, pasta, cake, pie,

e. Take 40-60 minute brisk walks, 4-6 days/week. Avoid walking in highly polluted areas and/or times of day, as toxins from auto exhaust may inhibit mitochondrial burning of fuel (i.e. fat) for energy.

f. Take various supplements discussed in this article - Vitamin C, B6, B3, Zinc, Magnesium, and GLA.

Additional nutritional/pharmacologic aids to fat loss/insulin reduction

i. Chromium picolinate

This form of chromium is well absorbed, and has been shown in various animal and human studies to aid in fat loss while at least modestly enhancing lean body mass. "The ability of chromium picolinate to enhance insulin responsiveness has been demonstrated in rat myoblast cell cultures. 72-h pre-incubation with chromium picolinate (50ng Cr/ml) resulted in a 60% increase in insulin binding, and markedly enhanced glucose and leucine uptake". Dosage: 200mcg chromium (as picolinate) two or three times daily for women; 200mcg three times daily or 400mcg twice daily for men.

Obesity, aging, chronic dieting, genetics, lack of exercise and lack of cold exposure may all lead to "subclinical" hypothyroidism, often involving deficient conversion of less active T4 to T3. T3 decreases the activity of D5D, reducing pro-insulin PGE2, just as do glucagon and EPA. T3 also stimulates fat burning. Ideally one should use T3 (Cytomel) only under a physician's care and guidance, but those who fit the low-thyroid profile and suffer from chronic obesity and fatigue, and who are willing to take practical, moral and legal responsibility for their own actions, may wish to experiment with modest doses of T3 - i.e., 2-3 mcg once or twice daily, taken morning and/or early afternoon. T3 is fast/short-acting, and most effects will be gone within 24 hours or less. Nonetheless, there is some risk here - caveat emptor! Heart palpitations, excessive sweating, racing thoughts, headaches, irritability, and insomnia are all hints - it's not for you! Those with known or suspected (past or present) hyperthyroidism, even if obese, should not use T3 without a doctor's care. Similarly, those with any other serious disease states - especially heart arrhythmias/heart disease - should be extremely cautious in T3 use.
ii. Anti-cortisol states
Since cortisol levels tend to increase with age (and stress), and since cortisol promotes both obesity and insulin resistance, this is a key strategy to normalize weight/insulin levels. DHEA, Gerovital-H3®, Dilantin (phenytoin), and high-dose vitamin C may all help lower elevated cortisol levels. DHEA: 10-50mg A.M. Gerovital-H3®: 100mg A.M. Dilantin (phenytoin): 25-50mg at bedtime. Vitamin C: 500-1000mg 3-4 times daily.
iii. L-Tryptophan/5-Hydroxytryptophan (Oxitriptan)
Several human studies with 5HTP, the precursor of serotonin, have found good weight loss results. There is evidence that some humans compulsively snack on CHO foods to feel better. The large insulin releases generated by such "carbo-bingeing" preferentially increase tryptophan/serotonin in the brain, temporarily reducing anxiety and depression in such people. By providing an alternative, non-insulin-driven way to increase brain serotonin, L-Tryptophan/5HTP supplements may help reduce weight not only by reducing total caloric intake, but especially by reducing CHO intake, thus lessening hyperinsulinemia/insulin resistance. In the 1992 Italian study, 300mg 5HTP 3 times daily before meals reduced women's caloric intake over a twelve week period from 3232 cal/day to 1273 cal/day, while reducing CHO intake from 350gm/day to 150gm/day. Weight dropped an average of eleven pounds. (The study did use special enteric-coated 5HTP capsules to prevent gut irritation). Taking 1000-1500mg L-Tryptophan at bedtime, or 50-100mg 5HTP before meals, may reduce CHO-craving and intake.
iv. Pro-GH supplements
As noted earlier, PGE1 may enhance GH release. So all the PGE1-enhancing nutrients (GLA, EPA, B3, B6, C, zinc, magnesium) may be helpful here. Hydergine® has been shown to increase GH-release in the elderly with long-term usage at 1.5mg every 6 hours. The authors of this study also note that bromocriptine (Parlodel) may also enhance adult GH-release. They also note that the enhanced pituitary GH-release from Hydergine® seems to be related to an increase in brain (hypothalamic) dopamine status, which normally declines (often precipitously) with age. Thus the dopamine-enhancing agent Deprenyl may also be useful as part of a GH-restoration program.
v. Mitochondrial energizers and protectants
In a healthy human, storage fat is at a minimum and sooner or later all fat - dietary, body-manufactured, and storage fat - ends up as "fuel for the furnace" - i.e. the trillions of mitochondrial "power plants" found in most of our cells. Vitamins B1, B2, B3, B5 (pantothenic acid), and biotin, as well as NADH, alpha-lipoic acid, CoQ10/Idebenone, magnesium and manganese are all necessary "spark plugs" to facilitate burning fat and sugar for energy. 10-100mg B1, B2, B3, 50-200mg B5, 1-10mg biotin, 5-20mg NADH, 50-300mg alpha-lipoic acid, 60-300mg CoQ10 and/or 45-135mg Idebenone, 200-500mg magnesium, and 3-10mg manganese may optimize mitochondrial energy cycles. Since the mitochondrial structures inevitably generate massive amounts of free radicals in turning fuel into energy, and since these structures are rich in easily rancidified polyunsaturated fatty acids, a panoply of antioxidants - e.g. 100-400 IU vitamin E, 500-2000mg vitamin C, 100-200mcg selenium, 50-300mg alpha-lipoic acid, 500-1000mg N-acetylcysteine, 2mg copper as copper sebacate (SOD-mimetic), 50-100mg grape seed extract/pycnogenol, 300-500mg silymarin - may help protect the essential "fat burning furnaces." In addition, 1gm L-carnitine twice daily on an empty stomach may facilitate fat burning - carnitine is the "shuttle molecule" that "escorts" fatty acids into mitochondria where they are then oxidized. ALC (acetyl-l-carnitine) may also be a useful mitochondrial regenerator - mitochondria become progressively deformed and dysfunctional with aging. Dosage: 1-3gms/day. Ward Dean suggests this dose can be half L-carnitine and half ALC to achieve successful mitochondrial regeneration.
vi. Caffeine
Caffeine, whether from coffee or as a "drug", has many benefits for aiding fat loss. However, excessive doses (probably 300mg/day and up, on average) may pose risks of "caffeinism", with such symptoms as headaches, restlessness, irritability, insomnia, anxiety, excessive urination, gut irritation, heart palpitations, and muscle tremors. A thermogenic/fat burning dose is probably 100-200mg daily - i.e. the equivalent of one to two cups of coffee/day, or two to four cups made with half decaf and half regular. Caffeine taken with a meal may induce increased thermogenesis - burning fat to make heat. It may increase resting metabolic rate - our resting metabolism burns 60-70% of our total daily energy consumption. Caffeine preadministration 45-60 minutes before exercise has been shown to spare liver/muscle glycogen and to enhance fatty acid burning in humans. Caffeine taken after at least an eight hour fast, i.e. in the morning after arising, may be especially effective when combined with a 40-60 minute brisk walk, to enhance burning of stored body fat.
Biography
Tony Salvitti is originally from Los Angeles, California, and attended and graduated high school in Kaiserslautern (Vogelweh), West Germany, where he was trained in competitive Muay Thai. He became an avid treasure hunter, metal detecting both on land and under water as a hobby, which he still does to this day. His other interests and specialties besides natural bodybuilding include the occult, magic, witchcraft, Taoist sorcery, development of esoteric powers, and Chinese herbalist medicine; he is a certified master of acupressure, holds the rank of Sifu in Tai Ch'i Chuan and Hsing-i Chuan (Xing Yi Quan), and is a World Black Belt Bureau member. He also pursues Japanese archery (Kyudo) and Kendo, all types of cooking, wine tasting and gardening fresh food, hypnosis, binaural beat frequency and brain wave activity, animals, artwork for other authors' book covers, photography, archeology, rock climbing, drawing, painting, sculpting, woodworking, welding, designing new equipment and devices, and exploring distant geographic points of interest to learn about new cultures and customs. At the "Black Dragon Ch'i Kung Dojo" in Roy, Utah, USA, he teaches students "Ch'i Kung" (Qi Gong) and works as a personal trainer to natural bodybuilders, together with his wife Jacqueline. | 2019-03-18T14:03:26.974Z | 2018-02-15T00:00:00.000 | {
"year": 2018,
"sha1": "caf0d87c56140cb865972e6310debca75aa70df8",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/IJCAM/IJCAM-11-00352.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "58bc959af2ee8ddd5a28343a5730c805a8d42867",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256558185 | pes2o/s2orc | v3-fos-license | A digital microfluidic system with 3D microstructures for single-cell culture
Despite the precise controllability of droplet samples in digital microfluidic (DMF) systems, their capability in isolating single cells for long-time culture is still limited: typically, only a few cells can be captured on an electrode. Although fabricating small-sized hydrophilic micropatches on an electrode aids single-cell capture, the actuation voltage for droplet transportation has to be significantly raised, resulting in a shorter lifetime for the DMF chip and a larger risk of damaging the cells. In this work, a DMF system with 3D microstructures engineered on-chip is proposed to form semi-closed micro-wells for efficient single-cell isolation and long-time culture. Our optimum results showed that approximately 20% of the micro-wells over a 30 × 30 array were occupied by isolated single cells. In addition, low-evaporation-temperature oil and surfactant aided the system in achieving a low droplet actuation voltage of 36V, which was 4 times lower than the typical 150 V, minimizing the potential damage to the cells in the droplets and to the DMF chip. To exemplify the technological advances, drug sensitivity tests were run in our DMF system to investigate the cell response of breast cancer cells (MDA-MB-231) and breast normal cells (MCF-10A) to a widely used chemotherapeutic drug, Cisplatin (Cis). The results on-chip were consistent with those screened in conventional 96-well plates. This novel, simple and robust single-cell trapping method has great potential in biological research at the single cell level. A novel microfluidic device enables researchers to isolate and culture single cells for use in drug testing and other experiments. To date, the high voltages required to capture cells on electrodes of a digital microfluidic (DMF) device have led to a reduced lifespan for the device and potentially damaged the cells. A team led by Yanwei Jia of the University of Macau overcame this by incorporating 3D microstructures into a DMF chip. This constrains the shape of cell culture droplets to isolate and capture single cells. With the use of low evaporation temperature oil and a surfactant, the system enabled them to use one quarter of the voltage of other designs for droplet transportation. The ability to isolate and culture individual cells will be of great value in addressing biological questions at the single-cell level.
Introduction
Traditionally, cells are analyzed based on the responses of a large population cultured in Petri dishes or well plates 1,2 . However, in bulk analysis assays, the differences among individual cells (especially for primary tumor cells from patients) are masked, preventing us from obtaining a unique insight into the complex interaction between the environment and single-cell activity. Single-cell culture and analysis remain in high demand for a full understanding of cell-to-cell variability and for precision medicine.
Microfluidics has emerged as the most promising platform for single-cell analysis due to its characteristics in handling small volumes of samples. There are two main types of microfluidics: flow-based channel microfluidics and electric-based digital microfluidics (DMF). Single-cell culture has been investigated with channel microfluidics with one or no cells in each droplet for precise cell identification [3][4][5][6] . Microfluidic devices integrated with dielectrophoresis (DEP) 7,8 , optical tweezers 9-11 , or acoustic waves 12,13 are powerful in trapping and manipulating single cells. Among the reported single-cell capture methods, microwell arrays fabricated in the flow channel have the highest single-cell capture efficiency. However, in all these studies, the cells were from the same inflow sample, where the stimuli had been already premixed. This setup greatly limited the number of drugs that could be screened on one chip. There is a possibility that droplets may diffuse away in some designs, with certain cells being lost. The problems arise from the characteristics of channel microfluidics, where droplets are generated and analyzed in a batch.
In contrast to channel microfluidics, digital microfluidics (DMF) is electric-based. This characteristic gives DMF advantages over channel microfluidics, such as individual droplet manipulation, multistep processes, flexible electric-automatic control and the potential for point-of-care use. However, the size of a droplet on DMF (~0.3 μL) is much larger than can be achieved in channel microfluidics (1 nL), making it difficult to perform isolated single-cell analysis on a flat electrode. Wheeler's group cultured cells [14][15][16] by fabricating hydrophilic patches on electrodes for cell-based apoptosis assay applications 16 . However, multiple cells were captured in the droplet on the hydrophilic patch. To realize single-cell culture, Gidrol's group demonstrated that by preparing a cell suspension with low concentration, single-cell isolation can be realized using DMF 17 . However, the single-cell capture efficiency was quite low, with one or two cells captured on an electrode. Lammertyn's group reported that by fabricating many small-sized hydrophilic micropatches on an electrode, single cells can be captured for long-term culture. Nevertheless, the multiple hydrophilic patches greatly raised the actuation voltage needed to transport a droplet through this electrode 18 . As is well known, cells experience stress under certain electric field strengths and can even be lysed by a high electric field 19,20 . In addition, high-voltage actuation easily causes the dielectric layer to lose its insulating properties and break down, thus shortening the chip lifetime 21 . Therefore, the actuation voltage should be as low as possible without compromising the droplet movement efficiency, the cell viability and the observation of cells.
Physical and mechanical effects were also investigated for single-particle or single-cell trapping on DMF. For example, combining the function of gravity with the trapping geometry effect of negative dielectrophoresis, several research groups have realized single-particle or single-cell patterning on DMF [22][23][24] . However, all of these methods were used on a flat electrode surface. The most powerful 3D microstructures widely employed in channel microfluidics for single-cell trapping have been neglected in explorations of the DMF.
In this report, we present a DMF system (Fig. 1) for single-cell culture with innovative micropatterned arrays constructed from 3D microstructures fabricated on a DMF chip to trap single cells and to prevent the trapped cells from aggregating during long-time cell culture. To minimize the influence of the electric actuation voltage on cellular health, a low-evaporation-temperature, gas-soluble silicone oil with an oxygen solubility several times greater than that in water [25][26][27] and a fluorinated surfactant (F127) were introduced into the system to lower the actuation voltage to 36 V, which is 4 times lower than normally used (150 V). The oil quickly evaporated at 37°C, the cell culture temperature, to expose the droplet to air for cell respiration. To demonstrate the technological advances, we ran a drug toxicity test by culturing breast cancer cells or normal cells with various concentrations of a clinically established chemotherapeutic reagent, Cisplatin (Cis), on-chip and compared the results with the off-chip scenario. The comparable results proved that the micropatterned arrays are effective for single-cell isolation and track monitoring during long-term culture. Due to the mature protocol of 3D microstructure fabrication on a DMF chip, the strong droplet control ability and the high single-cell trapping efficiency, the developed system has great potential for application in biological research at the single cell level.
Results and discussion
3D microstructures for virtual channels, virtual chambers, and micro-wells
For digital microfluidics (DMF), individual droplets are manipulated on an array of electrodes. In the sandwich-structured DMF chip, a chamber is formed by assembling a bottom plate with patterned electrodes and a top plate with a grounded conductive layer. This structure concentrates the electric field between the bottom and the top plates to lower the actuation voltage. When assembling the two plates together, there is an inevitable possibility that the gap on one side is slightly thicker than on the others. During a long-term cell culture on-chip, the pancake-shaped droplet may drift away from its original location to a deeper spot to lower its surface energy (ESI, video 1 and video 2). This would result in losing track of each droplet and the unexpected merger of two droplets. Wheeler's group fabricated hydrophilic cell culture spots to avoid droplet drifting during cell culture. This required complicated chip fabrication, and later transportation of the droplet from the culture spot became a problem. In this work, we fabricated 3D microstructures as fences of 60 μm in height along the droplet transportation electrodes and the cell culture spots (Fig. 2). The distance between each fence post was approximately 300 μm, much less than the size of a droplet (1 mm). Surface tension prevented each droplet from getting through the fences, while the medium oil still moved freely around the fences. Virtual channels and virtual chambers were formed by structuring the fences on-chip to hold the droplets at certain places. During the long-term cell culture, the droplets were held in place for observation and analysis.
In the sandwich-structured DMF chip, a droplet is normally in a pancake shape, with a planar interface between the droplet and the substrates. Nevertheless, the shape of the interface is affected by environmental structures. When shallow 3D microstructures (Fig. 3a) existed on the planar surface of the DMF chip, the restriction of the 3D microstructures beneath each droplet would force the droplet to form a curved interface due to the interfacial tension, as shown in Fig. 3b. We hypothesized that these curved surfaces would promote single-cell capture and storage.
In this work, we tested various microstructure designs. Details can be found in the ESI, Fig. S1. The optimized structure is shown in Fig. 3a and Fig. 3c. The 3D microstructures were patterned as walls with a width of 10 μm, a length of 20 μm and a height of 10 μm. There was a small gap of 5 μm between the ends of each wall, forming a semi-closed well between the walls, as shown in the yellow frame in Fig. 3c. The single-cell trapping results are shown in Fig. 3d. As can be seen, cells were perfectly isolated from each other and stored in each semi-closed well. The long-term isolation of the single cells for observation and tracking is shown in Fig. 4. As shown, without the microstructure array, the cells tended to aggregate after 24 h, even when they were initially individually suspended in solution. However, in the presence of the microstructures, the single cells initially captured in the micro-wells remained in a single-cell state. This provided us an easy way to locate the single cells and keep track of the responses of a certain cell to various stimuli. The regular structure also made automatic data analysis possible for a final intelligent cell culture and screening system.
All the following experiments were based on the wall array design.
Single-cell capture efficiency
Since the single-cell capture by the semi-closed wells shown above was passive, the capture efficiency was dependent on the cell concentration. Figure 5a shows images of the MDA-MB-231 cell distribution for cell concentrations of 2 × 10⁵, 4 × 10⁵, 8 × 10⁵, and 16 × 10⁵ cells/mL, respectively (panels A-D). For easy analysis, image analysis software (ImageJ) was used to determine whether each semi-closed well was occupied by cells or not, as shown in Fig. 5b. For the on-chip cell culture, each droplet contained approximately 200 cells. Note that in all cases, 100% of the input cells were loaded into the semi-closed wells as the whole droplet stayed on the patterned electrodes. Figure 5c shows the percentage of semi-closed wells occupied by single cells at various cell concentrations. As shown, the efficiency increased from 8% to 20% when the cell density increased from 2 × 10⁵ cells/mL to 8 × 10⁵ cells/mL. However, further increasing the cell density to 16 × 10⁵ cells/mL decreased the percentage of semi-closed wells occupied by single cells. Some of the semi-closed wells were occupied by two or more cells due to the high density of cells. In the following experiments, 8 × 10⁵ cells/mL was used as the optimized cell concentration for single-cell culturing.
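Because capture in the semi-closed wells is passive, the occupancy statistics can be compared against a simple Poisson loading model. The sketch below is an illustration only, not the authors' ImageJ pipeline: the per-well counts are simulated, and the Poisson comparison is a standard assumption for random settling, under which single-cell occupancy peaks at about 37% when the mean loading is one cell per well.

```python
import numpy as np

def single_cell_stats(counts):
    """Per-well cell counts -> mean loading, observed and Poisson-predicted
    fractions of wells holding exactly one cell."""
    counts = np.asarray(counts)
    lam = counts.mean()                  # estimated mean cells per well
    observed = np.mean(counts == 1)      # fraction of wells with exactly one cell
    predicted = lam * np.exp(-lam)       # Poisson P(k = 1) at the same mean loading
    return lam, observed, predicted

# Hypothetical 30 x 30 well array loaded with ~200 cells (values from the text)
rng = np.random.default_rng(seed=1)
counts = rng.poisson(lam=200 / 900, size=900)
lam, obs, pred = single_cell_stats(counts)
print(f"mean loading: {lam:.2f} cells/well")
print(f"single-cell wells: observed {obs:.1%} vs Poisson prediction {pred:.1%}")
```

With roughly 200 cells settling into 900 wells, this model predicts about 18% single-cell occupancy, consistent with the ~20% optimum reported above.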
Oil film with surfactant for single-cell culture on chip
For cell culture on a DMF chip, droplets containing cells were open to the air for cell respiration during culturing. However, in the absence of medium oil, the droplet transportation required a higher actuation voltage [28][29][30][31]. Although transportation only took a short time during the whole process, the strong electric field may still affect the response and growth of cells on-chip. Brassard et al. reported a water-oil core-shell structure able to lower the actuation voltage 32 . However, the construction of the core-shell structure required a precise water-oil ratio, making it difficult to realize in the cell culture system. In this work, we introduced a low evaporation temperature oil (PSF silicone oil, 1 cSt) together with an inert fluorinated surfactant, F127, to help droplet transportation while maintaining cell respiration during long-term cultures.
As shown in Table 1, for the normal electrodes, when the droplet was open to air, a 295 V actuation voltage (the highest voltage our system can provide) could hardly actuate the droplet under our sandwich-structured DMF setup. The existence of oil or surfactant alone lowered the actuation voltage to 202 V. The voltage in the absence of oil or surfactant in this work was slightly higher than reported [33][34][35][36] . This result may be due to the difference in system setups, such as the electric actuation frequency 32,33 or thickness of the dielectric layer 35,36 . However, in the presence of both oil and surfactant, the voltage used for moving a droplet was 36 V, much lower than that normally used in the literature, that is, approximately 150 V. The significant difference may be caused by the change in the contact angle in the presence of surfactant. The microstructures on the electrodes increased the threshold of the actuation voltages under all conditions. The droplet was barely movable with a 295 V actuation voltage without the addition of oil and surfactant. However, with or without the microstructures, the addition of oil and surfactant significantly lowered the voltage by approximately 6-fold compared to the case of no oil/ surfactant, from 295 V to 36 V or 50 V.
The effect of the spacing between the top and bottom electrodes on the actuation voltage was slightly more complex. Decreasing the spacing would increase the electric field, which would require a lower actuation voltage. At the same time, a narrow spacing would also increase the surface/volume ratio of each droplet, which would require a higher actuation voltage. The movement smoothness also depends on the viscosity of the droplet. Empirically, a droplet moves best when the electrode size-to-space ratio remains between 3 and 5. In our experiments in this work, the electrodes were 1 mm × 1 mm, and the spacer was 200 µm, yielding a ratio of 5. The chip surface adsorption problem needs to be addressed before carrying out the on-chip applications. To quantitatively demonstrate the surface adsorption under different conditions, we ran a droplet containing 10 or 100 mg/mL recombinant eGFP (Recombinant Enhanced Green Fluorescent Protein, Beyotime, P7410) across the electrodes 10 times, 50 times or until the droplet stopped moving. Because the eGFP droplet could not be actuated at all under the air condition or oil condition with a 295 V actuation voltage, only the surfactant condition and the oil-and-surfactant condition were measured. The fluorescence on an electrode was measured before and after transporting the eGFP droplet. Not much difference in fluorescence was observed for either condition over the 50 actuations (Fig. S2), which suggested that the surface adsorption could be neglected for a certain number of actuations under the oil and surfactant conditions. Figure 6a schematically illustrates the mechanism of single-cell culture in the presence of oil and surfactant. Once the droplet was transported to the cell culture chambers (part A of Fig. 6a), the chip was put into a humidified cell culture cabinet (37°C, 5% CO₂) for long-time culture (part B of Fig. 6a). During the static stage of culturing, the suspended cells sedimented into the semi-closed wells as single cells (part C of Fig. 6a).
At 37°C, the silicone oil evaporated in 2 h given its low evaporation point, leaving a thin film of oil at the interface, as shown in part D of Fig. 6a. The size of the droplet remained the same (Fig. S3), even when the medium oil seemed to completely evaporate during the long-term culture (Fig. 6b and Fig. S4). This can be attributed to the presence of F127. The hydrophobic tail of the surfactant tended to hold a thin layer of oil film at the interface between the droplet and air. This thin film of oil allowed air to be exchanged while at the same time keeping the water from evaporating. The existence of a thin film of oil did not affect the cell viability. As shown in Fig. 6c, a 90% cell viability was achieved after 48 h of culturing, similar to the initial cell viability just after loading on-chip.
In summary, the introduction of a low-evaporation-temperature oil and surfactant has many advantages. The thin film of oil formed between the sample and the DMF chip reduced the sample adsorption and contamination during the loading process. The actuation voltage for droplet transportation was significantly lowered to 36 V, therefore reducing the risk of damage to the cells. In addition, the oil evaporated during the incubation to allow cell respiration while maintaining the size of the droplet to stabilize the drug concentration in the droplet. Therefore, the oil-filled configuration (with Pluronic F127 in the droplet) was used in the following drug sensitivity tests.
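As a quick sanity check on the geometry discussed in this section, the following sketch (a back-of-the-envelope calculation under the stated dimensions, not part of the published workflow) verifies the empirical size-to-space design rule and the pancake-droplet volume implied by a 1 mm × 1 mm electrode and a 200 μm spacer.

```python
def dmf_geometry(electrode_mm: float = 1.0, spacer_um: float = 200.0):
    """Electrode size-to-space ratio and the pancake volume over one electrode."""
    ratio = electrode_mm * 1000.0 / spacer_um
    volume_ul = electrode_mm ** 2 * (spacer_um / 1000.0)  # mm^3 is numerically uL
    return ratio, volume_ul

ratio, vol = dmf_geometry()
print(f"size-to-space ratio: {ratio:.0f} (droplets move best between 3 and 5)")
print(f"pancake volume over one electrode: ~{vol:.2f} uL (droplets used: ~0.3-0.6 uL)")
```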
Drug sensitivity test on-chip
As demonstrated above, the semi-closed wells formed by 3D microstructures on DMF had a high single-cell capture efficiency and could isolate single cells at a certain place for a long time for cell observation and tracking. A drug sensitivity test was used to validate the reliability of the DMF system. In this work, we used Cis (Cisplatin) as a drug model to test the drug sensitivity of MDA-MB-231 breast cancer cells and MCF-10A normal breast cells. An off-chip drug sensitivity test in a 96-well plate was also run in parallel as a comparison to verify the effectiveness of the on-chip case. The dead cells were stained with ethidium homodimer-1 (EthD-1), emitting red fluorescence. Figure 7a, c show images of single-cell culture of the breast cancer cells and normal cells in the absence or presence of drug. As can be seen, for the control samples, both breast cancer cells and normal cells had good cell viability after 24 h of culture on-chip. In the presence of drug, more dead cells were observed for both cell lines. As single cells in semi-closed wells, the discrimination of dead cells from living cells was clear and easy to count, demonstrating the benefit of single-cell culture. Figure 7b, d show the viability of breast cancer cells and normal cells with Cis. As can be seen, the cell viability decreased with increasing drug concentration either on-chip or off-chip for both cell lines. The IC50 value for the MDA-MB-231 breast cancer cells treated with Cis on-chip was 8 μM, comparable to the value tested off-chip, 10 μM (Fig. 7b). For the MCF-10A normal breast cells, the IC50 value for Cis on-chip was 35 μM, which was also comparable to that tested off-chip, 32 μM (Fig. 7d). The slight difference between on-chip and off-chip was mainly caused by the total number of cells counted. Thousands of cells were counted off-chip, while only a few hundred cells were counted on-chip. This consistency validates the DMF system for drug screening on-chip with limited cell numbers.
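IC50 values like those above are conventionally extracted by fitting a sigmoidal dose-response curve to the viability data. The snippet below is a generic sketch of that step: the Hill model, the starting guesses and the data points are all illustrative assumptions, not values or code from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    """Viability (%) as a function of drug concentration (two-parameter Hill model)."""
    return 100.0 / (1.0 + (conc / ic50) ** slope)

# Made-up concentration (uM) / viability (%) pairs for illustration only
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
viability = np.array([95.0, 75.0, 45.0, 20.0, 8.0])

(ic50, slope), _ = curve_fit(hill, conc, viability, p0=[10.0, 1.0])
print(f"fitted IC50 = {ic50:.1f} uM, Hill slope = {slope:.2f}")
```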
Conclusions
We set up a DMF system with 3D microstructures constructed on-chip for single-cell isolation and long-term culture. In the system, a low-evaporation-temperature oil and surfactant are innovatively employed to lower the actuation voltage to 36 V, 4 times lower than that normally used (150 V), for droplet transportation while retaining the open environment for cell respiration during long-term culturing. The result of the drug sensitivity test suggests that the designed structures are effective for single-cell trapping and for evaluation of drug toxicity over time. Due to the mature protocol of 3D microstructure fabrication on a DMF chip, strong droplet controllability, and high single-cell trapping efficiency, the DMF system setup has great potential for application in biological research at the single cell level.
A potentially exciting application of our technology is precision medicine. If the cancer cells from biopsy samples can be properly labeled in the future, the true drug response of only the cancer cells can be monitored, and at the same time, those drugs toxic to normal cells can be monitored. All of these results will provide helpful information regarding the drug toxicity and side effects to doctors.
Materials and methods
System setup
The DMF system contained four parts: a DMF chip, an electronic control board, custom-written control software and a fluorescence microscope, as shown in Fig. 1a. An image of the real system setup can be found in the ESI, Fig. S5. The DMF chip was held by a 3D-printed chip holder and test clips, which connected the electric control to the chip via the exposed contact pads, with switches on the printed circuit board (PCB) for on-chip droplet actuation. A computer program 37 was used to acquire the droplet position and execute droplet manipulation automatically by controlling the power switches. A signal generator was used to generate an AC actuation signal (0.5-10 Vrms, 2 kHz, sinusoidal wave), which was amplified to 30-300 Vrms by a transformer to charge the electrodes. The relationship between input voltage and output voltage after amplification can be seen in Fig. S6. The DMF chip was observed and imaged by a fluorescence microscope.
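The exact input-output curve of the transformer stage is given in Fig. S6, which is not reproduced here; assuming, purely for illustration, a linear interpolation between the endpoints quoted in the text (0.5-10 Vrms in, 30-300 Vrms out), a generator setting for a target actuation voltage could be back-calculated as below. The calibration points are placeholders and should be replaced with measured values.

```python
import numpy as np

# Placeholder calibration taken from the ranges quoted in the text;
# the real curve (Fig. S6) should be measured and substituted here.
V_IN = np.array([0.5, 10.0])     # generator setting, Vrms
V_OUT = np.array([30.0, 300.0])  # electrode voltage after amplification, Vrms

def generator_setting(target_vrms: float) -> float:
    """Interpolate the generator setting for a desired actuation voltage."""
    return float(np.interp(target_vrms, V_OUT, V_IN))

print(f"setting for 36 V actuation: ~{generator_setting(36.0):.2f} Vrms")
```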
DMF device fabrication
The DMF device consisted of three parts: the bottom plate, the spacer and the top plate. Arrays of on-chip electrodes (1 mm × 1 mm) were designed with AutoCAD and patterned on a glass substrate (31.5 mm × 31.5 mm) as the bottom plate. A layer of 10 μm SU-8 photoresist was first coated on the bottom plate as the dielectric layer, followed by a second patterned layer (60 μm thickness SU-8) as fences to prevent droplets from drifting 37,38 and a third patterned layer (10 μm thickness SU-8) as a microstructure array for performing single-cell cultures. During the fabrication, a mask aligner (ABM, California, USA) was used for precise patterning of the dielectric layer, fences and microstructure array on the patterned chromium electrodes. After exposure, baking and development, a bottom plate with microstructure arrays on certain electrodes was obtained. The top plate was made of ITO glass (50 mm × 17 mm). 1.5 mm diameter holes for sample loading were drilled into the ITO glass using a laser cutting machine (ZKJ Laser, Shanghai). Both the bottom and top plates were coated with Teflon (100 nm thickness) to promote smooth sample transportation. A 200 μm thick conductive adhesive tape was used as the spacer. The assembly illustration of the DMF chip can be found in the supporting information (Fig. S7).
Reagents
IPA, acetone and ethanol were purchased from Millipore. The reagents used for photolithography, including SU-8 and SU-8 developer, were purchased from Micro-Chem. Amorphous Fluoroplastics Solution was purchased from the Chemours Company. Pluronic F127 was purchased from Sigma Aldrich (Oakville, ON, USA). Silicone oil (1 cSt) was purchased from Clearco, USA. MDA-MB-231 cells and MCF-10A cells were obtained from the American Type Culture Collection (Manassas, VA, USA). Dulbecco's Modified Eagle's Medium, fetal bovine serum (FBS), trypsin-EDTA and phosphate buffer solution (PBS) were purchased from Gibco. cis-Diammineplatinum(II) dichloride was purchased from Sigma. Ethidium Homodimer-1 (EthD-1) was purchased from Thermo Fisher Scientific.
Cell culture
The MDA-MB-231 cells and MCF-10A cells were cultured in a humidified incubator (37°C, 5% CO₂). The growth medium for the MDA-MB-231 cells was Dulbecco's Modified Eagle's Medium (DMEM), supplemented with 10% (w/v) FBS, 2 mM L-glutamine, and 100 U/mL penicillin-streptomycin. The medium for MCF-10A was DMEM/F12, supplemented with 5% (w/v) Horse Serum, 20 ng/mL (w/v) EGF, 0.5 mg/mL (w/v) Hydrocortisone, 100 ng/mL (w/v) Cholera Toxin, 10 μg/mL (w/v) insulin and 100 U/mL penicillin-streptomycin. Both cell lines were passaged every 2-3 days at 2 × 10⁵ cells per cm². Prior to the experiments, the cells were dissociated and resuspended in fresh medium. The number of cells and cell viability were measured by cytometry and trypan blue exclusion.
Cell viability assay under optimized cell culture conditions
We explored the cell viability assay for the oil-filled configuration (with 0.01% Pluronic F127 in the droplet) under the cell culture condition. 0.6 μL droplets (8 × 10⁵ MDA-MB-231 cells/mL) containing 0.01% Pluronic F127 and 2 μM EthD-1 were pipetted into the holes and dispensed by applying the actuation signal to the adjacent electrodes sequentially. When the droplets had been moved to the virtual chambers, we placed the DMF chip in a humidified incubator (37°C, 5% CO₂). The experiments were performed in triplicate. Cells treated with a 60°C water bath for 30 min and then with 0.01% Pluronic F127 and 2 μM EthD-1 were used as positive controls. After 0 h, 12 h, 24 h, 36 h, and 48 h, the DMF chip was observed for cell viability estimation under a fluorescent microscope.
Drug sensitivity assay on DMF chip
One clinically established chemotherapeutic reagent, cisplatin (Cis), was used in the drug sensitivity test. MDA-MB-231 breast cancer cells and MCF-10A normal breast cells were used as the model cell lines. Briefly, the MDA-MB-231 cells and MCF-10A cells (8 × 10⁵ cells/mL) were aliquoted in 0.2 mL PCR tubes and then mixed with 0.01% Pluronic F127 and 2 μM EthD-1. After that, we filled the DMF chip with silicone oil (1 cSt). Then, cell suspensions and drugs in a series of concentrations were pipetted into the holes and then moved by applying an actuation signal to the adjacent electrodes sequentially towards the virtual chambers and mixed on the DMF chip. In our chip design and experiments, one path was used for only one drug, with concentrations from low to high loaded onto the chip in a serial manner. Although some residues remained on the common path from the low concentration samples, they did not cause cross contamination and had little effect on the higher concentration samples. Then, the chips were placed in a cell culture dish containing wet paper towels and placed in a humidified incubator (37°C, 5% CO₂) for 24 h. Finally, we measured the red fluorescence of the cells via inverted fluorescence microscopy.
Drug sensitivity assay off-chip
Determination of the half-maximal inhibitory concentration (IC50) of Cis for the MDA-MB-231 cells and MCF-10A cells was performed using a cell counting kit (CCK-8) assay 39,40 . Briefly, 1.0 × 10⁴ cells (in a total volume of 100 μL) per well were seeded in a 96-well plate in the corresponding cell culture medium. They were then treated with various concentrations of Cis (with 0.1% (v/v) dimethyl sulfoxide (DMSO) treatment as a negative control and cell culture medium without cells as a blank control) for 24 h. Then, 10 μL of CCK-8 solution was added to each well and incubated for 0.5 h. All experiments were performed in triplicate. Finally, the absorbance at 450 nm was measured by a microplate reader. The absorbance values were blank-corrected and normalized to the control wells. Graphs were plotted as the drug concentration versus the percentage of viable cells. | 2023-02-04T14:43:25.601Z | 2020-01-27T00:00:00.000 | {
"year": 2020,
"sha1": "b91904c6b36d01e82b30521baa327ed29d254f3d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41378-019-0109-7.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "b91904c6b36d01e82b30521baa327ed29d254f3d",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": []
} |
55638862 | pes2o/s2orc | v3-fos-license | The Impacts and Implications of Anthropogenic Forces on the Unstable Geologic Platform in Parts of Anambra and Imo States Southeastern, Nigeria
Anthropogenic activities have exacerbated the incidence of floods, soil and gully erosion, and landslides in parts of the southeastern states of Anambra and Imo, Nigeria. Intense urbanization, deforestation, and agricultural and commercial/industrial activities have extensively denuded and eluviated the total environment. The variations in climatic conditions also have associated implications. The rainy season registers an average annual rainfall of 2000 mm. The geology comprises an unstable platform of a regional escarpment/cuesta subtended by the sandy, highly fractured and faulted Nanka Sands/Ameki Formation. The underlying unstable geology facilitates the development of gullies with depths ranging from 2 m to over 80 m. The calculated rate of soil removal from the gully-prone areas is about 9.20 to 10.16 ton/ha/yr. The significant cuesta of the area, with a steep scarp slope and a gentle dip slope, forms both a surface water and a groundwater divide that also facilitates gully and landslide development. The underlying sandy geologic structure is quite porous and permeable, with huge aquiferous horizons of high pore-water pressure and effective stress. A laissez-faire attitude and a poor understanding of the destructive implications of the unstable regional geologic platform result in the failure of measures to prevent myriad environmental destructions and economic wastes.
Introduction
Geological Sciences, Earth Sciences, or Applied Geology is keyed towards fully understanding the constituents and geotechnical intricacies of the earthly environment and being able to use the realized knowledge to control any arising problems and implications of anthropogenic activities. The geologic formations of the study area are the underlying clayey/shaley Imo Shale overlain by the Nanka Sands in Anambra State, and the Imo State geologic equivalent, the sandy Ameki Formation. Parts of southeastern Nigeria (Fig. 1) have been severely gullied, resulting in colossal losses of human lives and property. The gullies mostly developed where the contributing effects of land use, climate and slope interact [1]. These disasters are exacerbated by myriads of anthropogenic activities. The gully heads/fronts emanate from and are associated with the N-S trending Agulu-Nanka-Ekwulobia-Orlu escarpment in the region. The escarpment consists of the steep east scarp slope and the gentle west dip slope. The gully problems are more pronounced on the scarp slope than on the dip slope. Gully sites at Agulu, Adazi Ani, Nanka, Oko, Ekwulobia, Uga, Umuchu etc. are along the scarp slope, while the dip slope hosts the gully sites of Adazi, Alor, Oraukwu, Nnobi, Abatete and Ideani. Major rivers such as the Mamu, Uchu, Idemili, Odo and Orashi emanate from both the scarp and dip slopes of the escarpment. The rivers act as eroding agents wreaking havoc on the unconsolidated geologic units of the area.
The development of old communities into urban/semi-urban centres with social amenities of power and water supply, together with population growth due to increased socioeconomic activities, all impact the land, exposing the ground surface to gullying, especially on the scarp slope of the Awka-Orlu escarpment. These slopes are underlain by young sedimentary materials and sandstone boulders that support and carry the load. These are geotechnically formed, and inherent in them are fractures, joints, faults, folds and grabens. The pore spaces can be syn-depositional or post-depositional, with resultant high porosity and permeability. The interconnectivity of the pore spaces also plays a major role in groundwater movement and gully erosion development. The hydrodynamic properties of water in huge aquifers also cause further problems. The effluents end up in the rivers, lakes, dry valleys and marshlands. Added to these geotechnical problems is the overhead pressure due to continued vertical and horizontal loading from deforestation and urbanization by anthropogenic agencies. The high average rainfall of about 2000 mm has its own surficial impact, and some of the water infiltrates directly. These problems are so little appreciated by government that infrastructure such as roads is constructed without drainage, thereby facilitating gully development and growth. Anthropogenic activity such as agriculture also induces gully erosion, with resultant soil loss and surface water siltation [2,3]. This study is aimed at evaluating the consequences of anthropogenic activities on the unstable geologic environment of southeastern Nigeria. Appropriate land management techniques are important in the study area, where the geotectonic, geologic, and geohydrologic characteristics of the region make many areas within it susceptible to gully erosion [4]. Areas prone to gullying and landslides should be delineated, and anthropogenic activities that trigger and facilitate gully/landslide development should be avoided in such areas.
Geotechnical Problems and Implications of Anthropogenic Forces
Anthropogenic activities have greatly exacerbated the soil and gully erosion incidences and landslides in the study area. It is hereby predicted and feared that in the not-too-distant future, major landslides may occur in some of the presently heavily built-up and highly populated urban and suburban communities now perched at the apices and slopes of some of the highly faulted and fractured cuestas/escarpments in the areas. Such cuestas include the Agulu-Nanka-Oko-Ekwulobia, Alor-Oraukwu-Adazi, Osina-Orlu etc. Many of the built-up hills in the study area are situated on the scarp or dip slopes of these escarpments. Anthropogenic activities and urbanization events have not considered the implications of these geotechnical weaknesses and aquiferous horizons in the planning and design of the consequent infrastructure. The fear is that if major landslides were to occur on these endangered platforms, there would be calamitous loss of lives and property. Specific towns like Agulu, Nanka, Oko, Ekwulobia, Adazi, Alor, Oraukwu, Okwudor, Okigwe and Orlu are severely affected by the geotechnical characteristics of the environmental platforms. The oil/gas prospecting and production activities in parts of the Niger Delta, the consequent and extensive vehicular movements, and the widespread atmospheric pollution and water contamination from acid rains equally endanger southeastern environments. They contribute to the total breakdown of infrastructure such as roads, water schemes, houses, monuments etc. There are fractures (joints and faults) on the slopes of the escarpments/cuestas on which buildings are erected. With increasing urbanization, more pressure will be mounted on the fractures, causing more instability, slope failures and landslides.
Materials and Methods of Study
The study involved field geologic mapping, laboratory work and the identification of gully and landslide sites in the study area. At each gully site, coordinates and elevation were measured using a Garmin GPS. Samples for geotechnical analysis were collected from the gully base because of the instability of the gully walls. The dimensions of the gullies were measured only at accessible sites using tape and calibrated rope. The present Digital Elevation Model of the area was produced using ArcGIS software.
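Given that the gully dimensions were taken with tape and calibrated rope, a first-order estimate of the eroded volume follows from treating the cross-section as a trapezoid. The sketch below is illustrative only: the dimensions and the bulk density are hypothetical values, not measurements from this study.

```python
def gully_volume_m3(length_m, top_width_m, bottom_width_m, depth_m):
    """Eroded volume assuming a uniform trapezoidal cross-section along the gully."""
    cross_section_m2 = 0.5 * (top_width_m + bottom_width_m) * depth_m
    return cross_section_m2 * length_m

# Hypothetical gully: 500 m long, 40 m wide at the rim, 5 m at the base, 30 m deep
volume = gully_volume_m3(500.0, 40.0, 5.0, 30.0)
mass_t = volume * 1.6  # assumed bulk density ~1.6 t/m^3 for loose sands
print(f"eroded volume: {volume:,.0f} m^3 (~{mass_t:,.0f} t of sediment)")
```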
Results and Discussion
The depth of the gullies in accessible areas ranges from 2.5 m to 90 m (Fig. 2). A greater number of gullies in an active stage of development was observed in the area, with depth increasing with the seasons. The concentration of gully erosion and landslides in the area varies from areas of high hydraulic head to areas of low hydraulic head [5]. Conscious attention was also paid to the effects of climate change in recent times vis-à-vis the extensive ecological disasters ravaging the study area, namely the high temperatures, the excessive aridity during the dry season, and the impacts of high temperature on the exposed pedologic and geologic features that have been denuded through deforestation. Natural and anthropogenic factors work in tandem to initiate and facilitate gully and landslide development and growth [6]. The high rainfall intensity and amount of about 2000 mm play a major role in the removal or loss of soil sediments, facilitating erosion. The rate of soil loss in the area ranges from 9.20 to 10.16 ton/ha/yr [7]. The eroded loose soil ends up in the lakes, streams and rivers, resulting in the siltation and pollution of these surface water bodies, while the elevation of the area is gradually lowered. The DEM reflecting the current elevation of the area was produced using GIS software (Fig. 3a, b).
Fig. 4. Geologic map of the study area with the concentration of gullies on the Nanka Sands.
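The soil-loss rate of 9.20-10.16 ton/ha/yr quoted above can be converted into an equivalent rate of surface lowering once a bulk density is assumed; the 1.5 t/m^3 used below is an illustrative assumption, not a value measured in this study.

```python
def lowering_mm_per_yr(loss_t_per_ha_yr, bulk_density_t_per_m3=1.5):
    """Convert an areal soil-loss rate into an equivalent surface-lowering rate."""
    mass_t_per_m2 = loss_t_per_ha_yr / 10_000.0            # 1 ha = 10,000 m^2
    return mass_t_per_m2 / bulk_density_t_per_m3 * 1000.0  # m -> mm

for rate in (9.20, 10.16):
    print(f"{rate} t/ha/yr -> {lowering_mm_per_yr(rate):.2f} mm/yr of lowering")
```

Under this assumption, the average lowering is well under a millimetre per year, which underlines that gullies tens of metres deep reflect highly localized rather than uniform erosion.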
Implications of Geology on Gully Erosion and Landslide Problems
The study area is mainly located in the Anambra Basin, Nigeria, and is characterized by the Santonian uplift [7]. Earlier authors such as [8-10] gave a detailed account of the geology of the Anambra Basin. Recent contributions by [5] and [12] also highlighted the various geologic formations prevalent in the basin. The study area is predominantly covered by the loose, friable, unconsolidated Nanka Sands (Eocene) and underlain by the impermeable Imo Shale (Palaeocene). The Nanka Sands constitute the Agulu-Nanka-Ekwulobia-Orlu escarpment, which is significant in the hydrogeological and geohazard studies of the area. The unconsolidated, loose, friable and uncemented properties of the Nanka Formation cause gully erosion sites to concentrate on it more than on the other geologic formations (Fig. 4). The geology of the Nanka and Agulu erosion valleys exhibits large boulders that serve as possible supports on which the cuesta ridges lie. The high rate of percolation of rainfall and the differential deposits also facilitate erosion development.
During the rainy season, massive infiltration of water into the subsurface geologic units occurs. This high infiltration raises the groundwater levels in aquifers, resulting in high pore water pressures and groundwater discharges to the surface and at the sides of river or stream valleys. The groundwater level rises and pushes up pore water pressures beneath the ground surface, resulting in effluent seepages and slope stability problems, and hence the development of gullies and landslides.
The Role of the Escarpments in Gully Erosion Development
Topographically, the study area consists of elevated and low-lying landscapes. The elevated areas form an escarpment or cuesta that runs northwest-southeast as a low asymmetrical ridge [13]. The escarpment has a crest that stands above 350 m above mean sea level. The cuesta is characterized by a steep eastward scarp slope and a gentle westward dip slope. It forms both surface water and groundwater divides, thereby facilitating surface water and groundwater flows as well as enhancing gully erosion and landslide development and growth [5]. Most of the gully sites in the area emanate from the slopes of the escarpment and attain maximum depth downslope [13]. The increased mass of water flowing downhill from the crest of the cuesta during the rainy season raises the energy of the moving mass beyond the threshold binding energy of the composite soil particles, thereby continuously detaching soil particles, transporting them and depositing them at the foot of the cuesta, resulting in the formation of gully erosion [14]. The area is drained by a network of springs, streams, rivers and lakes flowing out from both flanks of the escarpment with heavy loads of dislodged/eroded sediments and emptying into the River Niger and the Anambra River.
The Agulu-Nanka-Oko-Ekwulobia-Orlu Cuesta and Gully Complex as a Case Example of Potential Urbanization-Failure/Collapse
The Agulu-Nanka cuesta is bordered in the east by the steep scarp slope, which trends towards and ends at the Odo River valley, and towards the west by the gentle dip slope, which descends and terminates in the Idemili River valley. Subtending this cuesta is a huge regional aquifer from which effluent seepages of groundwater issue from the slope sides and flow as springs into valleys, streams or rivers; they combine with surface water flows to cause gullies and landslides. The major surface water flows include the Odo River, which flows northwards into the Mamu River, which in turn flows westwards into the Anambra River that discharges into the River Niger; and the Idemili, Orashi and Njaba Rivers, which flow westward and also discharge into the River Niger. Both the Idemili River and the Anambra River, as well as their tributaries and distributaries, carry heavy loads of clayey/silty/sandy sediments that are transported from the uplands of the Nanka/Ameki Formations and Imo Shale down into the River Niger, and via the Imo River into the Niger Delta and eventually into the Atlantic Ocean. Along the flow systems of these major rivers are heavy deposits of sands that are commercially mined and sold for construction purposes; much of the heavy loads of sand deposits presently being dredged in the River Niger come from these denuded cuestas as products of the erosion process. These erosional features, in the form of rills, channels, gullies and chasmic depressions, some of which are of canyon proportions, occur annually along this cuesta/escarpment in various grades and dimensions. The population density in the urban and peri-urban towns of Agulu, Nanka, Ekwulobia etc. is very high; deforestation and urbanization programmes are continuously ongoing at alarming rates; market and agricultural resources have busy schedules; vehicular movements and transportation activities are intensive etc. The infrastructural and development activities by governments and the people are not properly planned, executed or supervised; and monitoring and maintenance programmes are non-existent.
Despite the above potential environmental instability, governments and people have been carrying out massive development projects in these areas. The lands and environment along the Agulu-Nanka cuesta are geologically and geotectonically quite unstable and should be regarded seriously as such in planning and executing development projects. Residential houses, commercial centres, road networks and drainages stretch far down into the lowlands/valleys. Buildings are seen perching perilously at the edges of gullies, ready to be thrown into the valley at the least movement of adjacent gullies or landslides (Plates 1 and 2). Some of the frontal lengths of the aggressive fingers of gullies are advancing into the urban heartlands at rates of between 2 and 3 m per year, while widening by about 3 to 5 m annually, approaching the major roads or having cut them in places.
Structural Control of Gully Erosion and Landslides
Tectonically associated structures such as fractures (joints, faults and grabens) also have a stake in gully erosion formation and development in southeastern Nigeria. The implications of the neotectonic features and structural effects on gully erosion initiation were outlined in [5], [14], [15], [16], [17], [18] and [19]. These neotectonic features originate from the Atlantic Ocean in a NE-SW direction, exhibiting zones of potential seismic effects and therefore areas of potential crustal instability within the total environment. These geologic structures form planes of weakness or pressure-release spots along which future movements, slides, heaves or platform failures may occur whenever some energy/pressure event triggers such an action. Such pressure actions may come from natural causes or anthropogenic effects in the immediate environment. The deforestation and urbanization impacts, and the hanging hills, slopes, lowlands and valleys found all over the erosion-prone areas, are evidence of these planes of weakness that can trigger gullies and landslides within the study area.
Conclusion and Recommendations
Governments at all levels and the people must consider the impacts and implications of their developmental activities on these cuestas. The prevalent anthropogenic activities may precipitate tragic disasters in which many communities may lose more of their lands and infrastructure to wide-scale gullies and landslides, thereby posing potentially debilitating damage to the socioeconomic growth of the area.
Drastic control measures should be taken to checkmate the problems of gully erosion and landslides that annually ravage the environment, destroying socioeconomic resources. The immediate measures to be taken may include all or a combination of the following:
(a) Proper control measures involving a Total Water Catchment Management Strategy (TWCMS) should be employed. Environmental and Engineering Sciences professionals with proper and good civil engineering plans/designs ought to be used in executing projects to check floods, soil and gully erosion and landslides in the southeastern states of Anambra and Imo.
(b) Below the extensive stretch of the escarpment is a huge regional aquifer of good groundwater quantity and quality. The surface waters in nearby lakes, streams and rivers are polluted/contaminated by inflows from erosion and floods. It is equally unfortunate that the people of the towns and communities of these areas lack potable water supplies for domestic purposes. It is suggested that a network of giant boreholes be located at strategic points to tap the aquifer for community uses; this will lower the pressure heads and reduce groundwater discharge into gully faces.
(c) Funded research on the locations of unstable cuestas/escarpments in the towns and communities is also recommended, as is replanning/redesigning the way and manner in which infrastructural development projects are executed, to prevent possible failures of such structures. The already existing infrastructure and landscape should be properly monitored and managed to prevent possible collapse or failures in the environment.
(d) The States must discourage bad agricultural and civil engineering practices that cause erosion in badlands, such as deforestation, unplanned agricultural practices, urbanization, bad road and drainage network construction, blocking of drainages with solid wastes and building on wetlands.
(e) Laws against environmental destruction should be enacted and strictly enforced by government. | 2019-04-26T14:22:09.090Z | 2016-07-22T00:00:00.000 | {
"year": 2016,
"sha1": "1582751a2e69766f14894ccd94bc65f51b0ab933",
"oa_license": "CCBY",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ijepp.20160404.12.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6ce54ea294ff5e98d356270887ddfcd28852e61f",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
210044217 | pes2o/s2orc | v3-fos-license | Factors Determining Work Arduousness Levels among Nurses: Using the Example of Surgical, Medical Treatment, and Emergency Wards
Introduction
Staff shortages among nurses have been severely felt in most countries around the world for many years. In Poland, this problem is particularly visible due to the lowest nursing employment rate per 1000 inhabitants among 28 EU states and the high rate of leaving the profession. The average age of Polish nurses has been constantly growing for several years: in 2016 it was 50.79, while in 2008 it was 44.19. These data confirm that young nurses are the first to leave the profession. Diagnosis of the working conditions and psychosocial burden level among nurses should be subject to detailed analysis, so that leaving the profession will not additionally deepen the difficult staffing situation in health care.
Aim
The aim of the study was to identify factors affecting the assessment of work arduousness levels among nursing personnel.
Materials and Methods
The study was conducted among 573 nurses working on surgical, medical treatment, and emergency wards. A standardized job evaluation questionnaire was used to conduct the survey.
Results
(1) Stress levels depended on the ward in which the surveyed person worked. Nurses working in the emergency ward assessed their conditions the best, with the lowest stress. The average general result in this group was 38.1 points versus 46 and 45.7 points in the surgical and medical treatment wards, respectively. (2) At the level of the whole studied group, neither the nurses' age nor their work experience significantly differentiated the total assessment of working conditions. Differences in the assessment of work arduousness across age categories occurred at the level of individual wards. In the surgical ward, younger employees were characterized by higher stress levels, especially in the area of arduousness (p=0.0165). In the medical treatment wards, there was a similar age-to-stress ratio for the area of organizational uncertainty (p=0.0063). With age, employees of the emergency ward became more indifferent to stress related to unpleasant working conditions (p=0.0009), while stress related to organizational uncertainty increased (p=0.0495). (3) Nurses working in managerial positions rated the overall stress related to their job higher than other nurses did. They were particularly at risk for burdens related to haste, responsibility, and organizational uncertainty. The average overall assessment of work arduousness for this group was 44.6 points, while for surgical nurses it was 37.2 points. Correlations between the performed function and stress levels were found for almost all of the studied work characteristics (except for hazards). (4) Education had a statistically significant impact on the perception of working conditions in several dimensions. The people with the lowest education evaluated working conditions the best. The difference between people with a higher education and those with a secondary education with a specialization was definitely smaller and often nonexistent. Education differentiated the work arduousness assessment depending on the ward. The most statistically significant correlations were obtained in surgical wards, and the least in medical treatment wards.
Conclusions
(1) The study results indicate the need to diagnose problems related to work conditions in the context of occupational stress within individual hospital wards.
To limit employee turnover, nursing staff managers should approach the issue of improving working conditions individually for each ward, due to differences in the nature of the work and level of stressogenicity. (2) In each hospital ward, employees at different stages of their career are sensitive to the psychosocial burden resulting from different work characteristics. These areas should be thoroughly diagnosed and the burden minimized to prevent departures from the profession—at early stages of the professional career as well as among experienced personnel. (3) Nurses working in managerial positions should receive the necessary substantive support, due to the higher stress burden associated with greater responsibility.
Introduction
Staff shortages among nurses have been severely felt in most countries around the world for many years. In Poland, this problem is particularly visible due to the lowest nursing employment rate per 1000 inhabitants among 28 EU states, which in 2015 was merely 5.2. In comparison, in Denmark, it was 16.7, in Germany 13.3, and in France 9.9, while the average for 35 OECD member countries was 9.0 [1].
Research by the Supreme Council of Nurses and Midwives indicates that the shortage of nurses in Poland is much higher than the available studies show. This is because the established minimum employment standards are set on the basis of registered health services, taking into account the number of beds and the specificity of the ward, rather than the actual needs of the medical facility. The result is a single nurse working the night shift, or fewer personnel on Sundays and holidays. This translates into a greater workload due to the increased number of responsibilities per person [2].
Another unfavorable phenomenon observed in this professional group in Poland is the gradual increase in the average age of registered nurses. In 2008, it was 44.19, while in 2016 it was 50.79, an increase of 6.6 years in only 8 years. The majority of registered nurses were in the 41-60 age group (66.51%), while nurses in the youngest age group, from 21 to 35, constituted only 5.53% of all registered nurses [3].
The successive increase in the average age of nurses and the declining percentage of nurses in the youngest age group may indicate several phenomena, including qualified graduates not taking up employment in the profession, financially motivated emigration after graduation, and leaving the profession early in one's career to retrain for another occupation. The scientific literature has identified a number of factors that are the most frequent reasons for nurses leaving the profession [4][5][6][7][8].
They include factors such as low wages, few career development opportunities, shift work, health problems, and the psychosocial burden in the workplace.
Psychosocial burden, in the case of nurses, includes, among other factors, physical work overload, fear of infection, work complexity, ambiguity of the role played in the organization, unpleasant working conditions, conflicts, employment insecurity, and negative family relations resulting from work-home conflict. Excessive workload is compounded by the fact that in Poland, as in many other countries in the world, this profession is strongly feminized. As a result, particularly physically burdensome work, such as lifting patients, cannot be delegated to physically stronger men. In addition, nursing aids, who could perform nursing activities around the patient, are not employed in Poland. Therefore, in addition to providing medical care, nurses perform a number of activities such as patient hygiene and washing, changing bed linens, and transporting patients for tests [9]. The high physical burden and resulting fatigue additionally intensify stress and the fear of making a mistake, which in the case of nurses can have far-reaching consequences (e.g., giving the patient the wrong dose of a drug). Furthermore, the nature of the work increases the number of factors that are hazardous to nurses' health, including those that require lifting, walking with a heavy load, and taking a forced body position [10,11].
The complexity of nursing work stems first and foremost from the necessity to perform many activities requiring attention and accuracy, such as administering the right doses of medicines, carrying out measurements (e.g., blood pressure), and handling medical devices, all interlaced with hard physical work. A significant source of stress is conflict at the workplace, including conflicts with other nurses and superiors, with frequent cases of bullying [12,13].
The health problems of nurses often originate in psychosocial burdens at the workplace. This is confirmed by numerous scientific studies on the general impact of stress on the physical and mental health of employees, as well as by detailed research confirming, for example, the unequivocally negative impact of bullying on the psychoemotional aspects of a nurse's health and, indirectly, its role as one of the factors causing burnout and thereby affecting nurses' general health [14].
In this study, we attempted to identify the most arduous and most frequently occurring burdens at the workplace of nursing personnel. Using statistical analysis, we diagnosed the effect of particular factors, such as age, duration of professional experience, and position held, on the intensity of psychosocial burden perception, and compared the differences in the work arduousness assessment depending on the respondent's place of work, that is, the ward where he or she worked.
Materials and Methods
The study was conducted from September 2017 to December 2017 in Poland in the Podlaskie Voivodeship. It included 573 people working as nurses at inpatient health care facilities. Participation in the study was voluntary and anonymous. Participants could quit the survey at any level. All procedures were prepared according to the ethics standard approved by the Local Bioethics Committee of the Medical University of Bialystok (ref. no R-I 002/296/2017).
Study Procedure. The study was conducted by a group of experts composed of representatives of nurses and teachers of the nursing profession. These people understood the purpose of the study and knew the specificity of working as a unit nurse at inpatient health care facilities. The study was conducted using a standardized Work Features Assessment Questionnaire developed by Dudek et al. [15]. The questionnaire was developed by a team of Polish researchers and was therefore considered the tool best suited to the specifics of the studied population. The experts thoroughly explained the purpose and meaning of the individual questions to the study participants and then filled out the questionnaire themselves based on the respondents' responses and observations. This way of conducting the study provided an objective assessment of work stressfulness. "Objectivity" in this case means that the assessment was not dependent on the individual stress experienced by the respondent and was the result of assessments made independently by 2-3 experts familiar with the specificity and working conditions of a given position.
Study Group Selection.
The selection of respondents for the research group was based on the register of nurses associated with the District Chamber of Nurses and Midwives in Białystok. The criterion was employment under a contract of employment at a hospital in a medical treatment, surgical, or emergency unit. A randomly selected 10% of the persons meeting the selection criterion were invited to the study.
Description of the Questionnaire and the Applied Measures. A standardized Work Features Assessment Questionnaire for the objective assessment of work stressfulness was used as the research tool [15]. The questionnaire consisted of 34 statements describing particular work features. These statements were rated on a scale of 1 to 5 depending on the feature's frequency, duration, or severity.
Based on the statements of the questionnaire, 10 specific measures were determined (unpleasant working conditions, work complexity, hazards, conflicts, uncertainty resulting from the organization of work, arduousness, haste, responsibility, physical effort, and competition), along with one overall measure of work arduousness. The higher the score, the higher the work arduousness in a given aspect. The results for the individual measures are not directly comparable with each other, because each individual measure (and the overall measure) has a different number of component statements. To allow comparison between work arduousness estimates in different categories, the raw values of the individual measures were normalized to a range of 0-100, with 0 indicating the absence of work arduousness and 100 indicating the maximum work arduousness, as illustrated below.
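As a concrete illustration of this rescaling, the minimal sketch below normalizes a raw measure to 0-100, assuming each measure is the simple sum of its component statements rated 1 to 5; the function name and example values are hypothetical, not taken from the questionnaire itself.

```python
# Minimal sketch of the 0-100 normalization described above, assuming each
# measure is the sum of n statements rated 1-5 (so raw scores span [n, 5n]).
def normalize_measure(raw_score: float, n_statements: int) -> float:
    """Rescale a raw questionnaire measure to 0-100.

    0 corresponds to the minimum possible raw score (all statements rated 1),
    100 to the maximum (all statements rated 5).
    """
    min_raw = n_statements * 1
    max_raw = n_statements * 5
    return 100.0 * (raw_score - min_raw) / (max_raw - min_raw)

# Example: a 6-statement measure with a raw score of 21
print(normalize_measure(21, 6))  # 62.5
```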
Based on the standards set out in the questionnaire, individuals with high stress levels due to work arduousness in the different areas were identified. For the overall scale of work arduousness, three categories were distinguished: low, medium, and high.
Statistical Methods.
Statistical analysis was performed using appropriate statistical tests, by means of which the statistical significance of the considered relationships was verified. When studying the effect of a nominal factor (such as the ward) on work arduousness values, descriptive statistics were determined in the compared groups, and differences in the distribution of the measures between the groups were assessed using the Mann-Whitney test (for two groups) or the Kruskal-Wallis test (for three or more compared groups). When studying the effect of a numerical feature (e.g., age or work experience) on work arduousness values, Spearman's rank correlation analysis was used. The percentage of people with a high level of work arduousness in particular professional areas relative to the grouping factor was compared using the chi-square test.
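The sketch below illustrates how the named tests can be run with SciPy; all data values are fabricated placeholders chosen only to mirror the structure of the analysis, not results from the study.

```python
# Illustrative sketch of the statistical tests named above, using SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
surgical = rng.normal(46, 8, 50)      # arduousness scores by ward (fabricated)
medical = rng.normal(45.7, 8, 50)
emergency = rng.normal(38.1, 8, 50)

# Kruskal-Wallis test for three or more groups (nominal factor vs. numeric measure)
h, p_kw = stats.kruskal(surgical, medical, emergency)

# Mann-Whitney U test for two groups
u, p_mw = stats.mannwhitneyu(surgical, emergency)

# Spearman's rank correlation for numeric features (e.g., age vs. arduousness)
age = rng.integers(21, 62, 150)
score = np.concatenate([surgical, medical, emergency])
rho, p_sp = stats.spearmanr(age, score)

# Chi-square test for proportions of high-arduousness respondents per ward
contingency = np.array([[40, 10],   # surgical: high / not high
                        [35, 15],   # medical treatment
                        [25, 25]])  # emergency
chi2, p_chi, dof, _ = stats.chi2_contingency(contingency)
```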
Results
The study included 573 respondents working as nurses. The vast majority of the respondents were women (97%). The average age of the studied group was 38.5 years, with a slightly higher median of 39 years. The youngest employee was 21 years old and the oldest was 61. Over half of the respondents had completed nursing studies, one-fourth had a secondary education with a specialization, and one-fifth had a secondary education. The average duration of work experience was 15 years, with a slightly lower median of 14 years. Work experience ranged from one to 41 years. The majority (three-fourths of the respondents) worked as unit nurses. Every tenth nurse was employed as a surgical nurse, and every twentieth held a managerial position. The percentage distribution of employees between the wards (surgical, medical treatment, emergency) was almost even.
On the basis of the obtained results, a ranking of work arduousness measures, presented in Figure 1, was prepared. The elements of work such as hazards, work complexity, and haste were defined as the most arduous, and the least problems occurred in such categories as conflicts or competition.
Next, based on the standards set out in the questionnaire [15], individuals with high stress levels due to work arduousness in the different areas were distinguished, which made it possible to determine the burden intensity of particular elements of work. The ranking of the elements of work in terms of the frequency of occurrence to a degree causing high stress levels is shown in Figure 2. Most often, high levels of stress were caused by work complexity, unpleasant working conditions, and haste, and least often by arduousness and responsibility.
In the case of the overall scale of work arduousness, three categories were distinguished: low, medium, and high. According to such a classification, more than two-thirds of the respondents considered their work to be very arduous, and only one in twenty to be easy.
The work arduousness values were compared in terms of the ward on which the respondents worked. The results of the comparison lead to an unambiguous conclusion about the significant impact of this factor on work assessment. For all areas in which work arduousness was assessed, the differences between wards were statistically significant. Considering the total work arduousness levels, the emergency ward stood out (the average level for the general result in this group was 38.1 points versus 46 and 45.7 points for the surgical and medical treatment wards, respectively). Detailed data are presented in Table 1, which shows that the emergency ward was characterized by the lowest stress levels in all areas. In all wards, features such as hazards, haste, and work complexity were rated as the most arduous, whereas competition, conflicts, and unpleasant working conditions were rated as the least arduous.

The relationship between the ward and work arduousness occurrence was also compared in terms of the incidence of people who felt high burden levels in individual areas. In this analytical approach, we also found highly statistically significant differences between the wards for almost all the considered areas (with the exception of the work complexity category). Analyzing the results in Table 2 in detail, it can be stated that the smallest percentage of people experiencing high work arduousness occurred in the emergency ward and the largest in the surgical ward. The most intense features in all wards were haste, unpleasant working conditions, conflicts, and organizational uncertainty. Additionally, organizational uncertainty occurred most frequently in the case of medical treatment wards, and hazards in the case of surgical wards. The least intense were responsibility and arduousness and, only in the case of surgical wards, physical effort.

The analysis of the variation in work arduousness assessments across the wards concludes with the distribution of the descriptive categories of the arduousness scale in the compared groups, listed in Table 3. Differences in the assessment of stressful situation occurrence between respondents from individual wards were quite significant. For example, on surgical wards, as many as 81% of the employees assessed work as highly stressful, whereas on the emergency wards just over 50% gave such an answer. Differences in the distribution of the classification of stress levels at work between the considered wards are statistically significant.
The effect of age on the assessment of stress levels at work in particular areas and on the overall result was examined. The analysis consisted of determining Spearman's rank correlation coefficients between age and the numerical measures of stressful situation occurrence, determined on the basis of the standardized questionnaire. The analysis was carried out both at the level of the entire population and when controlling for ward type, because this factor may affect the observed relationships (as previously noted, stress levels depended on the type of ward in which the respondents worked).
At the level of the entire surveyed population, very weak correlations with age were found in only two areas: haste and physical effort. The stressfulness of haste accompanying work increased with age, and the stress associated with physical effort decreased with age. However, both of these correlations had negligible strength (Table 4), which means they have almost no practical significance.
An alternative form of analysis was also conducted. We divided the respondents into four age groups, presenting the descriptive statistics values in these groups and assessing the differences between them using the Kruskal-Wallis test. This analysis ignored age differences within the created groups, which may lead to results different from those of the correlation analysis. As shown in Table 5, the created groups were quite numerous, which allowed for reliable analyses.
Analysis of work stressfulness in relation to age groups led to distinguishing one highly significant result; namely, age differentiates the assessment of stress resulting from physical effort. The stress levels associated with this factor were much higher among employees aged up to 39 years (median 50 points) than among older respondents (median 37.5 points). Other burdens were not correlated with age in any way (Table 6).
A similar correlation analysis was performed for each ward. It turned out that this was the right approach, because we found that there were more statistically significant relationships within individual wards. These were also dependencies of slightly greater strength than the two correlations found at the level of the entire population. The analysis results are presented in Table 7. Younger employees in the surgical wards were characterized by higher stress levels. This pertains to areas such as arduousness, responsibility, competition, and the overall result. On the medical treatment wards, there was a similar (to the surgical ward) age-to-stress ratio for the areas of organizational uncertainty and physical effort. With age, employees of emergency wards became more indifferent to stress related to unpleasant working conditions, hazards, and physical effort, while stress related to organizational uncertainty increased.
Work experience was very strongly correlated with age (Spearman's rank correlation coefficient between these two features was R = 0.94), so it could be expected that the duration of work experience would affect the work stress assessment in a similar way as age. The results in Table 8 confirm this assumption. Only two statistically significant, but very weak, correlations were found at the level of the entire population: between work experience and the stressfulness of haste (those with more work experience were more susceptible to this factor) and the level of stress induced by physical effort (here, for a change, more work experience was a positive factor in the "immunity" of an employee to physical effort). Both of these correlations had very little strength.
Taking into account the specifics of individual wards in the analysis leads to much more interesting results. Work experience was the factor that had the most significant impact on the stress levels of surgical ward employees. All the distinguished, statistically significant correlations had a negative sign, which means that people with more work experience were more resistant to the occurrence of stressful situations at work. Work experience affected the stress caused by work burdens, responsibility, and the overall result the strongest. Among employees of medical treatment wards, more work experience had a positive effect (stress levels decreased) in only two areas: organizational uncertainty and physical effort, whereas among emergency ward employees, longer work experience increased the stress levels caused by conflicts, organizational uncertainty, work arduousness, and haste, and decreased the stress associated with unpleasant working conditions and physical effort (Table 9).
The relationship between the held position and the assessment of work arduousness was analyzed. Table 10 presents the values of basic descriptive statistics and the significance assessment of differences in the stress levels of employees depending on the performed function. Starting the interpretation from the general stress level, we noted that it was higher among nurses who performed managerial functions and among unit nurses than among surgical nurses. Nurses in management were particularly vulnerable to stress associated with haste, work complexity, organizational uncertainty, and work arduousness. Very weak correlations were found in the case of unpleasant working conditions, conflicts, and competition. The only area in which we found no effect of the held position on stress levels was the occurrence of hazards.

Because ward type was a factor strongly differentiating the assessment of working conditions, the analysis of the impact of education on working conditions was done separately for each ward type. To assess the significance of the differences between the groups, the Kruskal-Wallis test was used.
In the group of nurses working in surgical wards, education had a statistically significant impact on the perception of working conditions in several dimensions. The people with the lowest education evaluated working conditions the best. The difference between people with a higher and those with a secondary education with a specialization was definitely smaller, and often assessments of working conditions in these two groups were almost identical (Table 11).
Among people working on medical treatment wards, the impact of education on the assessment of working conditions was not as pronounced. Only the assessment of stressogenicity related to responsibility and physical effort differed in a statistically significant way due to the education level. In the first dimension (responsibility), people with higher education had the highest stress levels. In the second dimension (physical effort), the differences were not as clear or consistently ordered, so it is difficult to interpret these results unambiguously.
Among the nurses working on the emergency ward, education differentiated the assessment of unpleasant working conditions, work complexity, and uncertainty resulting from the organization of work. In the first two areas, the higher a nurse's education, the worse the assessment of that particular dimension. In terms of organizational uncertainty, the worst assessments were obtained from nurses with a secondary education with a specialization.
Finally, a multivariate analysis was done. A regression model was constructed for the overall assessment of work conditions, in which, apart from the nominal factors of ward and education, the potential impact of age and work experience was also considered. These variables had numerical values, so they were treated as continuous variables. The models also considered the 2nd-degree interactions between all factors. Using the stepwise regression procedure, the optimal model was selected. This model included only the nominal factors: ward (p = 0.0000***) and education (p = 0.0001***); the interaction between them was also significant (p = 0.0185*). The nurses' work experience and age did not differentiate the total assessment of working conditions in a statistically significant way. Since only two nominal factors remained in the model, the results can be described in terms of analysis of variance, presenting the values of descriptive statistics in the compared groups. Table 12 presents average values and standard deviations. To facilitate the interpretation of the results, a graphic presentation (Figure 4) of group averages with a 95% confidence interval and a typical range of variation was also included. Analyzing the distribution of group averages, we can state that:
(i) People with a higher education assessed working conditions more negatively.
(ii) Nurses working in the emergency ward assessed their conditions the best and assessed their stress levels the lowest.
(iii) The effect of education on the working conditions assessment depended on the ward, with the largest differences in assessments occurring on the surgical ward.
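For readers who wish to reproduce this kind of two-factor model, the sketch below fits an ordinary least squares model with a ward-by-education interaction using statsmodels. The data are fabricated placeholders, not the study's data, and the variable names are assumptions for illustration only.

```python
# Hedged sketch of the final two-factor model with interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "ward": rng.choice(["surgical", "medical", "emergency"], n),
    "education": rng.choice(["higher", "secondary_spec", "secondary"], n),
})
# Placeholder outcome: overall arduousness score on the 0-100 scale
df["arduousness"] = 40 + rng.normal(0, 8, n)

# Main effects of ward and education plus their 2nd-degree interaction,
# mirroring the model retained by the stepwise procedure.
model = smf.ols("arduousness ~ C(ward) * C(education)", data=df).fit()
print(model.summary())
```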
Discussion
Numerous studies conducted in different countries have shown that working as a nurse is characterized by clearly higher stress levels than the average stress levels for the employed population [16]. Poland has one of the lowest nursing employment rates (5.2 per 1000 residents) in Europe and a high rate of leaving the profession. The average age of Polish nurses has been constantly growing for several years: in 2016 it was 50.79, while in 2008 it was 44.19. These data confirm that young nurses are the first to leave the profession. The situation is similar in other European countries, such as Italy [17], Finland [18], and Sweden [19]. Many studies from different countries have shown that, in addition to such factors as low wages or labor migration, working conditions have a huge impact on the number of nurses leaving the profession [20][21][22][23][24][25].
In this article, we decided to examine which factors affect the assessment of work arduousness among the nursing staff and whether there are determinable correlations between them. There have been many studies on stress [26] and recommendations on what measures should be taken to minimize its negative effects. The majority of studies treated nurses as a homogeneous group, regardless of age, work experience, place of work, and sex, or were conducted within a specific ward, such as intensive care or emergency. For our study, we selected three types of hospital departments (surgical, medical treatment, and emergency wards) to compare whether the results within individual wards differ from those obtained for the entire studied population. This was to verify whether the specific working conditions and characteristics of individual wards affect the assessment of work stressfulness.
The study results showed that, at the level of the whole surveyed group, the nurses' age, education, and work experience did not significantly differentiate the total assessment of working conditions. Differences in the assessment of work arduousness levels in different age categories occurred at the level of individual wards. Similar results were also obtained in the case of work experience, education, and the position held. All these factors correlated differently depending on the ward, and in many cases, a correlation occurred within a ward but not in the entire studied population.
Similarly, research conducted in Iran showed a lack of correlations at the level of the whole group between demographic factors (such as age, sex, education, work experience) and the level of work satisfaction, which was strongly related to stress levels [27]. Studies conducted in Sudan in public hospitals in Khartoum State [28] and in private and public hospitals in Amman, Jordan [29] showed that the psychosocial burden felt by nurses varied depending on the ward, similarly to our results. Research on work satisfaction among nurses in Great Britain also indicated the need to conduct analyses at the level of individual hospital departments [25]. Our research showed not only differences in the psychosocial burden depending on the ward, but also other correlations between age, work experience, and education and psychosocial factors occurring within various ward types.
Keeping in mind the earlier observation that the youngest nurses most often leave the profession, we examined how age and experience affected the nurses' perception of psychosocial burdens. The obtained results indicate that with age, employees become more immune to certain stressors. On the surgical wards, the most noticeable for young employees were arduousness, responsibility, and competition; on medical treatment wards, uncertainty resulting from the organization of work and physical effort; and on the emergency ward, unpleasant working conditions, hazards, and physical effort. A similar effect was observed in the case of work experience. Employees with the least work experience felt burdens almost identically to the youngest employees, except on the emergency ward, where no correlations were found in the case of hazards, while inverse correlations were found in the case of uncertainty resulting from the organization of work and haste; these two factors increased with experience among emergency ward employees. This can be explained by the fact that with age, and thus with increased experience, some factors become less and less stressful, probably because employees become accustomed to certain working conditions and no longer perceive them as negatively as they did initially. Research conducted at the Medical University of Gdańsk among nurses employed in hospitals [30], outpatient clinics, and social care homes showed, similarly to our results, the highest levels of psychosocial burden among the youngest nurses. Studies carried out in Italian hospitals [31] showed that good workplace conditions had a positive moderating effect on the age-related decline in work efficiency, whereas a study conducted in four hospitals in Poland showed that nurses over 40 had the highest emotional exhaustion rate [32], which is quite different from our study results.
An important result of the conducted research is also the identification of burdens that did not depend on the nurses' age or experience. Across all wards, work complexity, conflicts, and haste were felt regardless of age, and hazards regardless of experience.
Organizational uncertainty, or a feeling of a threat of job loss, is a more stressful factor than the loss of work alone [33]. This factor intensified with age and work experience. This was confirmed by our research in the case of the emergency ward, and a completely different result was obtained for the medical treatment ward, where this factor decreased with age.
The psychosocial burden that nurses experience in their daily work may be the cause of their mistakes (incorrect doses, inappropriate medicines). This fact was confirmed by research conducted in public hospitals in Tehran, which showed a close correlation between the stress experienced by nurses and the number of mistakes made during treatment [34]. Similar results were obtained in India [35], as well as in Canada [36], showing a correlation between stress levels and the number of mistakes, injuries, and negligence.
The psychosocial burden had an impact on general job satisfaction among nurses, which in turn made employees more likely to start looking for other career opportunities [37]. This is particularly important in the context of our results, according to which the assessment of work arduousness by nurses working in managerial positions and those with better education was worse than that of other nurses. Nurses employed in managerial positions in the entire studied population assessed their workplace worse in all the assessed areas, with the exception of hazards. Education differentiated the assessment depending on the ward: it differentiated the most features on surgical wards (work complexity, arduousness, haste, responsibility, and physical effort), whereas on medical treatment wards it differentiated only responsibility and physical effort, and on emergency wards, unpleasant working conditions and work complexity. On the basis of such results, we can draw two conclusions. First, nurses with higher education and those employed in managerial positions assess their working conditions worse, because they are usually burdened with more responsibility and a broader scope of duties. Second, thanks to their education, and usually more work experience, they are more aware of the hazards in the workplace.
One of the limitations of the conducted research is that we show a static picture of the psychosocial burdens in the nurses' workplace. An interesting issue seems to be the impact of global economic changes on the dynamics of psychosocial burdens in the workplace. The negative impact of the global crisis on workers' health was confirmed in the results of research carried out in Northern Ireland [38].
Summing up the results of our research, we found that the factors that influenced the assessment of the nurses' working conditions were the position held and the type of ward in which they were employed. Age, work experience, and education did not have a statistically significant impact on the assessment of nurses' working conditions if we treated the nurses as a homogeneous group. The results changed radically when we conducted analyses within ward types. We then found statistically significant dependencies of the assessment of working conditions on age, work experience, education, and position held. In all the wards, the youngest employees were the most exposed to stress, but the most stressful work features differed between wards. In the surgical ward, these were arduousness, responsibility, and competition; in the medical treatment ward, organizational uncertainty and physical effort; and in the emergency ward, unpleasant working conditions, hazards, and physical effort. Little work experience intensified stress in the surgical wards, especially in terms of arduousness and responsibility; in the medical treatment ward, in terms of organizational uncertainty and physical effort; and in the emergency ward, in terms of unpleasant working conditions and physical effort. Only in the emergency ward were as many as four features (organizational uncertainty, haste, arduousness, and conflicts) perceived worse by employees with more work experience. Higher education was associated with a more critical assessment of working conditions, which, however, differed between the wards. In the surgical ward, people with higher education rated work complexity, arduousness, and haste the worst; in the medical treatment ward, responsibility and physical effort; and in the emergency ward, unpleasant working conditions and work complexity.
In the face of staff shortages among nurses, which are intensifying due to the aging of society, it is necessary to diagnose the factors that increase the stressfulness of work, so that effective actions to counteract them can be taken. Particular attention should be paid to young people with less work experience and better education, as they are the most susceptible to the psychosocial burden and leave the profession most often.
Conclusions
(1) The study results indicate the need to diagnose problems related to work conditions in the context of occupational stress within individual hospital wards.
To limit employee turnover, nursing staff managers should approach the issue of improving working conditions individually for each ward, due to differences in the nature of the work and level of stressogenicity. (2) In each hospital ward, employees at different stages of their career are sensitive to the psychosocial burden resulting from different work characteristics. These areas should be thoroughly diagnosed and the burden minimized to prevent departures from the profession-at early stages of the professional career as well as among experienced personnel. (3) Nurses working in managerial positions should receive the necessary substantive support, due to the higher stress burden associated with greater responsibility.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
"year": 2019,
"sha1": "19938cd9a7a9ce06d56c5d2d0a454ff2302f28e4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2019/6303474",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8c6a0d5aadb5d0cf32017e09a0c51e0307a3b446",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Clinical Strategies in Gene Screening Counseling for the Healthy General Population
The burgeoning interest in precision medicine has propelled an increase in the use of genome tests for screening purposes within the healthy population. Gene screening tests aim to pre-emptively identify those individuals who may be genetically predisposed to certain diseases. However, as genetic screening becomes more commonplace, it is essential to acknowledge the unique challenges it poses. A prevalent issue in this regard is the occurrence of false-positive results, which can lead to unnecessary additional tests or treatments, and to psychological distress. Additionally, the interpretation of genomic variants is based on current research evidence and can accordingly change as new research findings emerge, potentially altering the clinical significance of these variants. Conversely, a further prominent concern regards false assurance in genetic testing, as genetic tests can yield false-negative results, potentially posing a significant clinical risk. Moreover, the results obtained for the same disease can vary among different genetic testing services, due to differences in the types of variants assessed, the scope of the tests, the analytical methods, and the algorithms used for predicting diseases. Consequently, whereas genetic testing holds significant promise for the future of medicine, it poses unique challenges. If conducted without a full understanding of its implications, genetic testing may fail to achieve its purpose, potentially hindering effective health management. Therefore, to ensure a comprehensive understanding of the implications of genetic testing within the general population, sufficient discussion and careful consideration should be given to counseling based on gene test results.
INTRODUCTION
Regular health screening involving comprehensive health check-up tests may detect a disease or pre-morbid condition at an early stage, at which preventative measures or early intervention can be taken. These screening strategies typically consist of comprehensive laboratory, imaging, and endoscopic tests, and can distinguish populations at high risk with a current disease or with a potential risk of developing a disease. 1) In this context, there has recently been an increasing trend in the application of gene tests as a further screening tool to prevent or diagnose diseases more proactively, by identifying those individuals who may be at a genetically high risk before the onset of a disease. 2,3) In tandem with this trend, there is an increasing interest in performing genome tests for screening in the healthy population, and the number and scope of tests conducted are steadily expanding. 3) However, although many health check-up institutions are conducting genetic tests and associated counseling, there are currently insufficient protocols or guidelines in place for counseling healthy people, as opposed to those who have already developed diseases. In the medical field, there is still no clear consensus as to the necessity for gene screening and counseling among the general population. However, given that testing is increasingly being adopted in the healthcare industry, it is necessary to prepare measures, even if we postpone discussing the medical and academic validity and evidence-based justification for such testing.
In this review paper, rather than dealing with opportunistic gene screening that is performed secondarily during indication-based testing, we will instead consider the clinical significance of population gene screening (commercial gene test services) conducted on healthy individuals for preventive medicine in the public health context of the general population, and highlight the necessary precautionary measures for clinical application from the perspective of gene counseling.
The cases described in the paper are primarily taken from the South Korean healthcare system.
THE INCREASE IN GENE SCREENING FOR THE GENERAL POPULATION
With the ongoing growth in the number of genetic tests performed in clinical practice, many secondary genetic findings, unrelated to the designated targets of such tests, have been revealed, leading to considerable discussion regarding the clinical interpretation of specific genes.
To address this issue, in 2013, the American College of Medical Genetics and Genomics (ACMG) provided guidelines and recommendations for reporting actionable conditions based on these secondarily discovered genes, 9) a subsequent update of which was made in 2016. 10) Currently, in the United States, most institutions that conduct genetic analysis report on secondary findings that correspond to actionable risk variants based on the ACMG's guidelines when conducting genome sequencing. 11) These secondary findings serve as opportunistic screening for patients undergoing tests for other purposes, and it is believed that early detection of actionable risk variants can make an important contribution to disease prevention. Additionally, it has been proposed that obtaining such secondary findings would be potentially beneficial and applicable for healthy individuals, and consequently, the concept of genetic health screening among healthy subjects has gradually burgeoned. This should be considered indicative of a move toward broadening the application of genomics in the context of precision and personalized medicine for health promotion and disease prevention among the general population.
The healthcare industry can be driven and influenced by providers, and given the considerable size of the healthcare market, there is believed to be substantial scope for genomic sequence analyses; accordingly, many companies are gradually seeking to offer genome sequencing services to healthy individuals. 12) In the United States, a service called "23andMe" has been a pioneer in this area, 13) and this has in turn had the effect of increasing supplier-induced demand, a phenomenon whereby customers seek more of a given product or service as a result of the healthcare industry increasing the production of that product or service. 14) Accordingly, gene screening services, previously offered only to patients, are now gradually being made available to the healthy population. This supplier-induced demand for gene screening has gained momentum in line with consumer interest and need, along with a BRCA testing-related incident highlighted in an editorial by Angelina Jolie published in the New York Times in 2013, 15) in which she sparked widespread interest in genetic testing by describing her own BRCA test results and the story of her preventive mastectomy. 16) A subsequent study revealed an increase in the frequency of BRCA gene testing as a consequence of this editorial. 17) Given the prominence of these social and industrial phenomena, it is widely anticipated that the demand for gene screening tests for the healthy population will progressively expand in the near future.
In recent years, there has been a shift in emphasis in modern medical care from predominantly therapeutic medicine to a more preventive approach, 18) which accordingly necessitates the identification of individuals who are vulnerable to a given disease or disorder, for which genetic tests are the most appropriate diagnostic tools. In the past, gene tests were recommended based on pedigree analysis of individuals with a high likelihood of vulnerability, those with a family history of a particular disease, or targeted individuals who were suspected to have an underlying genetic condition based on clinical symptoms.
More recently, however, greater emphasis has been placed on identifying vulnerable individuals via genetic testing among healthy population groups, for whom there is no suspicion based on family history and no clinically significant findings. However, the use of genome sequencing as an approach for screening healthy individuals is still considered somewhat controversial. 19,20) This lack of consensus relates to the question of whether the clinical application of information on secondary actionable genes obtained by performing gene tests for a specific purpose (i.e., opportunistic screening) also has relevance for population screening in the context of preventive medicine in healthy individuals.
TYPES OF GENETIC TESTS USED FOR POPULATION SCREENING
1. Dependence on Genome Type
With ongoing advances in gene analysis technology, there are increasing attempts to exploit the information inherent in different forms of biomolecules, including the genome, 21) epigenome, 22) transcriptome, 23) proteome, 24) metabolome, 25) and microbiome, 26) to predict diseases or health conditions.
In this regard, the biomolecular markers most commonly used in population gene screening are single-nucleotide polymorphisms (SNPs) associated with the traits of interest. 27) Genetic test services using SNPs vary in the types and numbers of SNPs selected by different genetic analysis companies, and there are differences in the algorithm technology used to predict traits based on the selected SNPs. 28) Recently, algorithms based on polygenic risk scores, which use multiple SNPs rather than only a few specific SNPs, have been attracting attention. 29) Additionally, there have been increasing attempts to apply multi-omics technology that incorporates a comprehensive combination of diverse genomic information, including that relating to DNA, RNA, and methylation. 30) Moreover, different combinations of genomic data and prediction algorithms are continually being developed, which will inevitably contribute to enhancing trait prediction performance. Accordingly, when selecting a specific genetic test, it is necessary to establish and evaluate the type of genome for which the test was designed and how the algorithm for predicting traits was developed. 31)
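As a rough illustration of the polygenic-risk-score idea, the sketch below computes a score as a weighted sum of risk-allele dosages. The SNP identifiers and weights are hypothetical placeholders; commercial services use far larger, proprietary panels and more elaborate algorithms.

```python
# Minimal polygenic-risk-score sketch: a weighted sum of risk-allele dosages.
effect_weights = {          # per-SNP weight, e.g., log odds ratio from a GWAS
    "rs0000001": 0.12,      # hypothetical SNP IDs and weights
    "rs0000002": -0.05,
    "rs0000003": 0.30,
}

def polygenic_risk_score(genotype: dict[str, int]) -> float:
    """genotype maps SNP ID -> risk-allele dosage (0, 1, or 2 copies)."""
    return sum(w * genotype.get(snp, 0) for snp, w in effect_weights.items())

# Example: one copy at two SNPs, two copies at one
print(polygenic_risk_score({"rs0000001": 1, "rs0000002": 2, "rs0000003": 1}))
# 0.12*1 + (-0.05)*2 + 0.30*1 = 0.32
```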
2. Dependence on the Target Type
Genetic tests can be broadly classified with respect to the category of target. Genetic tests for diseases include those for chronic diseases (e.g., diabetes and hypertension), malignant diseases (e.g., colorectal and lung cancer), and a diverse range of other diseases, including retinal degeneration and endometriosis. 34) A second class of traits comprises individual characteristics, including physical characteristics (e.g., obesity), appetite, satiety, nutrient deficiency, appropriate exercise (for identifying which strength and aerobic exercises might be genetically appropriate), and hair characteristics. 33) The genetic tests for these individual characteristics are being used as accessories for establishing health promotion plans through lifestyle interventions, and attempts are being made to introduce them in the food, cosmetics, and body management industries. 35) Moreover, they are increasingly being employed in healthcare institutions, including obesity and nutrition clinics. 36) Pharmacogenomic genetic testing is intended to determine the type and dosage of drugs based on genetic characteristics, to ensure the effective and safe use of medications. This enables the tailoring of drugs to the individual, based on the evidence that the effect of a specific drug may be insufficient depending on genetic characteristics, or that there is a potential for prominent side effects. 37) A fourth category of genetic tests comprises those that can be used to trace ancestry. Although this type of testing is not actively conducted in South Korea, in countries such as the United States, with populations of multiple races and mixed ethnicities, there is a heightened interest in determining ancestry, and these tests are accordingly widely conducted. 32,38)
3. Dependence on Whether Genetic Testing Takes Place under Prescription or the Guidance of a Doctor in a Medical Institution
Genetic testing is conducted for the prevention, diagnosis, or treatment of diseases, under circumstances in which doctors in medical institutions determine the necessity of testing during the treatment process and prescribe the necessary tests. Typically, tests are therefore conducted after the doctor has thoroughly explained the genetic test and obtained appropriate written consent. Contrastingly, direct-to-consumer (DTC) genetic tests are those that consumers can take directly without visiting a medical institution. 39) A pioneer in this field has been the 23andMe company. 40) Currently, in Korea (as accessed on November 13th, 2023), the "Bioethics and Safety Act" stipulates that such genetic testing may be permitted for personal wellness issues, which include nutrition, exercise, skin, hair, eating habits, personal characteristics (e.g., alcohol metabolism, nicotine metabolism, sleep habits, and pain sensitivity), health management (e.g., osteoarthritis, motion sickness, uric acid level, and body fat percentage), and lineage (ancestry tracing), totaling 56 items. 41,42) Any genetic tests conducted for purposes other than these 56 specified items, along with those for the diagnosis or treatment of diseases, or other medical purposes, can only be performed under the guidance of a medical institution. However, the uses permitted for DTC testing are continuously being amended, and consequently, it is necessary to establish whether a specific type of genetic testing can be legitimately performed by genetic testing institutions other than medical institutions, according to the laws of individual countries. 43)
GLOSSARY FOR UNDERSTANDING POPULATION GENE TEST
To understand the nature of genetic testing, it is beneficial to gain at least a rudimentary knowledge of genes and genetics. Most of the currently developed genetic tests used for gene screening in the healthy population are based on the use of SNP markers. DNA comprises four types of bases, abbreviated as A, G, T, and C, each of which can undergo mutation, insertion, deletion, and other changes, and thereby give rise to different phenotypes and diseases. 44)
1. Single-Nucleotide Polymorphisms
An SNP is essentially a type of mutation, in that it is a modification of a single base, although whereas a mutation can be considered an exceptional (or pathological) phenomenon, SNPs are generally a more common phenomenon, as the term "polymorphism" implies. As a general rule of thumb, if the frequency of the rarer allele (version) at a single base position exceeds 1% in the total population, the variant can be defined as an SNP, whereas if the frequency is less than 1%, it can be defined as a mutation. 47)
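This 1% rule of thumb can be expressed as a one-line classification, as in the minimal sketch below; the counts in the example are hypothetical.

```python
# Sketch of the 1% rule of thumb: classify a variant as an SNP or a (rare)
# mutation from its minor allele frequency (MAF).
def classify_variant(minor_allele_count: int, total_alleles: int) -> str:
    maf = minor_allele_count / total_alleles
    return "SNP (polymorphism)" if maf > 0.01 else "mutation (rare variant)"

print(classify_variant(120, 2000))  # MAF = 6%    -> SNP (polymorphism)
print(classify_variant(3, 2000))    # MAF = 0.15% -> mutation (rare variant)
```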
2. Effect Size
The frequency of an allelic variant in the population and the degree of disease occurrence, that is, the effect size of the allele, can be divided into several categories. Low-frequency variants can have a relatively large effect size, in that they can give rise to a meaningful phenotype. Contrastingly, the SNPs used in healthy population screening are typically common, observed at a frequency of more than 5% in the general population, although the size of the effects associated with these variants tends to be small. 48,49)
3. Penetrance
Even if individuals harbor the same variant, a few may show severe forms of a disease, whereas others might have negligible phenotypic manifestations, a phenomenon referred to as variant penetrance. Variants associated with diseases and traits are typically discovered via genome-wide association studies (GWAS) and are generally characterized by low penetrance at common frequencies. 50)
POINTS TO CONSIDER WHEN INTERPRETING THE RESULTS OF POPULATION GENE SCREENING TESTS
1. Subjects Should Be Informed of False-Positive Results
One of the most concerning issues regarding population gene screening is the occurrence of false-positive results. If such results are obtained, unnecessary additional tests or treatments may be performed, thereby heightening the likelihood of morbidity that would otherwise not have occurred. Moreover, such outcomes can represent a significant source of anxiety among the concerned individuals. This problem is exacerbated by the fact that the percentage of false-positive results obtained during population gene screening is relatively high, particularly in the case of the SNPs used by population gene screening services, which were discovered via GWAS. A GWAS is an association testing procedure for a phenotype of interest based on an analysis of hundreds of thousands of SNPs, and given that it involves multiple simultaneous comparisons, this inevitably increases the likelihood of type I errors. 51) Accordingly, when performing GWAS analyses, issues arising from such multiple comparisons should be sufficiently addressed through appropriate statistical measures. 52) Moreover, before conducting genetic testing, individuals should be made fully aware that a positive test result could be a false-positive outcome.
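To make the multiple-comparison problem concrete, the sketch below shows how the family-wise error rate explodes when hundreds of thousands of SNPs are each tested at alpha = 0.05, and the Bonferroni-style per-test threshold that is one common remedy; the test count is an illustrative placeholder.

```python
# Illustration of type I error inflation in GWAS-scale multiple testing.
n_tests = 500_000            # hundreds of thousands of SNPs tested at once
alpha = 0.05

# Probability of at least one false positive if each test uses alpha = 0.05
family_wise_error = 1 - (1 - alpha) ** n_tests   # effectively 1.0

# Bonferroni-corrected per-test significance threshold
bonferroni_threshold = alpha / n_tests           # 1e-7

print(family_wise_error, bonferroni_threshold)
```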
2. Genomic Variant Identity: Re-evaluation of Clinical Significance in Research
The identity of genomic variants is based on the research evidence available at the time of reporting, although their clinical significance may need re-evaluation in light of subsequent research findings. Among the most important challenges when assessing genetic test results in actual clinical practice is that of variant interpretation. 53) As a consequence of continuing research on genomic variants, it is frequently necessary to update previous interpretations of particular variants. Moreover, even if a given variant is accurately interpreted, it may not necessarily manifest as a disease condition. Many variants considered to be pathogenic based on initial research evidence could, in the light of subsequent evidence, be re-evaluated as benign.
3. Consideration of the Penetrance of Genomic Variants
When a certain variant is present in different individuals, it is said to have incomplete penetrance if the associated clinical phenotype is expressed in some individuals but not in others. 53) This differential outcome can be attributed to many different factors, including the influence of regulatory SNPs, epigenetics, environmental factors, and lifestyle. 54) Accordingly, it is important to explain the concept of penetrance during the counseling of individuals who receive positive test results.
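Penetrance can be estimated, in the simplest case, as the fraction of variant carriers who express the phenotype, as in the minimal sketch below; the counts are hypothetical.

```python
# Minimal sketch of estimating penetrance: the fraction of variant carriers
# who actually express the clinical phenotype.
def penetrance(affected_carriers: int, total_carriers: int) -> float:
    return affected_carriers / total_carriers

# e.g., 18 of 120 carriers of a variant develop the disease -> 15% penetrance
print(penetrance(18, 120))  # 0.15
```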
4. Beware of False Assurances
In genetic testing, false-negative results are not infrequent and can pose a significant clinical risk. Even if there is a variant associated with a specific phenotype or disease, its relevance may not be fully established given the current state of research. Consequently, a false-negative finding may occur in the case of variants whose pathogenicity is not established until a later date. Moreover, depending on the manufacturer of the provided service, genetic testing services can differ widely with respect to the types of variants used, scope, analytical methods, and algorithms for predicting diseases (disease risk estimation algorithms). Consequently, there is a possibility that test results obtained for the same disease may differ when using tests provided by different companies, as has been highlighted by a study that compared the test results obtained by the 23andMe service and two commercial genetic-testing services provided in Korea. 28) Among the three services, there were cases in which different interpretations of relative risks were obtained for the same disease. Moreover, in the case of lung cancer, there were cases in which opposite test results were obtained, with associated relative risks ranging from 0.9 to 2.09. These discrepant outcomes can be attributed to the types of SNPs used and differences in the applied algorithms. Furthermore, most genetic tests involve models based on GWAS results; although many GWAS are conducted with well-defined and sufficiently large case-control groups, there are many cases in which multiple comparisons and ethnic genetic differences are not taken into consideration. 55) Consequently, this can adversely influence the selection of optimal SNPs for specific diseases. Thus, even if a negative test result is obtained for a specific disease, tested individuals should be made aware that a negative finding is not a 100% guarantee, and the fundamental screening tests and preventive healthy lifestyle necessary for the average population should be continued.
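One reason different services can disagree, as described above, is that each combines its own SNP panel into an overall risk estimate. The sketch below uses a simple multiplicative model of the kind often assumed in the literature; the panels and per-SNP relative risks are hypothetical, and real disease risk estimation algorithms are proprietary and more complex.

```python
# Hedged sketch of a multiplicative disease-risk model: overall relative risk
# as the product of per-genotype relative risks. Differing SNP panels are one
# reason two services can report opposite directions for the same disease.
from math import prod

service_a_risks = [1.20, 0.95, 1.10]   # per-SNP relative risks, panel A
service_b_risks = [1.05, 0.80]         # a different panel for the same disease

print(prod(service_a_risks))  # ~1.254 -> "increased risk"
print(prod(service_b_risks))  # 0.84   -> "decreased risk", opposite direction
```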
CONSIDERATIONS WHEN SELECTING A PRODUCT FOR GENETIC SCREENING SERVICES AT HEALTHCARE INSTITUTIONS
When introducing genetic testing at healthcare institutions, it is necessary to review and consider different circumstances when selecting from a range of genetic test products offered by genetic analysis companies.
Firstly, it is necessary to determine the characteristics of the study population on which the development of a given genetic test product was based. Generally, the variants applied in test algorithms have a relatively low effect size, and accordingly, the underlying studies should be conducted in a population group of a certain minimum size. 48,49) Moreover, given the differential effects of genes among different races, it is necessary to confirm whether a study has been conducted targeting the relevant ethnic group. 56) Secondly, it is worthwhile establishing whether replicate research has been conducted in a group that differs from the target group used in the initial product development, which will assist in assessing the reliability of the product. 57) Thirdly, preparations are necessary regarding the type and scope of interventional or health management strategies and action plans that should be available after a genetic test has been conducted and the results obtained. If evidence of high risk is detected in a genetic test, educational materials should be prepared describing how the condition can be managed and the types of tests that would facilitate follow-up observations.
Fourthly, in the context of genetic testing among healthy individuals, numerous social and ethical aspects should ideally be taken into consideration. Among these, one of the most important issues is that of genetic discrimination, which relates to possible prejudicial treatment in areas such as insurance, 58) employment, 59) healthcare, 60) and marriage, 61) based on the outcome of genetic testing. 62) In this regard, genetic testing of healthy individuals is not aimed at diagnosing diseases but instead seeks to identify genetic factors for conditions that may or may not occur in a given individual's lifetime. If this leads to discrimination against an individual based on hypothetical characteristics, this information could be misused socially. Thus, it is necessary to ensure that the testee is fully informed of such consequences before undergoing a test. Accordingly, thorough personal information management of gene test results is necessary to prevent the sharing of test results with anyone other than the individual concerned.
Currently, many countries manage genetic testing at the national level, regardless of its medical utility or validity. 41,61,63) In the case of South Korea, according to the "Bioethics and Safety Act" (accessed on November 13th, 2023), there are several genes for which gene screening in the healthy population is currently prohibited, among which are the Mt5178A gene associated with longevity and the SLC6A4 gene linked to violent behavior. 41,42,61,63)
PRE-TEST COUNSELING BEFORE TAKING POPULATION GENE TESTS
An unfavorable outcome of genetic testing can be a source of considerable anxiety for the individuals concerned, whereas, conversely, those receiving favorable outcomes may be overly reassured and lose sight of the need for regular health screenings. For individuals who undergo genetic testing, the following three points must be thoroughly explained beforehand to fully accomplish the purpose of the genetic test and to prevent issues arising from any misunderstandings regarding the tests.
CONCLUSION
Regardless of issues pertaining to the accuracy and utility of genetic tests per se, it is necessary for healthcare institutions to thoroughly consider and prepare for the introduction of genetic testing among individuals in the healthy population, given the increasingly high demand among consumers. Genetic testing is suitable for personalized treatment and preventive intervention, based on prediction before the onset of a given disease using genetic information. However, if such tests are conducted without a comprehensive understanding of their uniqueness, given that they are targeted at healthy individuals, they may not fully achieve the intended purpose and may even have little or no benefit from the perspective of health management. Consequently, we highlight the need for sufficient discussion and consideration within medical institutions before the introduction of general genetic testing.
least a rudimentary knowledge of genes and genetics. Most of the currently developed genetic tests used for gene screening in the healthy population are based on the use of SNP markers. DNA comprises four types of bases, abbreviated as A, G, T, and C, each of which can undergo mutation, insertion, deletion, and other changes, and thereby give rise to different phenotypes and diseases.44)
ing is the occurrence of false-positive results. If such results are obtained, unnecessary additional tests or treatments may be performed, thereby heightening the likelihood of morbidity that would otherwise not have occurred. Moreover, such outcomes can represent a significant source of anxiety among the individuals concerned. This problem is exacerbated by the fact that the percentage of false-positive results obtained during population gene screening is relatively high, particularly in the case of SNPs discovered by GWAS and used by population gene screening services. GWAS is an association testing procedure for a phenotype of interest based on an analysis of hundreds of thousands of SNPs, and given that it involves multiple simultaneous comparisons, this inevitably increases the possibility of type I errors.51) Accordingly, when performing GWAS analysis, issues arising from such multiple comparisons should be addressed through appropriate statistical measures.52) Moreover, before conducting genetic testing, individuals should be made fully aware that a positive test result could be a false-positive outcome.
2. Has This Test Comprehensively Assessed the Genes Associated with a Certain Disease?
No. The genetic variants underlying a given disease are often quite diverse, and among these variants, tests typically target only a small selection, testing for certain genes (or SNPs) that are known to be meaningful based on current research. The assessed variants may differ depending on the current level of scientific advancement and the gene test service company. Consequently, as science advances, the types of genetic tests offered may change, as may the interpretation of genetic test results.
3. Does a Low Risk Imply That There Is No Likelihood of Developing the Disease in the Future?
No. Genetic tests do not provide comprehensive coverage of all potential genetic variants. Moreover, apart from genetic factors, additional factors, such as environmental factors, lifestyle habits, age, gender, and physical characteristics, all contribute to varying extents to disease development. Accordingly, even if test results indicate a low genetic risk, the occurrence of a disease may be influenced by many non-genetic factors that are not covered by specific tests, including interactions with other pathological conditions and environmental factors. Consequently, regardless of the test results, individuals should continue to make fundamental healthcare efforts, involving lifestyle modifications and ensuring that they undergo regular health examinations. | 2024-03-22T15:27:36.709Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "e01b83a41794d3fd22e6fa417c362008fc7bf939",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4082/kjfm.23.0254",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "36bad303fa78f85e01e98e1ca3bc4e8ba6f1071a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6630573 | pes2o/s2orc | v3-fos-license | The Contribution of Faint Blue Galaxies to the Sub-mm Counts and Background
Observations in the submillimetre waveband have recently revealed a new population of luminous sub-mm sources. These are proposed to lie at high redshift and to be optically faint due to their high intrinsic dust obscuration. The presence of dust has been previously invoked in optical galaxy count models which assume $\tau=9$ Gyr Bruzual & Charlot evolution for spirals, and these fit the count data well from U to K. We now show that by using either a 1/$\lambda$ or Calzetti absorption law for the dust and re-distributing the evolved spiral galaxy UV radiation into the far infra-red (FIR), these models can account for all of the `faint' ($\leq1$ mJy) $850\mu$m galaxy counts, but fail to fit `bright' ($\geq2$ mJy) sources, indicating that another explanation for the sub-mm counts may apply at brighter fluxes (e.g. QSOs, ULIRGs). We find that the main contribution to the faint sub-mm number counts is in the redshift range $0.5<z<3$, peaking at $z\approx 1.8$. The above model, using either dust law, can also explain a significant proportion of the extra-galactic background at $850\mu$m, as well as producing a reasonable fit to the bright $60\mu$m IRAS counts.
INTRODUCTION
The SCUBA camera (Holland et al. 1999) on the James Clerk Maxwell Telescope has transformed our knowledge of dusty galaxies in the distant Universe as a result of the discovery of a new population of luminous, dusty, infrared galaxies (Smail et al. 1997; Ivison et al. 1998). It has been proposed that these galaxies may be similar to IRAS ULIRGs (ultra-luminous infra-red galaxies), which appear to be starbursting/AGN galaxies containing large amounts of dust. The possibility that much star formation is hidden by dust means that sub-mm observations can give an invaluable insight into the star-formation history of the Universe. This view is aided by the redshifting of the thermal dust emission peak of starbursting galaxies into the FIR, which results in a negative k-correction in the sub-mm. By this route, we can therefore study our Universe all the way back to very early times and gain unprecedented insight into the formation and evolution of galaxies.
The first sub-mm galaxy to be detected by SCUBA was SMM J02399-0136, a massive starburst/AGN at z = 2.8, and the current situation is that the complete 850µm sample from all the various groups consists of well over 50 sources (Eales et al. 1999; Hughes et al. 1998; Holland et al. 1998; Barger et al. 1998; Smail et al. 1997). Optical and near-infrared (NIR) counterparts have been identified for about a third of the sources, although the reliability of these identifications varies greatly. This problem is due to the fact that the ≈ 15″ FWHM of the SCUBA beam results in ±3″ positional errors on a sub-mm source, so there is a reasonable chance that several candidates could lie within these errors. Also, there is no guarantee that the true source will be detected down to the optical flux limit as, for example, many of the sources have been shown to be very red objects (Dey et al. 1999) and therefore have not been found in optical searches for sub-mm sources. What has proved extremely enlightening is that radio counterparts at 1.4 GHz have now been identified for many of the sub-mm sources, providing much more accurate angular positions (< 1″ in some cases) and reasonably accurate photometric redshifts. Various groups have obtained redshift distributions of sub-mm samples (Hughes et al. 1998; Barger et al. 1999a; Lilly et al. 1999; Smail et al. 2000) and they all derive results that are consistent with a mean redshift in the range 1 < z < 3. The fact that almost all of the sources are associated with mergers or interactions seems to confirm that the population of sources contributing at the 'bright' (> 2 mJy) sub-mm fluxes (since most of the sources so far discovered are 'bright') are similar to local IRAS ULIRGs, i.e. massive starbursting/AGN galaxies which are extremely luminous in the far infra-red. This hypothesis is strengthened further by the fact that the only two sub-mm sources (SMM J02399-0136 and SMM J14011+0252) with reliable redshifts have been followed up with millimetre-wave observations (Frayer et al. 1998, 1999), resulting in CO emission being detected at the redshifts of both sources (z = 2.8 and z = 2.6), a characteristic indicator of large quantities of molecular gas present in IRAS galaxies.
The nature of the fainter (≤ 1 mJy) sub-mm population is, however, the focus of this paper. It has been claimed by Peacock et al. (2000) and Adelberger et al. (2000) that the Lyman Break Galaxy (LBG) population could not only contribute significantly to the faint sub-mm number counts, but could also account for a substantial proportion of the background at 850µm. This may indicate that ULIRGs cannot explain all of the sub-mm population and that the UV-selected galaxy population, which are predicted to be evolved spirals by the Bruzual & Charlot models, may in fact make a substantial contribution. It is exactly this hypothesis that our paper addresses.
In this paper we will first review the situation regarding the optical galaxy counts, focusing in particular on the models of Metcalfe et al. (1996). These simple models, which use a τ = 9 Gyr SFR for spirals and include the effects of dust, give good fits to galaxy counts and colours from U to K. The idea is then to see whether this combination of exponential SFR and relatively small amounts of dust in the first instance (AB = 0.3 mag for the 1/λ law), which would re-radiate the spiral ultra-violet (UV) radiation into the FIR, could cause a significant contribution to the sub-mm galaxy number counts and background at 850µm. Our modelling is described in section 3, and in section 4 our predicted contribution to the 850µm and 60µm galaxy counts and the extra-galactic background in the sub-mm is shown. Also in this section we demonstrate how to fit the background in the 100-300µm range by using warmer, optically thicker dust, in line with that typically seen in ULIRGs. We then discuss the implications of our predictions in section 5 and conclude in section 6.
THE OPTICAL COUNTS
It is well known that non-evolving galaxy count models, where the number density and luminosity of galaxies remain constant with look-back time, do not fit the optical number counts (e.g. Shanks et al. 1984), as there is always a large excess of galaxies faintwards of B ∼ 22 m. One way to account for this excess of 'faint blue galaxies' is to investigate the way galaxy evolution will influence the optical number counts. Metcalfe et al. (1996) showed that by assuming that the number density of galaxies remains constant, the Bruzual and Charlot (1993) evolutionary models of spiral galaxies with a τ = 9 Gyr SFR give excellent fits to the optical counts. The galaxy number counts are normalised at B ∼ 18 m so that the non-evolving models give good fits to the B band data and redshift distributions in the range 18 m < B < 22.5 m. With this high normalisation, the models of the galaxy counts represent both spiral and early-type galaxies extremely well for 17 m < I < 22 m (Glazebrook et al. 1995a; Driver et al. 1995), and also the less steep H/K counts out to K ∼ 20 m. The evolution model then produces a reasonable fit to the fainter counts to B ∼ 27 m, I ∼ 26 m, H ∼ 28 m. Metcalfe et al. (1996) included a 1/λ internal dust absorption law with AB = 0.3 for spirals to prevent the τ = 9 Gyr SFR from over-predicting the numbers of high redshift galaxies detected in faint B < 24 redshift surveys (Cowie et al. 1995). This 1/λ dust law differs from the Calzetti (1997) dust law derived for starburst galaxies, in that for a given AB, more radiation is absorbed in the UV. The Calzetti dust law is used by Steidel et al. (1999) to model their 'Lyman Break' galaxies; they find an average E(B-V) = 0.15, which gives AB = 0.87 mag and A1500 = 1.7 mag. This compares to our A1500 = 0.9 mag with AB = 0.3 mag. Both models also fail to predict colours as red as those observed for the U-B colours of spirals in the Herschel Deep Field (Metcalfe et al. 1996). However, if we assumed E(B-V) = 0.15 for our z = 0 spirals, as compared to our E(B-V) = 0.05, then the rest colours of spirals as predicted by the Bruzual & Charlot model might start to look too red compared to what is observed. Otherwise, the main difference between these two dust laws is that the Calzetti law would produce more overall absorption and hence a higher FIR flux from the faint blue galaxies. Thus, in some ways, our first use of the 1/λ law appears conservative in terms of predicting the faint blue galaxy FIR flux. Later, we shall experiment by replacing the 1/λ law with the Calzetti (1997) law in our model.
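As a quick check on these numbers (our arithmetic, assuming the 1/λ law is normalised in the B band at an effective wavelength of λB ≈ 4400 Å): a 1/λ law implies $A_\lambda = A_B\,(\lambda_B/\lambda)$, so $A_{1500} \approx 0.3 \times (4400/1500) \approx 0.88$ mag, consistent with the A1500 = 0.9 mag quoted above.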
So this pure luminosity evolution (PLE) model with 1/λ dust and q0 = 0.05 then slightly under-estimates the faintest optical counts but otherwise fits the data well, whereas for q0 = 0.5 the underestimate (with or without dust) is far more striking. An extra population of galaxies has to be invoked at high redshift to attempt to explain this more serious discrepancy for the high q0 model. This new population was postulated to have a constant SFR from its formation redshift until z ∼ 1; the Bruzual & Charlot models then predict a dimming of ∼ 5 m in B to form a red dwarf elliptical (dE) by the present day, so this has the form of a 'disappearing dwarf' model (Babul & Rees 1992). No dust was previously assumed in the dE population, but this assumption is somewhat arbitrary.
The τ = 9 Gyr SFR was inconsistent with the early observations at low redshift from Gallego et al. (1996), and this is partly accounted for by the high normalisation of the optical number counts at B ∼ 18 m. There is still a problem with the UV estimates from the CFRS UV data of Lilly et al. at z = 0.2. More recent estimates of the global SFR at low redshift based on the [OII] line (Gronwall et al. 1998; Tresse & Maddox 1998; Hammer and Flores 1998) indicate that the decline from z = 1 to the present day may not be as sharp as first thought and that the τ = 9 Gyr SFR in fact provides a better fit to this low redshift data. Metcalfe et al. (2000) have further found that this model also agrees well with recent estimates of the luminosity function of the z = 3 Lyman break galaxies detected by Steidel et al. (1999).
The main question that we will address in this paper is therefore whether the small amount of internal spiral dust absorption assumed in these models, which give an excellent fit to the optical galaxy counts, could cause a significant contribution to the sub-mm number counts and background at 850µm.
MODELLING
Using the optical B band parameters for spiral galaxies, we attempt to predict the contribution to the sub-mm galaxy counts and background at 850µm by using a 1/λ absorption law for the dust and re-radiating the spiral UV radiation into the FIR. We use the Bruzual & Charlot (1993) galaxy evolution models with H0 = 50 km s−1 Mpc−1 and a τ = 9 Gyr SFR (with a galaxy age of 16 Gyr in the low q0 case, and 12.7 Gyr in the high q0 case) to produce our 1 M⊙ galactic spectral energy distribution (SED) as a function of redshift. We then use the equation of Metcalfe et al. (1996) to calculate the radiation absorbed by the dust, Gabs (erg s−1), for our 1 M⊙ model spiral galaxy as a function of z, using our 1/λ absorption law with AB = 0.3. Since Bruzual & Charlot provide us with a 1 M⊙ SED at each redshift increment, we need to calculate the factor required to scale this SED (after the effect of absorption from the dust) to obtain that of a galaxy with absolute magnitude MB at zero redshift; this factor then remains constant for MB galaxies at all other redshifts. This provides a zero point from which to calculate scaling factors for all the other galaxies in our luminosity functions. We find the scaling factor for an MB galaxy by making use of a relation from Allen (1995),

$m_B = -2.5\log_{10}\left(\int f_\lambda B_\lambda \, d\lambda\right) + \mathrm{const.}$ (2)

where fλ is the received flux (erg s−1 Å−1 cm−2), Bλ is the B band filter function and the constant is the B band photometric zero point. By re-arranging, setting mB = MB and then multiplying by 4π(10 pc)², we obtain the total emitted power, LB (erg s−1), in the B band from an MB galaxy. The intensity emitted in the B band after absorption by the dust from our 1 M⊙ galaxy, LB,M⊙, is then calculated by integrating the SED, assuming a flat B band filter, between 4000 Å and 5000 Å.
The scaling factor used to scale a Bruzual & Charlot 1 M⊙ spectral energy distribution to a galaxy of absolute magnitude MB is then defined by the ratio LB/LB,M⊙.
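To make this bookkeeping concrete, the minimal sketch below (our own illustration, not the authors' code; the zero-point value zp_B and the tabulated 1 M⊙ SED arrays are placeholder assumptions) computes LB from MB and the resulting SED scaling factor:

```python
import numpy as np

PC_CM = 3.086e18  # parsec in cm

def L_B_from_MB(M_B, zp_B=-12.97):
    """Total B-band power (erg/s) of a galaxy of absolute magnitude M_B.

    Inverts m_B = -2.5 log10(f_B) + zp_B at the fiducial distance of 10 pc.
    zp_B is an assumed placeholder; substitute the B-band zero point from
    Allen (1995).
    """
    f_B = 10.0 ** (-(M_B - zp_B) / 2.5)  # erg s^-1 cm^-2 through the B band
    return 4.0 * np.pi * (10.0 * PC_CM) ** 2 * f_B

def sed_scale_factor(M_B, wavelengths_A, sed_1msun, zp_B=-12.97):
    """Factor scaling a dust-absorbed 1 Msun SED (erg/s/A on a wavelength
    grid in Angstroms) to a galaxy of absolute magnitude M_B, using a flat
    B filter between 4000 and 5000 A as in the text."""
    mask = (wavelengths_A >= 4000.0) & (wavelengths_A <= 5000.0)
    L_B_1msun = np.trapz(sed_1msun[mask], wavelengths_A[mask])  # erg/s
    return L_B_from_MB(M_B, zp_B) / L_B_1msun
```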
The way the dust will re-radiate this absorbed flux depends on its temperature, particle size and chemical composition. However, the normalisation of the re-radiated flux from a galaxy with absolute magnitude MB at redshift z is already determined (the quantity Gabs LB/LB,M⊙). We adopt a simple model by assuming a mean interstellar dust temperature of 15K (Bianchi et al. 1999) and also a modest warmer component of 45K (the actual luminosity ratio we use is L45K/L15K = 0.162), which would come from circumstellar dust (Dominigue et al. 1999) and is needed in order to fit counts at shorter wavelengths, e.g. 60µm. The effect of varying the dust parameters is explored in section 4. We then simply scale the Planck function so that

$G_{\rm abs}\,L_B/L_{B,M_\odot} = C(z, M_B)\int \beta(\lambda, T)\,\lambda^{-\beta}\,d\lambda$ (3)

where C(z,MB) is the scaling factor, which is a function of z and MB, β(λ,T) is the Planck function (in this case a sum of two Planck functions) and λ−β is an opacity law (we use β = 2.0 for each Planck function to model optically thin dust).
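Numerically, the normalisation C(z, MB) can be fixed as in the sketch below (again our own illustration; the grid limits and function names are our assumptions). The warm component is first rescaled so that the 45K-to-15K luminosity ratio is 0.162, and C is then the absorbed power divided by the integral of the two-component modified blackbody:

```python
import numpy as np

H = 6.626e-27    # Planck constant, erg s
C_LIGHT = 3e10   # speed of light, cm/s
K_B = 1.381e-16  # Boltzmann constant, erg/K

def planck_lambda(lam_cm, T):
    """Planck function B_lambda (per unit wavelength), cgs units."""
    x = H * C_LIGHT / (lam_cm * K_B * T)
    return 2.0 * H * C_LIGHT**2 / lam_cm**5 / np.expm1(x)

def dust_spectrum(lam_cm, beta=2.0, T_cold=15.0, T_warm=45.0, warm_ratio=0.162):
    """Unnormalised two-component modified blackbody with lambda^-beta opacity."""
    lam = np.logspace(-4, 0, 4000)  # 1 um .. 1 cm grid for the luminosity ratio
    # Rescale the warm component so that L_warm / L_cold = warm_ratio.
    a = warm_ratio * (np.trapz(planck_lambda(lam, T_cold) * lam**-beta, lam)
                      / np.trapz(planck_lambda(lam, T_warm) * lam**-beta, lam))
    return (planck_lambda(lam_cm, T_cold) * lam_cm**-beta
            + a * planck_lambda(lam_cm, T_warm) * lam_cm**-beta)

def normalisation(E_abs, beta=2.0):
    """C such that C * integral(dust spectrum) equals the absorbed power E_abs."""
    lam = np.logspace(-4, 0, 4000)
    return E_abs / np.trapz(dust_spectrum(lam, beta), lam)
```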
We then calculate the received 850µm flux, S(z,MB), from a galaxy with absolute magnitude MB and redshift z using the equation

$S(z, M_B) = \frac{(1+z)\,C(z, M_B)\,\beta(\lambda_e, T)\,\lambda_e^{-\beta}}{4\pi d_L^2}$ (4)

where C(z,MB) is defined from (3), dL is the luminosity distance and λe is equal to 850µm/(1+z). We can then obtain the number count of galaxies with absolute magnitude between MB and MB + dMB and redshift between z and z + dz for which we measure the same flux density S(z,MB) at 850µm (see (4)):
$dN = \phi(M_B)\,\frac{dV}{dz}\,dM_B\,dz$ (5)

where φ(MB) is the optical Schechter function and dV/dz is the cosmological volume element. The integral source counts N(> Slim) are then obtained, for each value of Slim, by integrating (5) over the range of values of MB and z such that S(z,MB) > Slim, where S(z,MB) is defined in (4).
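A schematic of this count integration is given below (our illustration only; flux_850, schechter and dV_dz are user-supplied callables standing in for equation (4), the Schechter function and the chosen cosmology):

```python
import numpy as np

def integral_counts(S_lim, flux_850, schechter, dV_dz,
                    z_grid=np.linspace(0.01, 4.0, 400),
                    MB_grid=np.linspace(-23.0, -14.0, 90)):
    """N(>S_lim) per steradian: sum the Schechter LF over all (z, M_B)
    cells whose predicted 850um flux density exceeds S_lim.

    flux_850(z, MB) -> Jy, schechter(MB) -> Mpc^-3 mag^-1,
    dV_dz(z) -> Mpc^3 per unit redshift per steradian.
    """
    dz = z_grid[1] - z_grid[0]
    dM = MB_grid[1] - MB_grid[0]
    total = 0.0
    for z in z_grid:
        for MB in MB_grid:
            if flux_850(z, MB) > S_lim:
                total += schechter(MB) * dV_dz(z) * dz * dM
    return total
```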
It is straightforward then to obtain model predictions of the FIR background for a given wavelength. The intensity, dI, at 850µm from galaxies with absolute magnitudes between MB and MB + dMB and redshifts between z and z + dz is given by multiplying the number of galaxies with these z's and MB's by the flux density which we would measure from each; we then simply integrate over all absolute magnitudes and all redshifts (0 < z < 4 in this case):

$I_{850} = \int_0^4\!\int_{M_B} S(z, M_B)\,\phi(M_B)\,\frac{dV}{dz}\,dM_B\,dz.$ (6)

PREDICTIONS

Fig. 1 shows our model predictions for the 60µm differential number counts of IRAS galaxies (Saunders et al. 1990). This was an all-sky local survey carried out with the IRAS satellite down to a flux limit of 0.6 Jy. It therefore provides an important test of our model, since spiral galaxies contribute significantly to IRAS counts (Neugebauer et al. 1984), and so if we are going to assume PLE out to redshifts of 4 then our local galaxy count predictions at 60µm need to be reasonably consistent with the data. The figure shows our evolution and no-evolution models (the value of q0 makes no difference) and, because the IRAS survey was probing redshifts out to z = 0.2, we can see that there is very little difference between the two models and that they both fit the data reasonably well.

Figure 1. The 60µm differential number counts. The graph shows the evolution and no-evolution models for a low q0 Universe (the corresponding high q0 models are indistinguishable) along with the observed 60µm counts of IRAS galaxies down to a flux limit of 0.6 Jy, plotted in the format used by Oliver et al. (1992). The crosses are from Hacking & Houck (1987), the empty triangles from Rowan-Robinson et al. (1990), the filled triangles from Saunders et al. (1990), and the circles from Gregorich et al. (1995) and Bertin et al. (1997). We use a two-component dust temperature of 15K and 45K to model both interstellar and circumstellar dust respectively. Other parameters used are β = 2.0, H0 = 50 and a redshift of formation of zf = 4. The dot-dashed line shows the same evolution model using the Calzetti dust law with three dust temperature components of 15, 25, and 32K. This fits the IRAS counts less well at < 0.2 Jy, because the lack of a 45K dust component means that there is much less thermal emission from the dust at 60µm.

The IRAS counts below 0.2 Jy are slightly under-predicted using both dust laws, which could possibly be due to the fact that our model doesn't include any fast-evolving AGN/ULIRG population. With the Calzetti dust law and its three dust components of 15, 25, and 32K, this failure at the fainter IRAS counts is greater than when the 1/λ law is used, because of the absence of the 45K dust component, which dominates the thermal emission at 60µm. We then go on to show in Fig. 2 our sub-mm predictions using the Bruzual & Charlot evolution model with low and high q0 (q0 = 0.05, q0 = 0.5), and also for the corresponding no-evolution models, where we use the Bruzual & Charlot SED at z = 0 for all redshifts. We have used a two-component dust temperature, as described in the previous section, and a galaxy formation redshift, zf = 4. The low q0 model reproduces the faint counts well, but fails the very bright counts. This makes sense, since these very luminous sources would require ULIRGs, having SFRs of order ≈ 100-1000 M⊙ yr−1, and/or AGN, in order to produce these huge FIR luminosities. Indeed, the 850µm integral log N:log S appears flat between 2-10 mJy before rising again at fainter fluxes, suggesting that two populations may be contributing to the counts.

Figure 2. The 850µm integral number counts, including the data of Hughes et al. (1998). Also shown are our predictions for q0 = 0.05 and q0 = 0.5 models with and without Bruzual & Charlot evolution, using the parameters from Fig. 1. Both the high and low q0 models with evolution (dashed and solid curves) do very well with the faint counts but fail the most luminous sources. In the no-evolution cases (dotted and dot-dashed), the high q0 model again predicts more galaxies than the low q0 model, but both underpredict the faint 850µm counts by about an order of magnitude and then fall away again at the higher flux densities. The graph also shows a predicted contribution from AGN (Gunn & Shanks 1999) and a model using the Calzetti dust law (the two dot-dot-dot-dashed curves). The AGN model (the steeper of the curves) predicts that, at most, QSOs could contribute 30 percent of the background at 850µm; these models do much better in the number counts at brighter fluxes, but they fail to contribute at the 0.5 mJy level, where we predict that faint blue galaxies are dominant. Our Calzetti dust law uses three dust temperature components (see Fig. 1) and, as with our 1/λ dust law, it can account for the faint number counts but then fails the much brighter sources.

The high q0 model contains a dwarf elliptical population in order to fit the optical counts, as already explained, but no dust was invoked in these galaxies in the optical models and so they do not contribute to our 850µm predictions. Contrary to the optical number counts, the high q0 models predict more galaxies above a given flux limit than the low q0 models. The reason for this is illustrated in Fig. 3, which shows how the received flux density from an MB = −22.5 galaxy would vary with redshift in the high and low q0 case, with and without τ = 9 Gyr Bruzual & Charlot evolution. In the no-evolution cases the two factors involved are the cosmological dimming and the effect of the negative k-correction, since we are effectively looking up the black-body curve as we look out to higher redshift. The high q0 model (dotted line) predicts greater flux densities for a given redshift than the low q0 model, explaining why the integral number counts are higher for a given flux density. When the Bruzual & Charlot evolution is invoked (solid and dashed lines), we predict more flux than in the corresponding no-evolution cases at high redshift, because a galaxy is significantly brighter there than at the present day. The high q0 model (with evolution) is virtually flat in the redshift range 0.5 < z < 2, and the low q0 model again predicts slightly lower flux densities for a given redshift compared to high q0. It may be noted that the no-evolution models in this plot differ slightly from those of Hughes et al. (1998). This discrepancy is a result of the different assumed dust temperature and beta parameter. The colder temperature means that the peak of the thermal emission from the dust is probed at lower redshifts, and so we lose the benefit of the negative k-correction at z ≈ 2-3 instead of at z ≈ 7-9 as in Hughes & Dunlop (1998). Fig. 4 shows the effect of altering the interstellar dust temperature (where we have used the low q0 evolving model). The interstellar dust temperature, Tint, makes a big difference to our 850µm number count predictions, and the variation is perhaps contrary to what one may expect, in that a lower Tint means that we expect to see more galaxies above a given flux limit Slim.
This is because, as we lower the dust temperature, although the integrated energy (i.e. the area under the Planck curve) goes down, the flux density at 850µm goes up, because the majority of the radiation is now emitted at much longer wavelengths. Now recall from the previous section that the normalisation of the Planck emission curve is already defined by the amount of flux absorbed by the dust, and the Planck curve is simply scaled accordingly. So, because the normalisation is fixed, when we lower the dust temperature we have to scale the Planck curve up by a much larger factor, and we therefore obtain much larger flux densities at 850µm, explaining why our models are very sensitive to Tint.
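This sensitivity is easy to verify numerically; the sketch below (ours, a single cold component with β = 2.0) renormalises a modified blackbody to the same total power at several temperatures and prints the relative 850µm flux density, which indeed rises steeply as T falls:

```python
import numpy as np

H, C_LIGHT, K_B = 6.626e-27, 3e10, 1.381e-16  # cgs constants

def mod_bb(lam_cm, T, beta=2.0):
    """Modified blackbody B_lambda(T) * lambda^-beta, cgs."""
    return (2 * H * C_LIGHT**2 / lam_cm**5
            / np.expm1(H * C_LIGHT / (lam_cm * K_B * T))) * lam_cm**-beta

lam = np.logspace(-4, 0, 4000)  # 1 um .. 1 cm, in cm
lam_850 = 850e-4                # 850 um in cm
for T in (30.0, 20.0, 15.0):
    scale = 1.0 / np.trapz(mod_bb(lam, T), lam)  # fix total emitted power to 1
    print(T, scale * mod_bb(lam_850, T))         # relative 850 um flux density
```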
We have used a galaxy formation redshift, zf = 4, which is reasonable since sub-mm sources seem to exist out to at least that redshift, but we in fact find that adopting zf = 4, zf = 6 or indeed zf = 10 makes no difference to the number counts. Fig. 3 illustrates this, since at z > 4 we are observing radiation that was emitted beyond the peak of the black-body curve, so the cosmological dimming is no longer compensated for and all the curves begin to fall away very quickly, explaining why increasing zf beyond about z = 4 makes essentially no difference to the 850µm number counts. Of course, a higher assumed Tint would extend this redshift range to beyond z = 4. Fig. 6 shows what sort of contribution we get to the extra-galactic background, obtained simply by integrating over the number counts in each wavelength bin. The plot shows the low and high q0 models with and without evolution, with our standard parameters of Tint = 15K, Tcirc = 45K, β = 2.0 and zf = 4. All the models predict the same intensity at short wavelengths (λ = 60µm), as low redshift objects dominate there, making the evolution and q0 dependence less significant. The low q0 model is able to account for all of the background at 850µm, the high q0 model in fact overpredicts it by about a factor of 2, and the no-evolution models, although underpredicting it, are still well within an order of magnitude. Although we can fit the background at 850µm, we noticeably fail the data between about 100 and 300µm. We find that the only way to fit these observations using our model is to use higher values of AB and higher dust temperatures, as this means the dust absorbs more energy from each galaxy and so the contribution to the background in the wavelength range where warmer dust emission dominates (100µm < λ < 500µm) is much greater. The solid curve in Fig. 6 shows a prediction where we have tried the Calzetti dust model, which gives more overall absorption with similar amounts of reddening; this model might also be expected to fit the B optical counts. We see that its larger amount of absorbed flux allows more flexibility in terms of using more dust components. By using three dust temperature components we obtain a better (though still not perfect) fit to Fig. 6 in the 100µm < λ < 300µm range, while still fitting the IRAS 60µm (Fig. 1) and faint 850µm number counts (Fig. 2).

Figure 5. The predicted number-redshift distribution of sub-mm selected faint blue galaxies down to flux limits, Slim, of 4.0, 2.0, 1.0 and 0.5 mJy. The graph shows the low q0 model using the 1/λ dust law with the parameters described in Fig. 1. As the flux limit is increased, the peak in the n(z) distribution shifts from around z = 1.8 at Slim = 0.5 mJy to much lower redshifts, reaching z ≈ 0.2 for Slim = 4.0 mJy.
DISCUSSION
We have taken a different approach from the standard way in which sub-mm fluxes are estimated using UV luminosities (Meurer et al. 1999). Instead of assuming a relationship between the UV slope β and the ratio LFIR/LUV, we proceed directly from the spiral galaxy UV luminosity functions and simply re-radiate into the FIR by assuming a simple dust law constrained from the optical counts. A direct result of this, as has already been illustrated in the previous section, is that decreasing the interstellar dust temperature actually increases the received flux density at 850µm: firstly because the peak in the Planck emission curve moves towards longer wavelengths, and secondly because (as the absorbed flux from the dust is fixed) the normalisation scaling factor goes up. The fact that we model the dust using a dominant interstellar component of 15K, which is significantly colder than that used in models of starburst galaxies (typically 30-50K), means that we are able to show that the evolution of normal spiral galaxies like our own Milky Way, using the Bruzual model with an exponential SFR of τ = 9 Gyr, could make a very significant contribution to the sub-mm number counts in the S850 < 2 mJy range. Indeed, this sort of temperature for spirals has been given recent support from observations with ISO at 200µm (Alton et al. 1998a), where, for a sample of 7 spirals, a mean temperature of 20K was found, about 10K lower than previous estimates from IRAS at shorter wavelengths. They found that 90 percent of the FIR emission came from very cold dust at temperatures of 15K. Sub-mm observations of spirals (Alton et al. 1998b; Bianchi et al. 1998) and observations of dust in our own galaxy (Sodroski et al. 1994; Reach et al. 1995; Boulanger et al. 1996; Sodroski et al. 1997) also support these sorts of dust temperatures. Of course, at z = 4 our assumed interstellar dust temperature of 15K is comparable to that of the microwave background.

Figure 6. The predicted contribution to the FIR background from our models. The latest measurements of the extragalactic FIRB are shown, compared with the COBE measurement of the cosmic microwave background (Mather et al. 1994): F98, Fixsen et al. (1998) (upper solid line); P96, Puget et al. (1996) (lower solid line); H98, Hauser et al. (1998); S98, Schlegel et al. (1998). Both the Hauser and Schlegel data have points at 240µm and 130µm. Low and high q0 models are shown with and without evolution, where we have used our standard parameters of Tint = 15K, Tcirc = 45K, β = 2.0 and zf = 4.0. The evolution model in the low q0 case can account for all of the FIR background at 850µm, whereas the high q0 one in fact overpredicts it by about a factor of 2. The no-evolution models both underpredict the sub-mm background but are consistent with it to within an order of magnitude. The solid curve shows a model where we have used the Calzetti dust law with AB = 1.02 (equivalent to E(B-V) = 0.18 and close to the value 0.15 used by Steidel et al. for their Lyman Break Galaxies) for the dust obscuration, with a three-component dust temperature of 15K, 25K and 32K. It fits the background and faint number counts at 850µm and the IRAS 60µm counts, and also does much better in the wavelength range 100µm < λ < 500µm.
Our models show that normal spiral galaxies (i.e. those that evolve into galaxies like our own Milky Way assuming the Bruzual model) fail to provide the necessary FIR flux of the most luminous sources (> 2 mJy), and this is not surprising since the τ = 9 Gyr SFR at high redshift (z > 1), which is consistent with the UV data, is lower than that inferred by other models which fit the sub-mm counts, by a factor of about 5 or so (Blain et al. 1998a). The LBG galaxies at high redshift are predicted to be evolved spirals by the Bruzual models, and the dust we invoke (AB = 0.3 implies an attenuation factor at 1500 Å of 2.3) is enough to make them low luminosity sub-mm sources at flux levels of around 0.5 mJy. This amount of dust, though, is not enough to account for the factor of 5 discrepancy, and there are several possible reasons for this.
The first is the possible additional contribution to the sub-mm counts from AGN. Modelling of the obscured QSO population has shown that they could contribute, at most, about 30% of the background at 850µm but they can get much closer to the bright end of the sub-mm number counts (Gunn & Shanks 1999). This is shown in Fig. 2 where we also show the q0=0.5 model of Gunn & Shanks. Although the slope of the QSO count at the faintest limits is too flat, at brighter fluxes the QSO model fits better than the faint blue galaxy model and the combination of the two gives a better fit overall.
It is also possible that the optical and sub-mm observations are sampling completely different populations of galaxies, as the obscured galaxies sampled by the sub-mm observations may well just be too red or too faint to be detected in the UV at the current flux limits (Dey et al. 1999). That may mean that the most luminous sub-mm sources, or ULIRGs (> 10^13 L⊙), are not the LBG galaxies (which the Bruzual model predicts as evolved spirals), and so it would not then be surprising if the current sub-mm and UV derived star-formation histories at high redshift were different. However, the evidence is growing that the faint blue galaxies are significant contributors to the faint sub-mm counts. Chapman et al. (1999) carried out sub-mm observations of 16 LBGs and found, with one exception, null detections down to their flux limit of 0.5 mJy. But their one detection may suggest that, with enough SCUBA integration time, it might be possible to detect LBGs that are particularly luminous in the FIR; indeed, while this paper was in preparation, work from Peacock et al. (1999) suggested that faint blue galaxies may be detected at 850µm at around the 0.2 mJy level. This is below the SCUBA confusion limit of ≈ 2 mJy (Hughes et al. 1998; Blain et al. 1998b) and highlights the problem faced by Chapman et al. (1999) in performing targetted sub-mm observations of LBGs. The conclusions of Peacock et al. (1999) suggest that the LBG population (the faint blue galaxies in our model) contribute at least 25 percent of the background at 850µm, and Adelberger et al. (2000) come to similar conclusions, namely that the UV-selected galaxy population could account for all of the 850µm background and the shape of the number counts at 850µm. However, the conclusions of Adelberger et al. (2000) rest on the assumption that the SED of SMM J14011+0252 is representative of both the LBG and sub-mm populations. At present these are only assumptions, but nevertheless the conclusions of all these authors seem to suggest that ULIRGs may not contribute to the faint sub-mm number counts and background as much as was first thought.
The spectral slope of the UV continuum and the strength of the Hβ emission line in Lyman Break Galaxies support the fact that interstellar dust is present (Chapman et al. 1999), but the physics of galactic dust and the way it obscures the optical radiation from a source is still very poorly understood. We started by adopting a very simplistic model for the dust, treating it as a spherical screen around our model spiral galaxy. The dust might, in reality, be concentrated in the plane of the disk for spiral galaxies and may also tend to clump around massive stars. This would make the extinction law effectively grayer as suggested by observations of local starburst galaxies (Calzetti, 1997). Indeed, we have investigated the effect of the grayer Calzetti extinction law and found that it would produce a larger sub-mm count contribution due to the higher overall absorption it would imply. Metcalfe et al (2000) have also suggested that there may be evidence for evolution of the extinction law from the U-B:B-R diagram of faint blue galaxies in the Herschel Deep Field.
We have assumed pure luminosity evolution (PLE) throughout this paper. The assumption that the number density of spiral galaxies remains constant might certainly not hold if dynamical galaxy merging is important for galaxy formation. However, as we have seen, it is relatively easy to fit the sub-mm number counts with PLE models, whereas it is in fact impossible to fit the counts using pure density evolution models without hugely overpredicting the background, by 50 or 100 times (Blain et al. 1998a). So, if existing sub-mm observations are correct, then although density evolution may also occur, luminosity evolution may be dominant. It is also striking how well the PLE models do in the optical number counts and colour-magnitude diagrams, and together with the fact that we observe highly luminous objects in the sub-mm out to at least z = 3, this could indicate that the biggest galaxies could have formed relatively quickly, on timescales of about 1 Gyr or so. If this were true, then the PLE models may be a fair approximation to the galaxy number density and evolution in the Universe out to z ≈ 3 in both the optical/near-IR and FIR.
We have not taken into account early-type galaxies, as no dust was invoked in these in the optical galaxy count models. In particular, we have not included any contribution from dust in the dE population which is invoked to fit the faint optical counts in the q0 = 0.5 model (Metcalfe et al. 1996). If we were to include their possible contribution, this would increase our 850µm count predictions at the faint end, since in our models both early-type and dE star formation occurs at high redshift, which is the region of greatest sensitivity for the sub-mm counts. At brighter fluxes, though, where in our models low redshift galaxies are the only possible influence, the inclusion of early-type galaxies would be negligible.
CONCLUSIONS
The aim of this paper was to investigate whether, by re-radiating the absorbed spiral galaxy UV flux into the FIR, the dust invoked in the faint blue spirals at high z in the optical galaxy count models of Metcalfe et al. (1996) could make a significant contribution to the sub-mm galaxy counts and also to the FIR background at 850µm. We have found that, using an interstellar dust temperature of 15K, a modest circumstellar component of 45K, a beta parameter of 2.0 and a galaxy formation redshift of zf ≈ 4, we can account for a very significant fraction of the faint 850µm source counts in both the low and high q0 cases when we invoke Bruzual & Charlot evolution (see Fig. 2). These evolutionary models give 5-10 times more contribution to the faint sub-mm counts than the corresponding no-evolution models. At brighter fluxes, we find that the SFR and dust assumed in our normal spiral model are too low to produce the FIR fluxes of the most luminous sources. In the no-evolution cases, we underpredict the number counts even at the faint end. Our predicted redshift distribution of sub-mm selected faint blue galaxies suggests that the main contribution to the faint counts is in the range 0.5 < z < 3, peaking at z ≈ 1.8. We have shown that our model fits the 60µm IRAS data well, an important local test if we want to assume PLE and extrapolate our optical spiral galaxy luminosity functions out to higher redshift. With the evolution models we can easily account for 50-100% of the FIR background at 850µm, but we fail the data by nearly an order of magnitude in the 100-300µm range. We have shown that the only way to fit these observations using this optically based model is to assume more dust obscuration (AB = 0.6) and much warmer dust (T = 30K). Effectively gray extinction laws such as that of Calzetti (1997) may also provide more overall absorption and hence allow more dust temperature components, giving the flexibility to fit the FIR background from 60-850µm. However, the bright sub-mm counts will still require a further contribution from QSOs or ULIRGs to complement the contribution of the faint blue galaxies at fainter fluxes.
"year": 2000,
"sha1": "c071b804791cdb6c135e3a7f378eb134dd243419",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/mnras/article-pdf/323/1/67/3233601/323-1-67.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "d385cd2c688658b6a5e56d85d378b1ba410d6a84",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
257897628 | pes2o/s2orc | v3-fos-license | Case
Criss-cross heart was first described in 1974. It is a rare congenital heart malformation that occurs in 8 cases per 1,000,000 children and represents only 0.1% of congenital malformations. The diagnostic methods of choice are transthoracic echocardiography, cardiac magnetic resonance (CMR) imaging, computed tomography (CT) angiography and, sometimes, cardiac catheterization. This report describes the case of a newborn with a criss-cross heart in addition to double-outlet right ventricle (RV) with poorly positioned vessels, atrial septal defect (ASD), interventricular septal defect, tricuspid valve dysplasia and persistent left superior vena cava. The exact etiology of this malformation is not known, but it seems to occur due to rotation of the ventricles around their longitudinal axis that is not accompanied by rotation of the atria and atrioventricular (AV) valves. This movement produces abnormal ventricular inlets, causing the RV to be positioned on a superior plane and the left ventricle on an inferior plane. Although the exact cause of this anomaly is still unknown, it is believed that a genetic abnormality, mutation of the Cx43 gene, may underlie these cases. Diagnosis in the present case was made by transthoracic echocardiography and CT angiography of the aorta and pulmonary arteries, which showed, in addition to the criss-cross heart, other abnormalities, such as double-outlet RV, a large ASD and a ventricular septal defect (VSD).
Introduction
Criss-cross heart was first described in 1974, although it had been reported in 1961. 1,2 It is a rare congenital heart malformation that occurs in 8 cases per 1,000,000 children and represents only 0.1% of congenital malformations. 3,4 Criss-cross heart appears when, during the embryonic period, the heart rotates around its own axis, resulting in an anterosuperior RV and a posteroinferior left ventricle. Due to the complex structural alteration, diagnosis is complicated. The diagnostic methods of choice are transthoracic echocardiography, CMR imaging, CT angiography and, occasionally, cardiac catheterization. 5 Transthoracic echocardiography is usually the first test to be performed. It identifies the position and morphology of the four chambers and AV valves and the connections between vessels and chambers. 6 Furthermore, this method gives, dynamically, the impression that each atrium empties into the contralateral ventricle due to the crossing of blood flows. 7,8 CMR imaging and CT provide more detailed information and on other planes, such as the coronal, axial and sagittal positions. 9 Cardiac catheterization may be necessary to assess intracavitary or vessel pressures and oxygenation in different locations, in addition to ruling out septal defects not seen in other scans. 5 The malformation regarding the rotation of the heart itself does not indicate a surgical approach; however, most cases are associated with other anatomic abnormalities, which need to be evaluated individually to determine the conduct. The most frequent associated malformations include tricuspid valve and right ventricular hypoplasia, VSD, ventriculo-arterial discordance and pulmonary stenosis. 7 In this report, we describe the case of a newborn with a criss-cross heart in addition to double-outlet RV with poorly positioned vessels, ASD, interventricular septal defect, tricuspid valve dysplasia and persistent left superior vena cava.
Case report
A male child born on March 28, 2021, from a home birth, was taken to a hospital in Colatina (ES) after birth for evaluation. On examination by the attending physician, the newborn heart screening test (pulse oximetry) revealed an abnormality (saturation in the right upper limb = 92% and right lower limb = 92%). Transthoracic echocardiography was performed, suggesting a complex congenital heart disease with transposition of the great vessels, patent foramen ovale (PFO)/ASD and a large associated VSD. Transthoracic echocardiography repeated on March 31, 2021 showed: situs solitus, levocardia; two-valve AV concordance with rotation of the AV connection and crossed ventricular inflow streams (Figures 1 and 2); double-outlet RV ventriculo-arterial connection (Figure 3), with poorly positioned vessels; wide fossa ovalis ASD, 5.4 mm in its largest measurement, no flow acceleration, mean gradient of 1.7 mmHg, left-to-right flow; interventricular septum with overriding greater than 50% and apparent double infundibulum; large inlet VSD, 10 mm, no significant gradient; moderate dilation of the right chambers and mild RV hypertrophy; preserved biventricular systolic function assessed by qualitative analysis; dysplastic tricuspid valve with straddling and moderate regurgitation, allowing right ventricular systolic pressure to be estimated at 55 mmHg, with an 8.5 mm tricuspid annulus; trileaflet aortic valve, anterior and to the right, no significant systolic gradient, mild regurgitation; trileaflet pulmonary valve with no significant systolic gradient at the time, mid-systolic notch and mild regurgitation; discrete stenosis of the left pulmonary artery; persistent left superior vena cava.
While in hospital, the child presented clinical and radiographic signs suggestive of pulmonary hyperflow. Diuretic doses were adjusted, and CT angiography of the thoracic aorta and pulmonary arteries was performed. The patient underwent pulmonary artery banding surgery on April 20, 2021, uneventfully. In the immediate postoperative period, the patient developed supraventricular tachycardia, which improved after adjusting the temperature; he required low-dose epinephrine and presented oliguria requiring diuretic solution. The patient had a favorable course, allowing the diuretic solution and epinephrine to be suspended, and was extubated on April 22, 2021, uneventfully. Control transthoracic echocardiography on April 22, 2021 showed effective pulmonary banding.
Discussion
The exact etiology of this malformation is not known, but it seems to occur due to rotation of the ventricles around their longitudinal axis that is not accompanied by rotation of the atria and AV valves. This movement produces abnormal ventricular inlets, causing the RV to be positioned on a superior plane and the left ventricle on an inferior plane. 10 The other anomalies normally found are hypoplasia of the tricuspid valve and right ventricle, pulmonary stenosis, inlet VSD and abnormal ventriculo-arterial connection. Discordant connection is more frequent, and double-outlet RV is rare. 10,11
Although the exact cause of this anomaly is not yet known, it is believed that a genetic abnormality, mutation of the Cx43 gene, may underlie these cases; exclusion of this gene would lead to a delay in the dextroposition of the heart, thus causing a right ventricular defect and failing to bring the ventricle into its correct position. 12 The diagnosis of the case in question was made through transthoracic echocardiography and CT angiography of the aorta and pulmonary arteries, which showed, in addition to the criss-cross heart, other abnormalities, such as double-outlet RV and large ASD and VSD.
Author Contributions
Conception and design of the research: Potratz MO, Garbo LZ, Pessimilio KP, Loss AS, Ambrozim CB, Lima ALTA, Rocha DL; acquisition of data and critical revision of the manuscript for intellectual content: Potratz MO, Garbo LZ, Rocha DL; writing of the manuscript: Lima ALTA.
Potential Conflict of Interest
No potential conflict of interest relevant to this article was reported.
Sources of Funding
There were no external funding sources for this study.
Study Association
This study is not associated with any thesis or dissertation work.
Ethics Approval and Consent to Participate
This article does not contain any studies with human participants or animals performed by any of the authors. | 2023-04-02T15:35:15.092Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "86d58487a22cc41c8de828d3a2f30bfd5172925e",
"oa_license": "CCBY",
"oa_url": "https://www.abcimaging.org/wp-content/uploads/articles_xml/2675-312X-dic-36-01-e282/2675-312X-dic-36-01-e282.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7dece17b67386ccad8b0a66dba62e4e6725775b0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
216343157 | pes2o/s2orc | v3-fos-license | Dataset on the influence of relative humidity on the pathogenicity of Metarhizium anisopliae isolates from Thailand and Malaysia against red palm weevil (Rhynchophorus ferrugineus, Olivier) adult
Red palm weevil (RPW), Rhynchophorus ferrugineus, is a polyphagous insect that causes economic damage in various palm species, particularly coconut plantations in Malaysia. Therefore, the entomopathogenic fungus Metarhizium anisopliae was introduced in an attempt to biologically control the RPW. The entomopathogenicity of an indigenous (Met-Gra4) and a foreign (Met-TH) strain of M. anisopliae, isolated from soils of Malaysia and Thailand respectively, was tested against RPW adults in laboratory bioassays at 50, 70 and 90% relative humidity (RH). The bioassays indicated no significant differences in efficacy between the conidia of the two M. anisopliae strains against RPW adults. Met-Gra4 showed the highest efficacy at 90% RH (LT50 = 6.17 days); however, this LT50 only slightly differed from that of Met-TH (6.33 days at 90% RH). Scanning electron microscopy of the treated RPWs showed that Met-Gra4 (90% RH) sporulated densely within the abdomen, while Met-TH was found mainly across the cuticular surface of the RPW.
Specifications table
Subject: Agricultural and Biological Sciences
Specific subject area: Insect Science
Type of data:
Value of the data:
• These data provide information regarding how geographical region and fungal habitat conditions influence entomopathogenic fungal efficacy (pathogenesis and epizootiology), in this case the relative humidity that affects fungal germination and host infectivity.
• These data show the potential effect of Metarhizium anisopliae Met-Gra4 against red palm weevil in Malaysia, as influenced by mycelial growth corresponding to the relative humidity, as a first step in selecting an effective fungal propagule.
• Hypervirulent fungal strains will be applicable in further developing suitable mycoinsecticide formulations to improve their shelf life and enhance their viability under fluctuating environmental conditions for insect pest biocontrol.
Data description
Two selected virulent strains of M. anisopliae, isolated from the soil of Felda Tenang, Terengganu (Met-Gra4) and the soil of Muang Chum, Kanchanaburi (Met-TH), were tested against adults of the RPW. The subsequent susceptibility test achieved zero control mortality, which confirmed the entomopathogenic effect of the M. anisopliae isolates on RPW at three different relative humidity levels. Overall, at 50-90% relative humidity, the data showed that Met-Gra4 required the shortest time to reach 50% mortality of RPW as compared to Met-TH, with the shortest time achieved at 90% RH (LT50 = 6.17 days) (Table 1). However, the LT50 of Met-TH against adult RPW was only slightly longer than that of Met-Gra4, by between 0.03 and 0.19 days as RH decreases. Levene's test of equality of error variances indicated that there is insufficient evidence to claim that the variances are not equal (F0.05,5,12 = 1.785, p = 0.191). In addition, the two-way ANOVA indicated no significant interaction between fungal isolate and relative humidity (p > 0.05). Subsequently, the actual fungal-induced mortality of treated RPW was determined by observing the mycelial outgrowth of M. anisopliae. Observations of the RPW cadavers illustrated that Met-Gra4 was able to germinate and cause white (initial stage) and green (late stage) sporulation in a slightly higher proportion of cadavers than Met-TH, recording 93.33% and 86.67% at 50% and 90% RH, respectively (Fig. 1). However, Met-TH achieved only 80% fungal-induced mortality of RPW adults at both 50% and 90% RH. The observations for the treatment at 70% RH showed a similar percentage of fungal-infected RPW for both M. anisopliae isolates.
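LT50 values such as these are conventionally obtained by probit analysis. For readers who wish to reproduce estimates of this kind, the sketch below (illustrative mortality counts only, not the actual bioassay records; it treats the cumulative counts as independent for simplicity and assumes statsmodels >= 0.13) fits a probit regression of mortality on log10(day) and solves for the time at 50% mortality:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical cumulative mortality out of n = 15 RPW adults per observation day.
days = np.array([2, 4, 6, 8, 10, 12])
dead = np.array([1, 4, 7, 10, 13, 14])
n = 15

X = sm.add_constant(np.log10(days))
y = np.column_stack([dead, n - dead])  # (successes, failures) per day
fit = sm.GLM(y, X, family=sm.families.Binomial(
    link=sm.families.links.Probit())).fit()

b0, b1 = fit.params
lt50 = 10 ** (-b0 / b1)  # probit linear predictor is 0 at 50% mortality
print(f"LT50 = {lt50:.2f} days")
```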
The fungal-induced mortality of treated adult RPW was indicated by the reddish-orange color change of the cuticle. Scanning electron microscopy (SEM) of adult RPW treated with the fungus M. anisopliae (2.0-2.8 × 10^8 conidia per mL) clearly revealed adhesion and penetration structures in the infected adults (growth relative humidity 90%; temperature 25-28 °C). Generally, adhesion of ungerminated conidia of both Met-Gra4 and Met-TH was found on the wing scales and the appendage segments of adult RPW (Figs. 2 and 3). In scanning electron micrographs of treated RPW at day nine post-treatment, hyphae of Met-Gra4 had penetrated deeply through the abdominal cuticular layer, reaching the inner tissue components (Fig. 4b). In contrast, Met-TH was found to have only nearly penetrated the abdominal cuticle layer (Fig. 4a).
On the other hand, at day five after the death of the RPW, Met-TH, as revealed by SEM, showed a less dense network of hyphal growth within the abdominal region, with less decomposed fat and muscle tissue (Fig. 5a), while Met-Gra4 showed a denser hyphal network with more soft tissue decomposed, as indicated by the hollow abdominal cavity (Fig. 5b). Although fungal-induced mortality of RPW occurred at all humidity levels tested, the formation of the characteristic white mycelial growth and conidia only occurred within the 70 to 90% RH range (figures not shown). Mycelial growth first appeared at seven days post-treatment. No external fungal growth of any kind was found on cadavers at 50% RH.
Source and isolation of M. anisopliae
The hypervirulent fungal strain from Malaysia, named Met-Gra4, was kindly provided by Grace Lee Ern Lin, Universiti Malaysia Terengganu [1]; it was isolated from FELDA Tenang, within latitude 05°31′2″ N and longitude 102°32′ E. To retrieve fungal strains from Thailand, sampling of agricultural soil (loam to clay loam) was conducted in cassava- and sugarcane-cultivated areas at Muang Chum, Tha Muang district, Kanchanaburi, Thailand. These areas fall within latitude 13°57′54.7″ N and longitude 99°37′39.4″ E. The soil samples from each field were taken from 15-20 cm depth below the ground with the surface litter removed. These soil samples were then sieved to disintegrate superficial deposits of gravel, roots, grass, litter, etc., and to break up large aggregates, and were collected in sterile bags. After collection, the samples were preserved at 4-8 °C and stored in a dark room until processed in the laboratory.
Isolation of EPF was performed using the 10× serial dilution method, followed by spread plating of the soil samples on artificial media. Soil samples were first ground to smaller particles or a powdery form using a sterilised mortar and pestle. About 1 g of each soil sample was suspended in a master tube containing 10 mL of distilled water, and the suspensions were serially diluted up to 10⁻⁸ using standard techniques. Afterwards, 10 μL of the 10⁻⁷ and 10⁻⁸ dilutions were pipetted and spread on PDA medium plates. Two plates per soil sample were incubated for 5-7 days at room temperature and, based on morphological appearance, potential colonies of readily sporulating M. anisopliae, characterised by green conidia, were subcultured by aseptically streaking the inoculum onto freshly prepared PDA in Petri dishes and incubating at room temperature until mycelial growth appeared. On the fifth day after incubation, the fungal culture plates were examined for any undesirable contaminants in order to obtain pure fungal cultures. Notably, successive monoxenic subculturing on artificial media recurrently leads to attenuation of fungal virulence; in an attempt to restore virulence after prolonged culture on PDA, each isolate was passaged through RPWs prior to culture on plates. Each dead RPW was removed to a Petri dish with a moist filter paper-lined bottom and sealed with Parafilm "M" (Bemis®, Neenah, WI 54956) to enhance sporulation, for use in the subsequent bioassays.
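As a worked illustration of the arithmetic behind the 10× serial dilution described above, the short Python sketch below computes the cumulative dilution factor and back-calculates a spore load from a hypothetical plate count; the colony count is an invented example, and the calculation assumes the dilution series is referenced to the master suspension, which the protocol does not state explicitly.

```python
# Minimal sketch of 10x serial-dilution arithmetic (illustrative values only).

PLATED_VOLUME_ML = 0.010  # 10 uL spread per PDA plate, as in the protocol


def dilution_factor(step: int) -> float:
    """Cumulative dilution after `step` tenfold transfers, i.e. 10**step."""
    return 10.0 ** step


def cfu_per_ml_master(colonies: int, dilution_exp: int,
                      plated_ml: float = PLATED_VOLUME_ML) -> float:
    """Estimate CFU per mL of the master suspension from one plate count.

    Assumes the 10^-dilution_exp series is counted from the master tube
    (1 g soil in 10 mL water); this is an assumption, not a stated fact.
    """
    return colonies / plated_ml * dilution_factor(dilution_exp)


# Hypothetical example: 42 colonies counted on the 10^-7 plate.
print(f"{cfu_per_ml_master(42, 7):.2e} CFU/mL of master suspension")
```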
Source of RPW
RPWs were caught using pheromone traps placed at coconut plantations in and near Kampung Kubang Badak, the UMT campus and Pantai Tok Jembal. The traps used to field-collect adult weevils were baited with high-release formulated lures, including ferrugineol (4-methyl-5-nonanol), Ferrolure+ (90% 4-methyl-5-nonanol + 10% 4-methyl-5-nonanone) and a kairomone-releasing food bait (pineapple). Captured adult weevils were reared at 20 °C in an air-conditioned culture room in plastic rearing containers, and the sugarcane bait was replaced twice a week. For the subsequent tests, field-collected RPWs without any defect or damage and with body lengths of 3.0-3.5 cm were selected. Mixed-sex adult RPWs were pre-cleaned with running tap water, surface-sterilised with 70% alcohol for 10 s, and then immersed in sterile distilled water three times consecutively.
Preparation of conidial suspension
For conidia production, subcultures of M. anisopliae on PDA plates were incubated at 28 °C for 2 weeks. To prepare the initial conidial suspension, conidia were gently scraped off the PDA surface after adding sterilised 0.02% aqueous Tween 80® solution to the culture plates. The premixed conidial suspensions were centrifuged at 5000 rpm for 15 min to separate the conidia from the mycelium and other debris [2]. The suspensions were filtered through sterile cheesecloth to remove hyphal fragments and were hand-vortexed whenever needed to break up conidial clumps and obtain homogeneous mixtures. The concentration of the stock conidial suspension was determined microscopically using a haemocytometer (Neubauer improved chamber) under a compound microscope; appropriate dilutions were first performed to achieve a reliable conidial count and reduce statistical error. Once the stock concentration had been determined, it was adjusted to 10⁸ conidia mL⁻¹ based on the formula below:

C1V1 = C2V2

where C1 = concentration of the stock solution (conidia per mL), V1 = calculated volume required to be taken from the stock solution (mL), C2 = desired concentration to be prepared (conidia per mL), and V2 = volume of the desired concentration (mL).
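The adjustment above is the standard dilution equation solved for V1; the snippet below shows the calculation with illustrative numbers (the stock titre used is a hypothetical value, not one reported in the study).

```python
def stock_volume_needed(c_stock: float, c_target: float, v_target: float) -> float:
    """Solve C1*V1 = C2*V2 for V1, the volume of stock suspension to dilute."""
    if c_stock < c_target:
        raise ValueError("Stock must be at least as concentrated as the target.")
    return c_target * v_target / c_stock


# Hypothetical stock of 3.5e9 conidia/mL, target 1e8 conidia/mL in 50 mL:
v1 = stock_volume_needed(c_stock=3.5e9, c_target=1e8, v_target=50.0)
print(f"Take {v1:.2f} mL of stock and make up to 50 mL")  # ~1.43 mL
```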
Host susceptibility at different relative humidity
Disinfected RPWs were dipped in the prepared conidial suspension for 120 s, providing a dose of 10⁸ conidia per insect; the inoculated RPWs were then maintained in containers at different relative humidities at room temperature. Each container was perforated with only two small holes to minimise gaseous exchange. Relative humidity was controlled thermodynamically using saturated solutions of pure hygroscopic salts: potassium carbonate (50%), sodium chloride (70%) and potassium chloride (90%), in accordance with published experimental values [3]; anhydrous CaCl₂ and distilled water provided 0% and 100% relative humidity, respectively. Approximately 100 mL of saturated salt solution was placed within each container according to the treatment. Each RH treatment comprised three replicates with five inoculated RPWs per container, plus three replicates with five uninoculated RPWs as the control. Entomopathogen-induced behavioural alterations among the inoculated RPWs were observed and assessed daily for 2 weeks post-treatment for each fungal isolate. Dead cadavers were surface-sterilised for 3 min in 2.5% NaClO solution to avoid secondary fungal contamination before incubation in darkness under the corresponding RH condition at 25 ± 3 °C. Sporulating cadavers were observed and photographed for 10 days.
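The saturated-salt method above amounts to a fixed lookup from conditioning agent to equilibrium relative humidity; a minimal sketch of that mapping, using the values quoted in the protocol, is shown below, with a helper that picks the agent closest to a requested RH. The RH values are the protocol's approximations and are temperature-dependent in practice.

```python
# Equilibrium RH values as quoted in the protocol (approximate).
SALT_RH = {
    "anhydrous CaCl2": 0,
    "potassium carbonate": 50,
    "sodium chloride": 70,
    "potassium chloride": 90,
    "distilled water": 100,
}


def salt_for_target_rh(target_rh: float) -> str:
    """Return the conditioning agent whose equilibrium RH is closest to target."""
    return min(SALT_RH, key=lambda s: abs(SALT_RH[s] - target_rh))


print(salt_for_target_rh(75))  # -> 'sodium chloride'
```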
Surface morphological assessment
The surface morphological investigation followed [4,5] with slight modifications. Inoculated RPWs were sampled ten times post-treatment at 24-h intervals (during the laboratory bioassays). Specimens were processed as follows: (i) fixed in 2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer, pH 7.2, for 2-4 h; (ii) rinsed three times for 15 min in 0.1 M sodium cacodylate buffer, pH 7.2; (iii) post-fixed in 1% osmium tetroxide in 0.1 M sodium cacodylate buffer, pH 7.2, for 2-4 h; (iv) rinsed as in step (ii); (v) dehydrated sequentially in 35-100% ethanol (EtOH), 20 min per concentration; (vi) dissected and cut to obtain the desired parts of the RPW; and (vii) sputter-coated with an ultra-thin gold-palladium film (thickness 2-20 nm) using an evaporator to reduce SEM beam damage. Dehydrated specimens were examined under scanning electron microscopy (SEM) in low-vacuum mode to characterise the infection process of M. anisopliae morphologically. The micromorphology of R. ferrugineus cuticular cross-sections was observed and photographed by SEM, and high-quality reference images of fungal germ-tube penetration and development were documented.
Statistical analyses
The mean mortality value of the five replicates for each treatment was used throughout the statistical analysis. The fungal infectivity percentages towards RPW in the four different treatments were corrected by eliminating the natural mortality in the control treatment (0-5%), conforming to Schneider-Orelli's formula:

Corrected mortality (%) = ((T - C) / (100 - C)) × 100

where T is the percentage mortality in the treatment and C is the percentage mortality in the control. Subsequently, the corrected mortality data were subjected to probit analysis to determine the significance of the LT50 at the different relative humidity treatments. Two-way ANOVA with Tukey's post hoc analysis was conducted to determine whether there were significant differences in the total mortality of RPW between fungal isolates at the different treatments. All tests were carried out using IBM SPSS Statistics 24.
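A minimal sketch of the Schneider-Orelli correction follows; the treated and control mortalities passed in at the bottom are illustrative placeholders, not the study's data.

```python
def schneider_orelli(treated_pct: float, control_pct: float) -> float:
    """Corrected mortality (%) = (T - C) / (100 - C) * 100."""
    if not 0 <= control_pct < 100:
        raise ValueError("Control mortality must be in [0, 100).")
    return (treated_pct - control_pct) / (100.0 - control_pct) * 100.0


# Hypothetical: 86.67% treated mortality with 5% control mortality.
print(f"{schneider_orelli(86.67, 5.0):.2f}%")  # -> 85.97%
```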
Investigating the Effect of Viscous Yield Dampers on Concrete Structure Performance
Viscous dampers are among the most effective devices for energy dissipation in buildings. In a hybrid passive system, each damper compensates for the weaknesses of the other, thus increasing the efficiency of passive structural control. Velocity-based viscous dampers adjust their damping force according to the acceleration and velocity entering the system, whereas displacement-based yield dampers adjust their damping force according to the imposed displacement. Given the different behavior of these two dampers, the effect of using both of them in one structure can be investigated. In this study, the seismic behavior of concrete structures combining these two dampers has been evaluated. For this purpose, 5- and 10-story structures were designed using the finite element (FE) method and subjected to earthquake records. Time-history analysis shows that the use of hybrid dampers reduces the seismic input force to the structure, that story drift is reduced, and that the capacity of these structures is increased. The results of the study show that the presence of dampers in the structure increases energy absorption and improves structural performance.
Introduction
In conventional methods, a building resists earthquakes through a combination of stiffness and ductility as well as energy dissipation. An efficient method for improving seismic performance and controlling damage in structures is the use of energy-dissipating systems. In this method, mechanical energy dissipation devices are placed in the structure and dissipate the input energy; as a result, there is no need to rely on high structural ductility and the nonlinear behavior of the main members to dissipate the input energy. One of the most important mechanical energy dissipation tools is the viscous damper, and the location of these devices and the way they are placed in the structure have a great impact on their efficiency and effectiveness. Tsai et al. in 1998 [1], using analytical models, showed that combining velocity-dependent and non-velocity-dependent devices in a structure is a powerful way to increase seismic protection; they used a combination of a TPEA metal yielding device (triangular sheet energy-absorbing device) as a hysteretic element with a viscoelastic (VE) damper. Chen et al. in 2002 [2] used a six-span, four-story frame, and the results demonstrated the strengths of different devices in counteracting each other's weaknesses. Ibrahim et al. in 2009 [3] studied an elastomeric damping material formed using VPD devices; the VPD increases damping with increasing displacement of the elastomer, and the energy absorption capacity increases as soon as the steel elements yield. The damper has a hyperelastic effect when it undergoes large displacements, stiffening the structure during severe seismic events to prevent collapse. Murthy in 2000 [4] studied VHD devices consisting of concentric steel rims connected to the center of the structural opening using four braces; this is a multistage device, similar to the VPD device, with a large energy dissipation capacity due to the yielding of the steel and the geometry of the device. Recent research into hybrid or composite devices has included adding viscous dampers to a stiff lateral system with metal dampers; the goal is to add damping at small displacements and to reduce nonstructural damage and floor accelerations. Bruneau in 2015 [5], in an analytical study of a single-degree-of-freedom system, found that viscous dampers reduce the effect of metal dampers; it was also found that floor accelerations were likely to increase for systems with small strain-hardening stiffness ratios. In 2014, Amadio et al. [6] analytically and experimentally investigated a hybrid system using PR joints and viscoelastic dampers.
The advantage of a PR connection is that damage is minimal, confined at most to a frame device with a connection hysteresis loop. Viscoelastic dampers were used in conjunction with chevron braces. The test results showed a significant reduction in displacement demand and device damage. This type of system is also able to meet performance-based design criteria. The analysis showed that the best performance was obtained at the lowest cost, with a damping ratio of 11% or less.
So far, researchers have reviewed the hybrid passive control devices mentioned above. The idea of a combined HPCD passive control system was first proposed by Justin Marshall in 2013 [7]. The original HPCD demonstrated the phased behavior and energy dissipation expected of the system, and finite element models confirmed this phased behavior and energy dissipation. Together, structural and seismic hazard assessment provides an exceptional tool for performance-based seismic design. Investigating the performance of structures under lateral loads, especially earthquake loads, is of great importance, and controlling, channeling and dissipating earthquake energy can greatly assist the economical design of structures being designed and built. Therefore, in a previous study, the performance of structures with and without two-level dampers was investigated using three frames of 5, 10 and 15 stories. The results of that study show that the use of two-level dampers increases the capacity of the 5-story structure by 4.7% and improves the performance of the 10- and 15-story structures by 7.72% and 8.1%, respectively. In effect, the damper increases the capacity of the structure by absorbing lateral forces, and the structure withstands large demands while maintaining acceptable displacements [8]. Few studies have been performed on hybrid passive control devices, and the combination of velocity-dependent devices, with their ability to reduce small vibrations, and metal or friction (displacement-dependent) devices, with their high energy absorption capacity, has considerable potential for future research [9].
In this study, four concrete structures in the form of two-dimensional frames are numerically modeled. To evaluate them under earthquake loading, the base shear and the story drift are used, which indicate the seismic response of the structure. Regarding the structural pushover curve, it can be said that the pushover curve of the structure with dampers lies above that of the structure without dampers [10].
Material and Methods
In this research, four concrete structures in the form of two-dimensional frames, with 5 and 10 stories, a story height of 3.2 m and a bay width of 6 m, are considered. The structures are modeled once without dampers and once equipped with both a viscous damper and a yield damper; both dampers are used simultaneously in the concrete frame [9]. After modeling, the frames are analyzed and designed. The dampers are positioned in the second and fifth bays, as indicated in the following figures. A view of the modeled concrete frames is given in Figure 1.
Nonlinear damper-exponential and multilinear plastic link elements have been used to model the viscous and yield dampers, respectively [11]. The behavioral model of these elements is shown in Figure 2. The models are subjected to earthquake records and dynamic time-history analysis; the records used are shown in Table 1.
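For orientation, the damper-exponential link referenced above is conventionally described by the force law F = C·sign(v)·|v|^α, where C is the damping coefficient, v the relative velocity across the device and α the velocity exponent. The sketch below evaluates that law in Python with illustrative parameter values; C and α here are assumptions for demonstration, not the paper's design values.

```python
import numpy as np


def exponential_damper_force(v, c=120.0, alpha=0.5):
    """Force of a nonlinear viscous (exponential) damper: F = C*sign(v)*|v|**alpha.

    v     : relative velocity across the damper (m/s), scalar or array
    c     : damping coefficient (kN*(s/m)**alpha) -- illustrative value
    alpha : velocity exponent; alpha < 1 flattens the force at high velocities
    """
    v = np.asarray(v, dtype=float)
    return c * np.sign(v) * np.abs(v) ** alpha


velocities = np.linspace(-0.5, 0.5, 5)
print(exponential_damper_force(velocities))
# alpha < 1 gives a flatter force-velocity curve than a linear damper,
# which is why such devices limit the force delivered to the structure.
```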
Numerical Simulation
To evaluate the models under the earthquake records, the base shear and story drift parameters have been used, which indicate the seismic response of the structure. Figures 3-6 show the base shear of the 5- and 10-story structures with and without dampers separately, together with the average base shear of the structures under the 7 earthquake records. As can be seen, with the installation of dampers, the base shear is significantly reduced for both building heights. Figures 7-10 show the maximum story drifts. To determine the drift of the floors, the drift time history of each floor is extracted and the maximum value obtained from it is selected as the maximum drift. It can be seen that, with the dampers in place, the maximum drift of the floors has also decreased.
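The drift extraction described above is a simple maximum over each story's time history; a minimal NumPy sketch is given below, where the displacement array is a random placeholder standing in for recorded analysis output, and the 3.2 m story height matches this study.

```python
import numpy as np


def max_story_drifts(displacements, story_height=3.2):
    """Peak interstory drift ratio per story from displacement time histories.

    displacements : array of shape (n_steps, n_stories), lateral displacement
                    of each floor relative to the ground (m)
    story_height  : uniform story height (m)
    """
    disp = np.asarray(displacements, dtype=float)
    # Relative displacement between consecutive floors (ground assumed fixed).
    ground = np.zeros((disp.shape[0], 1))
    interstory = np.diff(np.hstack([ground, disp]), axis=1)
    return np.max(np.abs(interstory), axis=0) / story_height


rng = np.random.default_rng(0)
fake_history = np.cumsum(rng.normal(0.0, 1e-3, size=(1000, 5)), axis=1)
print(max_story_drifts(fake_history))  # one peak drift ratio per story
```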
Conclusion
Regarding the base shear, it can be concluded that the base shear of the 5- and 10-story structures under the influence of the dampers decreased significantly compared with the structures without dampers. According to the average values, the base shear in the structure with dampers is about 86% for the 5-story structure and about 62% for the 10-story structure, relative to the structure without dampers. The maximum drift of the 5- and 10-story structures under the influence of the dampers also decreased significantly compared with the structures without dampers; according to the average values, the maximum drift with dampers is about 62% for the 5-story structure and about 17% for the 10-story structure, relative to the structure without dampers. Regarding the pushover curves, the curve of the structure with dampers lies above that of the structure without dampers. The presence of dampers in the structure increased the energy absorption of the 5-story structure by about 64% and of the 10-story structure by about 60%. It can therefore be concluded that the presence of dampers in the structure increases energy absorption and improves structural performance.
Data Availability
Requests for access to these data should be made to the corresponding author via e-mail: nima.marzban@tabari.ac.ir.
Sacral neuromodulation implanted patients: Patient concerns during the COVID-19 pandemic and practical modifications
Objective: To study the effect of the COVID-19 pandemic on sacral neuromodulation (SNM) implanted patients and examine patient concerns. Methodology: A web-based survey was sent to all SNM patients, including those with implants and those whose operation had been cancelled because of the pandemic. The survey consisted of 15 questions in the Arabic language and sought to evaluate outcomes as well as patient concerns and preferences during the COVID-19 pandemic. Results: A total of 66 patients were contacted, of whom 62 replied. Most of the patients (n = 51; 82.3%) had the device implanted, and 11 (17.7%) patients had an operation postponed secondary to the pandemic. There were 20 males and 42 females, and the mean age was 34 ± 16.5 (SD) years (range 9-62 years). Indications for sacral neuromodulation therapy were refractory overactive bladder (OAB; n = 35, 56.5%), urinary retention (n = 17, 27.4%) and OAB with retention (n = 3, 4.8%). When questioned on the effect of the lockdown, most patients reported no effect (43.5%), while 14.5% had some programming difficulties. Patients preferred telephone calls for device emergencies (88.7%) and for clinic follow-up (98.4%). Most patients had no concerns regarding their Interstim device during the pandemic and found the situation manageable; 8.1% had insurance concerns because of the economic changes. Conclusion: Patients with implanted SNM for lower urinary tract symptoms were mainly concerned with device programming. Telemedicine is a great solution for continuous care in this group.
Introduction
The healthcare system has faced many challenges throughout the COVID-19 pandemic. COVID-19 was first reported as pneumonia from Wuhan, China [1,2]. By March, the World Health Organization had declared COVID-19 a pandemic, recommending that all countries take immediate action [3]. The pandemic led to many changes in healthcare, with treatment priority given to those with urgent medical conditions; urology practice was affected as emergency and oncology patients were prioritized [4]. Management of patients with lower urinary tract symptoms was delayed until the pandemic was under control, and treatment for patients with urinary incontinence, non-obstructive retention and frequency-urgency, including operative interventions, was postponed.
Sacral neuromodulation (SNM) therapy involves implantation of an electrode and a battery, performed in two phases 2 weeks apart. The first phase is electrode implantation under fluoroscopy; patients complete a voiding diary prior to surgery and during the test period (after phase I). When a patient shows ⩾50% improvement, the second operation (phase II) is performed to implant the battery. The device then requires programming to initiate the benefits of therapy. Both procedures (phases I and II) are performed under general anesthesia in our center; however, programming sessions may require several repeated attempts until the best program for symptom control is defined. Complications of SNM include loss of efficacy and battery depletion, as well as electrode migration, breakage and erosion, all of which require surgical intervention.
Sacral neuromodulation was introduced in Saudi Arabia within the last 10 years. As with any newly introduced therapy, close patient care, frequent programming and good follow-up are needed. The COVID-19 pandemic caused Saudi Arabia to announce a lockdown, and elective operations were postponed to reduce the risk of patients being exposed to COVID-19. The International Neuromodulation Society has published several reports on Interstim cases postponed because of the COVID pandemic. We conducted our study to assess the impact of the COVID-19 lockdown on sacral neuromodulation implant patients at our center.
Materials and methods
This study was approved by our hospital's ethics review board (Unit of Biomedical Ethics Research Committee of King Abdulaziz University, reference no. 395-20), and written informed consent was obtained electronically from patients. A web-based survey consisting of 15 questions in the Arabic language was sent to patients to assess the effect of the COVID-19 lockdown on both implanted patients and booked patients whose operations were postponed because of the lockdown. The survey evaluated the effect of the lockdown from March 2 to May 30, 2020. The questionnaire assessed patient demographics, COVID-19 infection status, insurance service, clinic follow-up preference, potential effects on therapy, programming issues and lockdown concerns. The questionnaire was developed using Google documents. The hospital secretary contacted both implanted patients and patients whose operations were cancelled; the questionnaire was sent over social media (WhatsApp), and a reminder was sent two weeks later. Parents of paediatric patients provided consent on behalf of their children and also completed the survey.
Data were collected and exported as Excel sheets and then coded for statistical analysis using IBM SPSS 26. p values below 0.05 were considered statistically significant. Data were reported as frequencies for each question and expressed as percentages.
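Frequency-and-percentage reporting of this kind is a one-line tabulation in most statistics tools. As a neutral illustration outside SPSS (which the authors used), the pandas sketch below summarises a hypothetical survey item the same way; the column name and answer options are invented for the example.

```python
import pandas as pd

# Hypothetical responses to one survey item (not the study's data).
responses = pd.Series(
    ["telephone", "telephone", "in person", "telephone", "virtual clinic"],
    name="follow_up_preference",
)

counts = responses.value_counts()                       # frequencies
percentages = responses.value_counts(normalize=True) * 100

summary = pd.DataFrame({"n": counts, "%": percentages.round(1)})
print(summary)
```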
Results

When questioned on the effect of the COVID-19 lockdown, most patients reported no effect (43.5%), while 14.5% had some programming difficulties and 17.7% had their operation cancelled. Programming is usually done by company personnel, and most patients preferred to delay their programming sessions (32.3%); 25% agreed to programming while following COVID-19 precautions. Patients preferred telephone calls and virtual clinics both for emergency issues with the device and for follow-up clinic visits (88.7% and 98.4%, respectively). Patients preferred to delay their implantation and surgical intervention (88.7%) and opted for other, less effective alternative therapies (intermittent catheterization in the retention group and multiple anticholinergic medications for refractory OAB despite limited benefit) in a total of 69.4% of cases. There was a preference for general anesthesia (56.5%) over local anesthesia (43.5%) for surgical intervention. Most patients had no concerns regarding their implanted device during the pandemic and found the situation manageable, but 8.1% had concerns regarding insurance coverage and 9.7% were concerned about the delay until COVID-19 issues were resolved (Tables 1 and 2; Figures 1 and 2). No patient reported complications during the 3-month lockdown.
Discussion
This paper presents the results of a survey study examining the effect of the COVID-19 pandemic on patients who underwent SNM surgeries and programming at a single clinical center in Saudi Arabia. The survey specifically asked about care during a 3-month lockdown window from March to May 2020.
Most SNM implanted patients reported no effect on their condition (43.5%), but programming was a major problem in 14%. Programming requires direct contact with trained personnel, which poses an infection risk; 32.3% of patients preferred to delay their programming session, and 43% used alternative therapy to cope with their condition. During the lockdown period, no patient developed any erosion [13,14]. Pediatric sacral neuromodulation is not FDA approved but has shown favourable results in multiple studies [14]. In our study, we included pediatric cases implanted for refractory overactive bladder, urinary retention and neurogenic bladder secondary to spina bifida.
Bekkers and Koopman [15] and Evenett et al. [16] predict that the COVID-19 pandemic will have major effects on the world economy, which might affect patient decisions. As sacral neuromodulation can be an expensive option, our patients had concerns about future private insurance coverage.
Study limitations include the single-center design and the small sample size. It would be preferable to increase the sample size and perform a multicenter, worldwide study. The questionnaire is also not validated, and the authors intend to address this in the future. The age range of our patients was 9-62 years, and the use of this therapy in children is relatively new, which limits the generalizability of the results.
Conclusion
We found that patients with implanted SNM for lower urinary tract symptoms faced major issues with device programming during the COVID-19 lockdown.
Consent
All patients gave written consent; parents of paediatric patients gave consent on behalf of their children.
All consents were obtained electronically.
Author contributions
Mai Banakhar: data collection, design, writing, ethical approval request, statistical analysis and review.
Conflict of interest statement
The author(s) declared that there is no conflict of interest.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
The use of fuzzy logic for the clean-up systems control for bunkers, containing bulk solids
The issues of eliminating bulk-solid overhangs in bunkers using pneumatic pulse devices are considered in this article. An algorithm for the automatic control of pneumatic collapse systems based on fuzzy logic methods is proposed. A constructive solution for this algorithm is given, based on assessing the level of bulk material on the conveyor installed at the exit of the bunker. The Mamdani algorithm is used for fuzzy inference, and the Fuzzy Logic Toolbox package of the MATLAB computing environment was used for modeling the fuzzy systems. The use of the mean-of-maximum and center-of-gravity methods for defuzzification is considered. It is demonstrated that the use of fuzzy logic makes it possible to significantly simplify the development of the control system algorithm.
Introduction
In various industries, both domestic and foreign, bunkers are widely used for storing a variety of bulk solids. The volume of such bunkers ranges from several liters to thousands of cubic meters of bulk solids. Bunkers are used, for example, for storing flour, fertilizers, ore, etc., and can be arranged both indoors and outdoors. Moreover, only in very rare cases are the necessary temperature and humidity conditions provided; in general, the material is exposed to the natural parameters of the environment.
Depending on the material loaded, the size and shape of the bunker, and the state of the environment, some of the material may be deposited on the bunker walls due to sticking, freezing, etc., or the material particles may interact with each other and form an arch [1]. If a funnel forms or the material freezes (Figure 1), the supply of material from the bunker can stop, which can occur relatively quickly and halt the entire process, even for equipment placed in a warm and dry room. Uninterrupted supply of bulk material from the bunker must therefore be ensured, and this is achieved by using systems and devices that apply mechanical shock or vibration to the bunker walls, or pneumatic pulse action to the material placed in the bunker itself. These systems and devices include vibrators, magnetic pulse systems and pneumatic pulse systems [2]. Manual cleaning is also used, but it is time-consuming and sometimes dangerous. As practice shows, pneumatic pulse devices are less energy-intensive and more effective at combating overhangs in bunkers. The bulk-material formations in the bunker are destroyed by compressed air or nitrogen released by a pneumatic pulse device that generates gas pulses: the device is filled with compressed gas, and the accumulated gas is then ejected in a fraction of a second, creating a shock effect on the bulk material.
The pneumatic pulse bunker cleaning system is shown in Figure 2. A bunker is usually equipped with from one to forty pneumatic pulse devices; the number of devices depends on the shape and size of the bunker and on the characteristics of the material being loaded. The efficiency of the system depends largely on where the pneumatic pulse devices are installed: they must be placed where the bulk material tends to freeze. Much also depends on the power of the pneumatic pulse devices; with insufficient power, their efficiency drops sharply.
The processes of pneumatic pulse destruction of material hangings in bunkers and selection of criteria for evaluating the quality of the processes
The pneumatic collapse of stagnant areas of material in the bunker occurs due to the impact of a shock gas wave, formed by a pneumatic pulse device, on the material. The barrel through which compressed gas escapes from the pneumatic pulse device is directed at the place of possible stagnation in the bulk material. After a pneumatic pulse shock, the stagnant zone may collapse immediately or after subsequent impacts; once the stagnant zones are destroyed, the material begins to flow to the exit of the bunker.
However, the likely locations where overhangs and material arches will form are chosen by service personnel on the basis of existing practical experience with the equipment.

Most quality criteria for the emptying of bulk material bunkers are based on several variables [3]. Usually, a graded evaluation of such qualitative variables is performed and used to assess the process.
Bunkers (especially small ones), as a rule, are not fitted with sensors for the level of bulk material, and where such sensors are provided they measure the level only at certain points, without giving a complete picture of overhang formation. Visual inspection usually fails because of the high dust content inside the bunker.
Therefore, the simplest way to judge the formation of an overhang is by the absence of material at the exit of the bunker. Usually a conveyor is installed under the exit, carrying the material that leaves the bunker to the consumer. The amount of material leaving the bunker is judged from the amount on the conveyor, which is estimated from the level of material on the belt using a simple sensor: a swinging plate resting on the material on the conveyor. If there is no material on the conveyor, the plate drops and actuates the sensor, and the signal from this sensor indicates that no material is leaving the bunker.
The level of bulk material in the bunker can be monitored using several sensors, for example ultrasonic ones [4]. These sensors can be used to assess the level of material in the bunker, with an accuracy that depends on the number of sensors per unit cross-sectional area of the bunker. In this way, the amount of material in the bunker and its distribution can be estimated, but the emptying dynamics are difficult to predict. Therefore, the first criterion, which judges the formation of stagnation in the bunker from the material leaving it, is used more often.
The proposed solution is based on the use of a sensor interacting with a plate to monitor the level of bulk material, similar to the one described above, but having either an analog output over the entire range of plate movement or several monitored plate positions corresponding to specific levels of bulk material on the conveyor (Figure 3).
Figure 3. Types of bulk material level sensors on the conveyor: (a) a sensor with an analog converter; (b) a sensor with multiple relay-type converters.
Let us first consider the use of an analog converter. The sensor plate changes its angular position depending on the thickness of the material layer on the conveyor; as a result, the sensor outputs an analog signal whose value is proportional to the angle of rotation of the plate and is thus related to the thickness of the bulk material layer. When multiple relay converters are used, each is adjusted to a specific material level on the conveyor.
Control of bunker cleaning systems using fuzzy logic methods
The control algorithm could be built on the basis of results obtained by mathematical modeling of the formation of overhangs or arches. However, modeling these processes is a complex task that requires solving a large number of coupled equations; it demands considerable computing time and resources, and the resulting recommendations can be presented only within a certain range or with a certain, and not always high, probability [3].
In many respects, obtaining control information for the collapse of arches or the destruction of overhangs through mathematical modeling is inefficient, and the recommendations obtained from modeling can only be advisory in nature.
Currently, the most commonly used algorithm is to start cycling the pneumatic pulse devices, simultaneously or sequentially, when there is no material at the outlet of the bunker. This algorithm is easy to implement but leads to high consumption of compressed gas, because the system starts working only once an overhang or arch has already formed. To increase efficiency, it is necessary to monitor the dynamics of the material flow from the bunker and to switch on the pneumatic pulse devices when there is a pronounced downward trend in the flow [5].
In this case, it is advisable to use fuzzy logic methods. The sensors shown in Figure 3 produce a signal proportional to the thickness of the bulk material layer on the conveyor: if there is no material on the conveyor, the sensor shows the minimum value, and if the layer thickness is close to the maximum, the maximum value. Efficient discharge of bulk material requires maintaining a large discharged volume per unit time, and this operating mode corresponds to the maximum sensor reading. If the thickness of the material layer on the conveyor decreases, the sensor reading decreases as well, and the operation of the pneumatic pulse devices must be intensified.
Effective control of this system requires monitoring the dynamics of the material thickness on the conveyor. For this purpose, a second sensor can be installed at some distance from the first, or the same effect can be achieved virtually using a time delay. In this way, the system's response to the operation of the pneumatic collapse devices can be tracked.
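A minimal sketch of this "virtual second sensor" idea follows: a fixed-length buffer delays the level signal so that the current and delayed readings can be compared to estimate the trend. The buffer length and sample period below are illustrative assumptions, not values from the paper.

```python
from collections import deque


class DelayedLevelTrend:
    """Virtual second sensor: compare the current reading with one taken
    `delay_samples` ago to estimate the trend of the material layer."""

    def __init__(self, delay_samples: int = 5):
        # Pre-fill with zeros so a delayed sample exists from the first update.
        self.buffer = deque([0.0] * delay_samples, maxlen=delay_samples + 1)

    def update(self, current: float) -> tuple[float, float]:
        """Feed a new sensor reading; return (delayed_reading, trend)."""
        self.buffer.append(current)
        delayed = self.buffer[0]
        return delayed, current - delayed


trend = DelayedLevelTrend(delay_samples=5)   # e.g. 5 s at one sample per second
for reading in [8.0, 7.6, 7.1, 6.4, 5.5, 4.3]:
    print(trend.update(reading))             # negative trend -> intensify pulses
```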
The signal received from the sensor requires fuzzification, i.e. conversion to fuzzy form. For the analog signal value from the sensor, a linguistic variable is created containing terms that describe the signal strength; the degree to which the received signal belongs to these terms is determined by the specified membership functions. A fuzzy logical inference is then made using the Mamdani algorithm, based on a pre-formed rule base. The fuzzy rule base is formed from expert assessments and can be adjusted as new experimental data are obtained. Defuzzification is then performed using one of the known methods; the choice of method depends on many factors and is selected according to the customer's needs. After a crisp value of the output variable is obtained, the pneumatic pulse device generates shock pneumatic pulses to eliminate the stagnation of bulk material in the bunker. The system responds to changes in the sensor readings, which ensures uninterrupted unloading of material from the bunker.
Various software packages can be used to create models of fuzzy systems. We use the Fuzzy Logic Toolbox package of the MATLAB computing environment; its FIS fuzzy inference system editor is the main tool for creating and editing fuzzy inference systems in graphical mode.
We now develop a fuzzy-logic-based control algorithm for the pneumatic pulse system for cleaning bulk-material bunkers. The inputs are the current sensor reading and the sensor reading taken 5 seconds before the current one, which allows the dynamics of the bulk material layer height to be tracked. The sensor values lie in the range from 0 to 10 V. As outputs we could use the duration of the pneumatic pulses and the frequency of their occurrence; however, at this first stage the pulse duration is considered constant, since it is selected from the condition of emptying the compressed-gas receiver. Therefore, the output variable is the frequency of the pneumatic pulses.
We denote the first input variable "Current sensor value" and the second "Previous sensor value", and the output variable "Pneumatic pulse frequency". After this, the FIS editor window looks as shown in Figure 4. Next, we define the terms and their membership functions for each variable: for the sensor values, the terms "Low", "Medium" and "High" are set; for the pneumatic pulse frequency, the terms "Absent", "Low", "Medium", "High" and "Maximum" are set.
For the "Low" and "High" terms of the input variables, trapezoidal membership functions are selected, with parameters [0 0 1 4] for "Low" and [6 9 10 10] for "High". For the "Medium" term of the input variables, a triangular membership function with parameters [2 5 8] is defined.
Membership functions can be easily adjusted after analyzing the process being implemented.
When creating rules, one should rely on common sense and expert assessment. The first rule is set as follows: if the current sensor value is High and the previous sensor value is High, then the pneumatic pulse device does not operate (the frequency is zero, i.e. the term "Absent").
Since this system has two input variables, each with three terms, nine unique rules can be defined. The resulting rules are shown in Table 1 and Figure 5. After setting all the parameters, we run the simulation; some of the results obtained are presented in Figure 6.
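For readers who prefer code to the MATLAB GUI, the sketch below rebuilds an analogous Mamdani controller in Python with the scikit-fuzzy library, using the input membership-function parameters quoted above. The output-term shapes, the output universe (pulses per minute) and all rules except the first are illustrative assumptions, since the full rule base is given only in Table 1.

```python
# pip install scikit-fuzzy
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

volts = np.arange(0, 10.01, 0.01)   # sensor universe, 0-10 V (as in the text)
freq = np.arange(0, 60.1, 0.1)      # assumed output universe, pulses/min

current = ctrl.Antecedent(volts, 'current_sensor_value')
previous = ctrl.Antecedent(volts, 'previous_sensor_value')
pulse = ctrl.Consequent(freq, 'pneumatic_pulse_frequency')

# Input terms with the parameters quoted in the text.
for var in (current, previous):
    var['low'] = fuzz.trapmf(var.universe, [0, 0, 1, 4])
    var['medium'] = fuzz.trimf(var.universe, [2, 5, 8])
    var['high'] = fuzz.trapmf(var.universe, [6, 9, 10, 10])

# Output terms: evenly spaced triangles (assumed shapes, not from the paper).
pulse['absent'] = fuzz.trimf(freq, [0, 0, 10])
pulse['low'] = fuzz.trimf(freq, [5, 15, 25])
pulse['medium'] = fuzz.trimf(freq, [20, 30, 40])
pulse['high'] = fuzz.trimf(freq, [35, 45, 55])
pulse['maximum'] = fuzz.trimf(freq, [50, 60, 60])

rules = [
    # Rule 1 as stated in the text: both readings High -> device idle.
    ctrl.Rule(current['high'] & previous['high'], pulse['absent']),
    # The remaining rules are plausible stand-ins for Table 1.
    ctrl.Rule(current['medium'] & previous['high'], pulse['medium']),
    ctrl.Rule(current['low'] & previous['high'], pulse['maximum']),
    ctrl.Rule(current['low'] & previous['low'], pulse['high']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['current_sensor_value'] = 2.0    # thin layer now...
sim.input['previous_sensor_value'] = 7.0   # ...thick layer 5 s ago
sim.compute()
print(sim.output['pneumatic_pulse_frequency'])  # high pulse rate expected
```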
As can be seen from the results, the value of the output variable for the system under consideration changes unevenly, with flat sections (Figure 6a). By modifying the membership functions of the input variable terms, a smoother change in the output variable can be achieved (Figure 6b): the fuzzy-inference surface is noticeably smoothed, corresponding to a smoother change in the output variable.
For the surfaces shown in Figures 6a and 6b, the center-of-gravity method was used for defuzzification; it provides smooth control and takes all active rules into account. However, with this method it is not possible to obtain a zero pneumatic pulse frequency.
Using the mean-of-maximum method makes it possible to use the entire output range; the membership functions remain the same (Figure 6c), and only the defuzzification method changes.
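The difference between the two defuzzification methods can be seen directly with scikit-fuzzy's defuzz function; the clipped output set below is a toy aggregate, constructed only to show that the mean-of-maximum ('mom') method can reach the edges of the range, including zero frequency, while the centroid cannot.

```python
import numpy as np
import skfuzzy as fuzz

x = np.arange(0, 60.1, 0.1)
# Toy aggregated output set whose maximum sits at the left edge (zero frequency).
aggregate = np.fmax(fuzz.trimf(x, [0, 0, 10]) * 0.8,
                    fuzz.trimf(x, [20, 30, 40]) * 0.3)

print(fuzz.defuzz(x, aggregate, 'centroid'))  # pulled toward the middle, > 0
print(fuzz.defuzz(x, aggregate, 'mom'))       # mean of maximum: 0.0
```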
Conclusion
The use of fuzzy logic methods makes it possible to obtain new algorithms for controlling the operation of pneumatic pulse devices. The choice of the defuzzification method and of the membership function types depends on the material loaded, the shape and size of the bunker, and the customer's requirements. The mean-of-maximum method is advantageous when the pneumatic pulse device operates rarely; when the device operates in a nearly continuous mode, the center-of-gravity method is effective.
Using fuzzy logic significantly speeds up the development of the control system and provides tools for fine-tuning the resulting system after pilot tests. The control system determines the required operating frequency of the pneumatic pulse devices from the readings of a single level sensor for the bulk material on the conveyor installed at the bunker outlet. Despite its simplicity, the system can provide high control accuracy and fast response.
Creating control algorithms based on fuzzy logic is much easier and more illustrative than traditional methods that use the results of modeling complex mathematical models. Fuzzy logic methods make it possible to obtain results that meet the set objectives with significantly less effort.
Reframing healthy food choices: a content analysis of Australian healthy eating blogs
Background: Blogs are widely used by health professionals and consumers to communicate and access nutrition information. There are numerous benefits for dietitians in establishing and contributing to healthy eating blogs, in particular to disseminate evidence-based nutrition information that promotes healthier dietary practices. The aim of this study was to explore the characteristics of popular healthy eating blogs and inform the provision of healthy eating information in the Australian context. Methods: A content analysis approach was used to identify characteristics of popular Australian healthy eating blogs. A purposive and snowball sampling approach was used to identify healthy eating blogs from search engines including Google, Bing and Yahoo. Blogs were deemed eligible if: (1) the author self-identified as a health professional; (2) the blog was written by a single author; (3) the blog was written by an Australian author; (4) the blog had a minimum of one post per month; and (5) the blog focused on communicating healthy eating information to the general adult population. Results: Five popular blogs were followed over a three-month period (December 2017-March 2018), with 76 blog posts included for analysis. The characteristics of these popular blogs were examined and four main features were identified: (i) clearly conveying the purpose of each post; (ii) developing a strong understanding of the reader base and their preferences; (iii) employing a consistent writing style, vocabulary and layout; and (iv) communicating healthy eating information in a practical manner. These findings reveal important insights into the features that promote effective nutrition communication within this context. Conclusion: The findings of this study highlight common characteristics of popular healthy eating blogs. Future research into the development of blog guidelines incorporating the characteristics identified in this study can support dietitians in establishing or contributing to the successful provision of evidence-based nutrition information through blogs.
Background
In Australia, it is estimated that 83% of the population have access to and use the internet [1,2]. In 2011, the Pew Research Center reported that seven in ten adults searched for health-related information online [3]; the same report indicated that, of a sample of 2065 internet users, 44% specifically searched for nutrition-related information [3]. Additionally, it has been suggested that consumers proactively seek nutrition-related information online, with social networking sites a preferred platform for accessing this information [4-6]. Therefore, to reach consumers online with health and nutrition information, health professionals commonly use social networking sites, including Facebook, Instagram, Twitter and blogs [7].
Blogs are growing in interest and popularity and are commonly defined as websites containing posts presented in reverse chronological order [8-10]. Health-related blogs are currently used in a variety of ways, including as a tool for health promotion, for discipline-focused information provision, to offer support for individuals with chronic disease, as a carer support network, as a tool to understand mental health concerns, and to explore the motivations and expectations of individuals [11]. A recent review of social media practices within the dietetic profession identified discussion forums, blogs and other social networking sites as common tools for the dissemination of nutrition information [12]. While blogs are increasingly used by dietetic and health practitioners, they are also becoming an accepted platform through which consumers access nutrition information [9]. This is important, as blogs focused on communicating knowledge about healthy eating have the capacity to reach a diverse sample of consumers who regularly use the internet [7-10]. In addition, there are many benefits for dietitians in establishing or contributing to blogs, including promoting healthier food choices and dietary behaviours [11-14], providing an inexpensive means of communication [15,16], and giving consumers continual access to evidence-based nutrition information [17].
Disseminating healthy eating information effectively is integral to informing consumers about healthier dietary choices to reduce chronic disease risk [14]. Previous research has focused on factors that influence the effectiveness of healthy eating educational materials on websites and in print media, rather than in healthy eating blogs specifically [18-21]. Factors influencing the effectiveness of educational material include readability, writing style [18-20,22-24] and appropriate use of vocabulary [21]. To account for poor literacy levels, there is consensus in the literature that educational materials should be written to a sixth-grade reading level (equivalent to an 11-12-year-old child with six years of schooling) [21,22,25]. Writing style is also important: an active and conversational tone is recommended, as it encourages the reader's interest through the use of familiar language and also increases readability [24]. In addition, appropriate use of vocabulary is an essential feature of written education; materials should avoid technical jargon and abbreviations, as unfamiliar terms can deter the reader [24]. This study explores the characteristics of popular healthy eating blogs and the ways in which healthy eating messages are communicated within the Australian context. It also aims to offer suggestions for how this information can be communicated, to guide dietitians in creating popular (successful) healthy eating blogs.
Overview
This study was conducted using a content analysis approach to identify characteristics of popular healthy eating blogs. Content analysis is a research method increasingly used to analyse digital content, including written and visual content in internet forums, websites and social media platforms [26-29]. Australian healthy eating blogs authored by self-identified health professionals were identified and followed for three months (December 2017-March 2018).
Sampling and selection
A purposive and snowball sampling approach was used to identify healthy eating blogs from search engines including Google, Bing and Yahoo (Fig. 1). The search terms used in each search engine were 'Australian Healthy Eating Blogs' and 'Top 100 Australian Healthy Eating Blogs'. There is no precise method for estimating the number of online healthy eating blogs [30,31]; therefore, blogs were identified by focusing on the first page of each search engine, on the assumption that these are the most widely read and influential [30-32]. This decision was driven by a study conducted by Jansen and Spink (2009), which investigated consumer search-engine click-through behaviour and reported that 73% of consumers did not move beyond the first page of search engine results [32]. Additionally, current knowledge of search engine optimization (SEO) suggests that first-page search engine results attract higher website traffic [33].
Each webpage included was systematically searched and assessed for eligibility. Healthy eating blogs were deemed eligible if: (1) the author self-identified as a health professional; (2) the blog was written by a single author; (3) the blog was written by an Australian author; (4) the blog had a minimum of one post per month; and (5) the blog focused on communicating healthy eating information to the general adult population. Blogs were excluded if they focused on a specific condition or disease state, if they could not be accessed due to a broken web link, or if they were only privately accessible. Additionally, micro-blogs, defined as short posts restricted to 140 characters, were not included for analysis, as the primary focus of this study is to explore blog characteristics and the ways in which healthy eating messages are communicated [34,35]; given the character restriction, the intended purpose of micro-blogs is to provide brief updates [34,35].
In this study, authors who identified as health professionals were chosen because they are already an accepted source of nutrition and health information. It was assumed that blogs written by health professionals for the exclusive purpose of providing healthy eating information would aim to provide evidence-based information, which may not be the case for personal blogs. Health professionals in this study included Accredited Practising Dietitians (APDs), nutritionists, wellness and/or health coaches, personal trainers, general practitioners, or any combination of the above. The qualifications of authors were self-reported on the 'about me' section of each blog and were not further verified by the research team.
Procedure
A total of 14 healthy eating blogs were identified for analysis. Each blog was routinely viewed over the three-month collection period to ensure the blogging frequency met the inclusion criteria (9 blogs did not and were subsequently excluded). Additionally, because of the temporary nature of the internet, and to ensure posts were captured before any later changes, screenshots were taken of each blog post and stored as separate Word documents.
Ethics
The study was approved by the relevant institutional Human Research Ethics Committee [removed for blind peer review]. Conducting internet research presents challenges that need to be acknowledged, including the boundary between public and private content in relation to consent [36]. Hookway (2008) suggests that 'accessible blogs may be personal but they are not private' [37]. In this study, blogs were considered public if they were not privately accessible or password protected; therefore, the institutional ethics committee did not require individual consent from blog authors. However, to preserve anonymity, blogs were de-identified for the purposes of the content analysis and reporting.
Data analysis
A coding scheme was derived from previous research on the content of information communicated within health websites [19,20,38,39]. The Health-Related Website Evaluation Form (HRWEF) and the Suitability Assessment of Materials (SAM) were adapted to create a coding scheme to help guide the analysis [19,20,38-41]. The HRWEF evaluates the content, credibility, currency, accuracy, reliability, readability and design of health-related websites through a scoring system [19,20,38,39]. The SAM tool evaluates the literacy demand, readability, writing style, vocabulary, context, graphics and illustrations, layout, typography, interaction and cultural appropriateness of health-related websites [19,20]. Each variable was coded through a scoring system reflecting the degree to which it aligned with criteria adapted from the HRWEF and SAM. An a priori content analysis was conducted on four blog posts from the identified healthy eating blogs, dated prior to the commencement of data collection, to ensure the applicability of the coding scheme and for coder training purposes.
Coding was conducted by two researchers [blinded for peer review] using Microsoft Excel. Both researchers had formal university training in nutrition and dietetics. Coding was undertaken collaboratively to aid the reliability and consistency of the coding process. Each blog was assigned an identification letter and each post within a blog an identification number, allowing the two researchers to discuss and compare coding during analysis. Differences in code assignments were resolved by discussing the rationale behind each coding decision and agreeing on a final assignment. Additionally, to facilitate transparency, the researchers documented their own thoughts about the purpose of each blog post at the completion of analysis, which were then shared between them.
General blog characteristics
A total of five blogs were followed over a three-month period, from which 90 blog posts were identified. Of these 90 posts, 14 were excluded because their content was not nutrition related, resulting in a total of 76 blog posts for analysis. Successful healthy eating blogs were defined in this study as those listed on the first page of a major search engine (Google, Bing or Yahoo) [30-32]. The healthy eating blogs identified by the search strategy did not include any authored by self-reported APDs. Although the blogs were written by different authors, there were common characteristics across blog posts: (i) clearly conveying the purpose of each post; (ii) an understanding of the reader; (iii) consistent use of writing style, vocabulary and layout; and (iv) communicating healthy eating information in a practical manner (see Table 1).
Most blog posts focused on a single targeted healthy eating message, directly in line with the purpose of the post, rather than containing multiple healthy eating messages. Authors communicated the purpose early in the blog post using catchy headings and within the first few sentences of the first paragraph. Blog post headings were in the style of practical ways to increase fruit and vegetable intake; simple time-saving food preparation recommendations; and practical ways to improve or add excitement to mealtimes. Readers were given a clear sense of the purpose of the post, and this established expectations.
Findings indicated that authors conveyed a sense of understanding of their readers' needs within blog posts. This was identified by the author directly acknowledging reader comments and concerns as the basis for a particular post. Communication was also encouraged by the author through statements that encouraged and directed the reader to provide feedback using the comment box. However, despite author encouragement, there were differences in reader participation using the comment box between blogs (see Table 2). Reader participation using the comment box ranged from 14 to 100%, with only Blog E achieving reader participation in all blog posts posted. The writing style, vocabulary and layout of posts also appeared to encourage a relationship between the author and the reader. Posts were written in a conversational manner, with technical jargon explained using metaphors and simple language. Authors commonly wrote in the first person and positioned themselves as similar to their readers, as individuals who also face positive and challenging food and nutrition-related experiences. Example statements include: '… I'm passionate about healthy eating …' (Blog A) and 'I've personally watched [removed to de-identify]' (Blog B). Both the conversational writing style and use of simple vocabulary complemented the translation of nutrition knowledge by allowing readers to understand nutrition-related concepts in non-technical language. In addition, blog posts focused on conveying a positive message about the benefits of the food(s), rather than framing topics in negative terms.
Blogs incorporated the use of a consistent visual layout and structure from post to post within each blog. Commonly the visual layout of a post guided the reader's attention from a bolded heading (emphasising the purpose of the post) to a complementary eye-catching photo directly below. The reader was guided towards the introduction of the post which was usually accompanied by another visual cue separating the introduction from the body of the post. The body of the post commonly focused on communicating healthy eating information. The reader was then guided to the end of the post which was identified by the authors' sign off and encouragement to communicate via the comment box.
It was found that 64% of blog posts communicated healthy eating information in a practical manner by using recipes, practical tips and author recommendations (see Table 3). Typically, these techniques would be integrated into the structure of a post. Posts would commonly begin with an introductory paragraph which would state the purpose and/or associated health benefit(s). This would lead into the next paragraph, which would incorporate personal narratives or opinions from the author's perspective, with practical information then provided. Recipes were used to communicate healthy eating messages in several ways, including: a solution for readers to incorporate a specific food with health benefits into their diet; providing healthier alternatives to a dessert or snack; and food alternatives catering for a variety of food preferences. Recipes were influenced by season and the latest food trends; for example, coconut oil, maple syrup, rice malt syrup, avocado, organic coconut sugar, spelt flour and apple cider vinegar were incorporated into recipes. Vegetarian recipes were commonly posted along with current trends and 'superfoods', including 'bowls' (e.g. Buddha bowls, smoothie bowls), 'smoothies' (e.g. green smoothies, matcha smoothies) and 'guilt free' desserts (paleo bars, raw bars); further highlighting the up-to-date nature of blog posts. Practical tips and recommendations were commonly communicated in a straight-forward manner by bullet points or sequential steps. Recommendations were not exclusive to nutrition and often incorporated behavioural and motivational recommendations. For example, 'Make healthy swaps' (nutrition-related), 'Inspire your friends and family' (motivational) and 'Don't deprive yourself' (behavioural) (Blog D).
A comparison between healthy eating information from blog posts with recommendations from the Australian Dietary Guidelines (ADG) suggested that only 43% of information clearly aligned with recommendations (see Additional file 1: Table S1). Most of the healthy eating information aligned with guidelines two and three, 'Enjoy a wide variety of nutritious foods from the five food groups every day' and 'Limit intake of foods containing saturated fat, added salt, added sugars and alcohol', respectively [42]. Posts often encouraged the consumption of fruit, vegetables, legumes and lean meats; and encouraged the concept of moderation, whilst permitting the occasional consumption of foods that do not fall within the five food groups.
Discussion
This study identifies the main features of a popular healthy eating blog, and the various ways in which healthy eating messages are communicated in this context. This study acknowledges the growing popularity of healthy eating blogs as a means of accessing healthy eating information, and the growing use of healthy eating blogs by dietitians, as a means to communicate this information [12]. This growth highlights the need to consider the development of healthy eating blog guidelines to support dietitians in communicating appropriately and effectively in this context through the presentation of nutritional information online.
Communicating the purpose of each blog post was identified as a main characteristic of successful healthy eating blogs. Authors explicitly stated the purpose of the post in either the title, within the first few sentences of the first paragraph, or through the use of a heading. Hoffmann and Worrall (2004) highlighted the importance of conveying the intended purpose of educational materials to allow the reader to assess whether the material is of value [24]. Clearly conveying the purpose of the post through headings is strongly supported by advertising research, which suggests that headings offering a desired benefit and arousing curiosity are more likely to be engaging [43]. Additionally, headings that incorporate a desirable and believable action, like those identified in the healthy eating blogs included in this study, are also more likely to be engaging and read [43].
A distinguishable feature of all blogs, unlike websites, is the capacity for continuous conversation and communication, which is primarily facilitated by a comment box [21,44]. Findings from this study highlighted varying levels of participation by readers and authors communicating through the comment box. It has been reported that a comment box can provide insights into the views and perspectives of the reader, which can contribute to the development of posts that directly relate to the reader's desire for information, ensuring their long-term readership and commitment [21,43,45]. However, despite the benefits of the comment box in facilitating communication, it is unclear what relationship there is between the author and the reader and what factors facilitate or impede that relationship.
This study also reported that a similar choice of writing style, vocabulary and layout were common characteristics within the structure of the blog posts. While there has been little empirical evidence investigating writing style and use of vocabulary in healthy eating blog posts, some studies have suggested that online health-related information aligns with an eighth-grade reading level and above, as compared to the recommended sixth-grade reading level [19,20,22,25,38,39]. A review of patient education materials for rheumatic disease reported variability in writing style across resources, with the readability of some resources aligned with an eighth-grade reading level and above [19]. Similarly, a review of web-based colorectal cancer screening education also reported that the readability surpassed the recommended sixth-grade reading level [46]. This finding was also supported by Chen and Dunn (2015) who, after analysing 250 Australian-based websites containing online health information, reported that the average reading level also surpassed the recommended level [22]. While the readability of blog posts was not analysed in this study, blog authors consistently used conversational language and short sentence structures, and presented information using bullet points.
The use of a consistent layout creates a sense of familiarity for the reader, allowing the reader to easily navigate through the information within each blog post [41]. Consistent with the literature, blogs included for analysis presented information in a predictable manner, with key messages and information appearing at the beginning of a post [24,43]. Presenting key messages within the first paragraph is believed to effectively capture the reader's attention, as the first paragraph is the most read part of any written material [43]. In addition to the location of key messages, the way they are communicated is also important. It has been suggested that positive key messages, compared to messages that are perceived to have a negative consequence or outcome, are more likely to capture the reader's interest [47]. Interestingly, the blog posts included in this study focused on a positive aspect or outcome of food and nutrition in promoting health and wellbeing, rather than concentrating on a negative outcome.
Blog authors translated healthy eating knowledge in simple terms by generating practical recommendations and ideas for the reader to implement in their everyday life. These findings are supported by a study investigating health and fitness social media use in young adults [48]. The study identified that readers sought and valued practical information and identified the use of social media as a supportive means to encourage behaviour change through the communication of practical information, which then inspired and motivated healthier behaviours [48]. It has been reported that practical information communicated to readers by healthy eating blogs, including recipe ideas, tips to improve diet-related lifestyle issues and general healthy eating tips, is valued by readers and influences returned readership [7,48]. The desire for practical information emphasises the importance of communicating procedural healthy eating information, rather than declarative [49]. While the latter describes the ability to recall and state facts, the former refers to the ability to apply facts in everyday life [49]. Previous research has emphasised the need for nutrition education to communicate procedural information, rather than only the declarative education which has traditionally been communicated [49-52]. However, while procedural knowledge seems to be advocated within the literature, further investigation is warranted to assess whether procedural knowledge facilitates behaviour change, especially in an online setting [53].

This study was subject to some limitations. First, despite conducting an extensive search in major search engines, healthy eating blogs were mainly identified through snowball sampling. There is no gold standard for identifying healthy eating blogs due to the changing nature of the blogosphere; hence, it was assumed that if a blog did not appear on the first page of an engine search, it was not frequently viewed [30-32]. Therefore, it is likely that not all healthy eating blogs that adhered to the inclusion criteria were captured by our search strategy. Seasonality is another consideration, as this study was conducted through the festive season in Australia, and this may have influenced the content of blog posts. A similar investigation over another time period would identify whether seasonal factors influenced the identification of healthy eating blogs and post content. While the aim of the study was to investigate successful healthy eating blogs written by health professionals, the self-reported credentials were not verified, calling into question the assumption of evidence-based information communicated within each blog. This study did not investigate the intended audiences of each healthy eating blog; a future direction for research in this area is to explore whether there are differences in how healthy eating messages are communicated for different sociodemographic groups. Additionally, it is recommended that future research also examines the reliability of nutrition information provided by healthy eating blogs.
Recommendations
Blogs are fast becoming a popular medium to access and communicate healthy eating information [1,6,16,38,54]. Although more research is needed to assess the effectiveness and impact of healthy eating blogs on behavioural change, it is recommended that the dietetic profession embrace this medium as an avenue to disseminate current, evidence-based and practical healthy eating information. To do so, the development of blog guidelines may be useful in providing dietitians, and potentially health professionals in other disciplines, with a framework to create long-term successful blog posts. Guidelines should consider findings from this study, which suggest that successful healthy eating blogs should clearly convey the purpose of each post, understand reader interest by clarifying their needs, maintain consistency of writing style, vocabulary and layout, and communicate nutrition messages in a variety of practical and simple ways.
Conclusion
This study identifies the main features of a successful healthy eating blog, and the various ways in which healthy eating messages are communicated in this context. This study acknowledges the growing popularity of healthy eating blogs as a means of accessing healthy eating information, and the growing use of healthy eating blogs by dietitians, as a means to communicate this information. This growth highlights the need to consider the development of healthy eating blog guidelines to support dietitians in communicating appropriately and effectively in this context through the presentation of nutritional information online.
Additional file 1: Table S1. Comparison between healthy eating information from blog posts with recommendations from the Australian Dietary Guidelines (ADG). | 2019-12-21T14:04:21.537Z | 2019-12-19T00:00:00.000 | {
"year": 2019,
"sha1": "07dde15aa425f3e08b993936b462bdbca4b36d13",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-019-8064-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f96c3793fba8638c45a3387ae248aa6c90a13b93",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233424227 | pes2o/s2orc | v3-fos-license | Opening Gated Communities and Neighborhood Accessibility Benefits: The Case of Seoul, Korea
The level of spatial accessibility is directly related to how street networks are connected. Connected or so-called "permeable" network systems encourage walking, cycling, and riding public transit. Fast urbanization during recent decades in the world's metropolises has created separated urban areas. Gated-style apartment complexes have made this segregation more obvious with their inaccessible internal networks. Opening the internal networks of apartment complexes and redesigning the pedestrian paths among apartment buildings would significantly mitigate these networks' adverse effects on network permeability and increase spatial accessibility. This paper analyzes how such an opening design proposal for apartment complexes can change spatial accessibility using the case study of Mapo-gu, Seoul, Korea. It simulates three types of street networks and compares the results of accessibility in three conditions: (1) the internal networks of apartment complexes are not used by outsiders; (2) the internal networks of apartment complexes are open to outsiders with their existing entrances and paths; and (3) the internal networks of sites are opened and redesigned by the Voronoi diagram method, which generates the optimal shortest path. An urban network analysis tool, Rhinoceros three-dimensional software, and the Grasshopper3D visual programming language have been used for the study; the results show that a policy change opening the internal networks of apartment complexes is likely to make the city more permeable. In addition, this study suggests extra modification of the pedestrian paths for a higher level of accessibility in neighborhoods.
Introduction
Walking distance is an essential factor influencing whether or not people choose public transport, especially in areas of large residential sites and non-central suburbs [1,2]. Some studies have found that it accounts for the highest proportion among the factors influencing transit ridership [3]. In this sense, the characteristics of a built environment that increase the distances of walking routes to public transit should be well examined to make better policies. As an example of such characteristics, large apartment complexes are expected to increase walking distances to public transit.
The apartment is a representative type of housing in Korea. According to the Korean Population and Housing Census [4], more than half the country's population lives in apartments. An apartment is characterized by many households occupying a single building, which supports both the independence of individual families and private ownership of the shared space within the apartment complex. Individual households' autonomy and the shared space for residents within the complex provide a pleasant and intimate living environment. Still, due to the lack of consideration for the apartment complex's external environment, apartment sites have been criticized for promoting physical disconnection from their surrounding area and creating closed communities.
Another aspect of Korean apartment complexes is that they occupy larger areas than do examples in other countries. This increases the impact of disconnection [5]. The privatization of excessively occupied spaces weakens the sense of urban community and causes the disconnection of neighboring social networks, putting the neighborhood's social sustainability at risk. In addition, large-scale apartment complexes degrade neighborhood environmental sustainability with disconnected open spaces. Studies show that the percentage of students walking to school decreased from 49% in 1969 to only 13% in 2009 [6]. This result is related to both the change in transportation systems and the influence of urban forms, which hinder walking activity. In Dogan et al. [5], we focused on the impact of large-size apartment complex developments and their influence on walking environments in neighborhoods. We found that with the development of large-scale apartment complexes, the walking time to schools increased, causing a decline in walking rates and more automobile dependency.
The internal pedestrian paths of apartment complexes mostly differ by design from the typical pedestrian networks of the surrounding neighborhood. Apartment complexes have more irregular and ornamented paths that are shaded or lighted. Some may claim that this increases walking comfort, but such irregular paths cause unfamiliarity, because pedestrians from outside may feel unaware of the place, which is a reason not to choose such a walking route. In addition, the isolated walking paths of an apartment complex's residents deprive the neighborhood of a sense of community by reducing social contact and hindering the formation of relationships with neighbors. Furthermore, privatizing roads and public spaces in the complex, limiting the inflow of cars and pedestrians, and emphasizing security cause massive segregation in the city center.
Internal streets of Chinese, South American, and Middle Eastern gated communities are segregated from surrounding street networks by gates and walls, whereas some apartment complexes in Korea do not hinder pedestrians from passing through the gates into their internal networks. However, this does not mean a good connection between the neighborhood pedestrian network and the apartment complexes' internal networks. Although some local people may be familiar with apartment complexes' internal street networks, many may not use the pedestrian paths inside apartment sites. This constitutes a barrier for strangers to the neighborhood, because online maps do not provide route options through apartment complexes' internal streets. In this way, isolated apartment complexes are distributed in urban spaces, effectively creating urban islands. The discontinuity of pedestrian paths in the city due to apartment complex development directly and indirectly affects the physical and social network. Less connected pedestrian paths between apartment complexes and surrounding neighborhoods negatively affect people's walking activities and increase the use of vehicles to bridge the disconnection of urban spaces, hindering social and physical sustainability.
Meanwhile, permeability is a core theory of new urbanism, which favors urban design based on the "traditional" (especially in the North American context) street grid. The new urbanist view also affects the policy of some governments. According to the UK's Street Traffic Guidance Manual, the government encourages connected street networks by emphasizing that they encourage walking and biking and make places more comfortable to explore [7]. However, despite the criticism that such large-scale apartment complexes accelerate the separation of urban spaces, apartment complexes are still the preferred residential type. In particular, the construction of an apartment complex may renovate the old road network and improve the pedestrian environment through increases in road width and straightness. In addition, the harmful effects of existing apartment complex construction have mostly been examined from a social perspective; it is therefore necessary to objectively review the impact of apartment complex construction on the physical environment of surrounding neighborhoods.
Therefore, this study aims to analyze, through the following questions, the effect of the policy change to open apartment complexes on pedestrian permeability and accessibility, and its influence on the pedestrian efficiency of the city. The relative differences in accessibility measurements are considered as pedestrian efficiency (PE) in this study. This study also aims to determine whether there are better solutions for increasing accessibility by testing new path planning inside apartment complexes using Voronoi diagrams. This paper explores how the spatial potential of different network configurations in the internal areas of apartment complexes can generate better accessibility. The specific research questions are as follows. How do the spatial characteristics of the apartment complex affect permeability? How and to what extent does opening the roads in the apartment complex change the permeability of the apartment complex and the surrounding area? Furthermore, how can accessibility change under design proposals that modify pedestrian networks inside apartment complexes using Voronoi diagrams?
Literature Review
Cities' openness has been discussed by different urbanists. Modernist urbanists claimed that their form of the city was the best option for an open city, because it offers more ground space for greenery or solar exposure [8]. However, planners such as Jane Jacobs claimed that openness is more related to social activity on the street; the city is open if social interaction is possible [9]. Therefore, Jacobs objected to modernist planners such as Le Corbusier, since their concept hinders social activities and social interactions on the street [9]. Social interaction increases in cities where walking is the chosen method of movement. Therefore, new urbanism suggests walkability as one of the key principles of a sustainable neighborhood [10]. In particular, walking has a positive influence on the formation of community consciousness in the neighborhood by inducing both movement through physical space and social and spatial interaction between people [11].
Unlike in the past, when there were many social and physical restrictions on movement, the development of transportation technology has enabled better movement options between spaces. However, walking is still a basic means of transportation that enables movement between two spaces while minimizing cost, if physical strength permits. More significantly, walking is the means by which riders access transit. Accessibility therefore becomes a critical issue, because it determines whether people choose to walk on the street. Meanwhile, accessibility measurements are used both for walkability studies and to understand different aspects of an urban form, such as its compactness, functionality, sustainability, equity, and centers of social interaction [12].
One element of conflict caused by apartment complexes is the increase in walking distance due to the disconnection of urban pedestrian paths. As a result of the closed form of the apartment complex, outside residents may bypass the apartment complex and try to access their destination using public transit. Briefly, the physical walking distance becomes longer, which acts as an obstacle to walking as a means of passage and is one cause of reduced walking activity [13,14]. Studies on how to integrate and increase the usefulness of the internal networks of apartment complexes include Yang and Yu [15]. They use the Delaunay triangulation method to evaluate and compare average walking distances for currently used and potentially designed pedestrian networks for apartment complexes in Seoul, South Korea. They present three strategies: transit-oriented development, complete streets, and mobility enhancement. Similarly, Ai et al. [16] used Delaunay triangulation in their experimental study on spatial neighborhood relationship representation. Improvement of the pedestrian environment promotes the walking activity of urban residents, which is likely to decrease the use of automobiles, thereby improving the urban environment. In particular, improved pedestrian accessibility in neighborhoods, resulting from improvements in the pedestrian environment outside apartment complexes, affects housing prices [17,18]. Housing preference increases in areas with good public transportation and pedestrian access to schools, so there is a positive correlation between pedestrian friendliness and housing prices [19,20]. There is also a statistically negative correlation between crime incidence rates and the occurrence of walking activity on a given street [21]. Jacobs' theory of "eyes on the street" also supports this negative correlation [9]. An excellent pedestrian environment in the neighborhood enhances walking accessibility to neighboring facilities and ensures pedestrian safety, so the willingness to live in the area increases [22]. Furthermore, the increase in citizens' walking activities in external spaces increases accessibility to and use of neighboring facilities in areas with excellent pedestrian environments. However, the construction of a large-scale apartment complex has both positive and negative aspects in terms of the pedestrian environment. Apartment sites negatively affect the walking experience for transit riders: they increase route lengths, and the fences of gated communities create a tedious experience for pedestrians [23].
An early comprehensive study on neighborhood characteristics, street form, connectivity, and urban community identified spatial typologies and analyzed the patterns of growth, land use, and street layouts in a case study of the San Francisco Bay area [24]. Among the patterns, the gridiron form allows the highest level of accessibility, and the form of lollipops on sticks (i.e., "cul-de-sacs") has the lowest level of accessibility. However, high connectivity has a privacy disadvantage: privacy decreases as accessibility increases.
There are two main approaches to walkability in the literature: network connectivity and audit tools [25]. Briefly, connectivity analysis uses street networks to characterize the pattern of walkability, and audit tools help to document walkability from the perspective of pedestrians using standard forms. The former is more universal and can be easily adopted, whereas local characteristics may have a greater influence on the latter. In addition, the centrality concept has been used in walkability studies. Many centrality concepts were first developed in social network analysis [26]. However, the methods for analyzing the built environment and social capital are quite new. Liu et al. [27] investigated street centrality and its impact on land use intensity in Wuhan, China, and suggested that the walking network has a direct effect on activating walking. Similarly, Sun et al. [23] studied China's policy of opening gated communities to outsiders from a permeability perspective. The changes in betweenness after opening the gated communities give some insight into how the permeability of the city will affect the vitality of streets under a policy of un-gating apartment complexes. Yue et al. [28] also focused on street centrality and its influence on urban vitality, using social network review data in Wuhan, China. The straightness and betweenness measures have a direct positive effect on urban vitality. In a permeable neighborhood, all streets become lively instead of only the main arteries. In addition, better street centrality and accessibility directly affect housing values in neighborhoods [29].
There are different approaches to the impact and benefits of opening policies. For instance, Dong et al. [30] approached the issue with a focus on its benefit for relieving the traffic burden. Their results showed that an opening policy would help decrease traffic congestion around gated community sites. Yang et al. [31] emphasized the benefit for pedestrian and cyclist accessibility and route choices by measuring second-level scenarios of opening gated communities. However, to give a comprehensive answer regarding the benefits of opening gated communities, additional analyses should be done on the accessibility changes produced by different opening strategies.
Study Area
The purpose of this study is to analyze, based on network connectivity, the effect of the openness of an apartment complex on the pedestrian environment in a neighborhood. To this end, the impact of the road-opening policy in the apartment complex on the permeability of the pedestrian environment was compared using data on the progress of the 2018 Seoul Maintenance Project. The area of the east side of Mapo-gu, an apartment complex cluster derived through k-means clustering analysis, was selected as the target area for analysis (Figure 1). Another reason to choose this area as the case is that the apartment complexes are distributed among the non-gated traditional neighborhoods (Figure 2). This would improve our understanding of how apartment complexes are affecting their surroundings. After determining the apartment complexes that are made of more than two blocks, each apartment complex was digitized by identifying the size of the development, the road network in the complex, the entrances, and the locational characteristics of facilities in the surrounding area using Google satellite images and Google Street View data. The apartment sites analyzed in the Mapo-gu case area total 44, ranging from sites of three blocks to sites of 40 blocks.
Data Sources
There are three main components in this study: origin, network, and destination. First, origin is the initial location of human movement, which is usually considered to start from the place of residence. In our study, regardless of building type, we used the whole building stock in the case study area as origins. The building stock data used in this study were obtained from the "Road Name Address DB" provided by the Ministry of Public Administration and Security. Second, the means of transport has two different impacts on pedestrian activity: one concerns walking through (the betweenness and straightness measurements in our study relate to this) and the other concerns walking to (the gravity and reach measurements in our study relate to this). The network data, representing the means of transport, were generated from street centerlines in Seoul road maps. Third, subway stations were chosen as the destination because subway stations are distributed in every part of the city, and subways are the most used public transportation method in Seoul. According to the Korean Statistical Information Service's Annual Modal Share in 2018, the subway's share of public transportation is 41%, whereas the bus's share is only 24% [4].
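As a concrete illustration, the three components might be assembled as follows in Python with geopandas; the file names and the projected coordinate system are assumptions for the sketch, not the study's actual data paths.

```python
# Illustrative assembly of the three analysis components (hypothetical file names).
import geopandas as gpd

buildings = gpd.read_file("mapo_buildings.shp")     # origins: Road Name Address DB building stock
streets = gpd.read_file("seoul_centerlines.shp")    # network: street centerlines
stations = gpd.read_file("subway_stations.shp")     # destinations: 10 subway stations in the case area

# Project all layers to a metric CRS so 400 m / 800 m radii are meaningful;
# EPSG:5186 (a Korean planar CRS) is an assumption here.
buildings, streets, stations = (g.to_crs("EPSG:5186") for g in (buildings, streets, stations))
```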
Methodology
One of the most common methodologies for network and accessibility studies is space syntax, for which various software packages and tools exist. The urban network analysis (UNA) framework has been used for this study. In contrast to the deficiencies of space syntax and other network analysis methodologies, UNA offers a useful modification in that it adds buildings (housing or other infrastructure) to the representation, adopting a tripartite system that consists of three basic elements: edges, nodes, and buildings [32]. In this method's conception, each building is represented as a point, and each point is connected to the nearest street segment (edge) by the shortest line. Figure 2 presents the conceptual framework. In this study's measurements, two walking distances were considered: 400 m and 800 m. The first is the upper distance for comfortable walking, and the second is the upper distance for acceptable walking. Studies suggest that distances up to 800-1000 m are acceptable for walking [31]. However, the most affordable walking distance, accepted as a standard by many authorities, is 400 m [33]. The indicators measured in this study include reach, gravity, straightness, and betweenness. Among these indicators, reach and gravity comprise the accessibility index, whereas betweenness and straightness comprise the centrality index. The higher the improvement in the values of these four measurements, the higher the flow of people on the street and the higher the mobility and vitality along the streets.
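A minimal sketch of the building-to-edge connection step in this tripartite representation, assuming shapely geometries for building centroids and street segments; the helper name is hypothetical.

```python
# Tie each building point to its nearest street segment by the shortest connector.
from shapely.ops import nearest_points

def snap_to_network(building_pt, street_segments):
    """Return the nearest point on any street segment to a building centroid."""
    best = min(street_segments, key=lambda seg: seg.distance(building_pt))
    # nearest_points returns (point on first geometry, point on second geometry);
    # the second element is the snap point on the chosen street segment.
    return nearest_points(building_pt, best)[1]
```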
The definitions of the measurements are as follows. First, "reach" measures the number of destinations accessible within a certain radius. Second, "gravity" measures the travel cost of accessing destinations within a certain radius; it can also be regarded as attractiveness, as being closer to a destination means higher attractiveness. Third, "straightness" measures how directly a route runs between origin and destination. Finally, "betweenness" measures the degree to which a node is used when routes are traced between all origins and destinations.
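The following hedged Python sketch shows one common way such measures can be computed on a networkx street graph; the exponential gravity form and the beta value are illustrative assumptions in the spirit of UNA, not the authors' exact formulation.

```python
# A minimal sketch of the four measures on a networkx graph G whose nodes carry
# planar coordinates as attributes "x" and "y" and whose edges carry a "length"
# in metres.
import math
import networkx as nx

def accessibility(G, origin, destinations, radius=800.0, beta=0.002):
    dist = nx.single_source_dijkstra_path_length(G, origin, cutoff=radius, weight="length")
    reach, gravity, straightness = 0, 0.0, 0.0
    for d in destinations:
        if d in dist and d != origin:
            reach += 1                                    # destination lies within the radius
            gravity += math.exp(-beta * dist[d])          # closer destinations weigh more
            dx = G.nodes[d]["x"] - G.nodes[origin]["x"]
            dy = G.nodes[d]["y"] - G.nodes[origin]["y"]
            straightness += math.hypot(dx, dy) / dist[d]  # 1.0 = perfectly straight route
    return reach, gravity, straightness

# Centrality: how often a node lies on shortest paths between all pairs, e.g.
# btw = nx.betweenness_centrality(G, weight="length")
```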
Three different scenarios of the internal pedestrian networks of apartment sites and their connection to their surroundings are considered in the measurement: gated, partially opened, and fully opened with Voronoi modification (Figure 3). Scenario 1 assumes that the internal networks of apartment complexes are not used by outsiders (because they are gated or because of a lack of awareness about the internal network of the site). Scenario 2 assumes that the internal networks of apartment complexes are open to outsiders as they are (including small entrances, gates, and all pedestrian paths inside the complexes). Scenario 3 assumes the use of a Voronoi diagram to find an optimal network system that minimizes pedestrian walking time in accessing public transportation. In this hypothetical scenario, we redesign the entrances and internal network by a Voronoi diagram, considering the locations of buildings and the outside street network.
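In graph terms, the three scenarios amount to composing different edge sets, as in this sketch; G_streets, G_internal, and G_voronoi are hypothetical graphs of the public streets, the complexes' existing internal paths, and the Voronoi-redesigned paths.

```python
# Sketch of the three network variants compared in this study.
import networkx as nx

G1 = G_streets.copy()                    # scenario 1: gated, internal paths excluded
G2 = nx.compose(G_streets, G_internal)   # scenario 2: existing internal paths opened
G3 = nx.compose(G_streets, G_voronoi)    # scenario 3: redesigned internal paths opened
```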
A Voronoi diagram partitions the plane into cells formed by the perpendicular bisectors of the Euclidean lines connecting the most adjacent points, so that each cell contains the region closest to its generating point. A Voronoi diagram consists of a Voronoi cell, the Voronoi space that surrounds a Voronoi cell, a Voronoi vertex, and a Voronoi edge [34]. Voronoi diagrams have been used widely for the delimitation of maritime zones and mapping coastal boundaries [35] and for mechanical engineering and robotics [36,37], as well as in urban planning and architecture for analyzing or designing indoor circulation, path planning, and service areas [38-42]. In particular, the Voronoi diagram can describe the service area of the city network more accurately than the traditional Euclidean network method [43]. This means that the space divided by a Voronoi diagram can simulate human walking activities more realistically. Additionally, in urban design, a Voronoi diagram gives the opportunity to draw the optimum shortest routes among the buildings or other obstacles of a built environment. Optimum shortest routes differ from geometrically shortest connections in that they do not closely approach obstacles along the route. Moreover, a Voronoi diagram helps to create an environment for each building without interrupting the others (Figure 4). To create a Voronoi diagram, we used the Voronoi plugin in the Grasshopper3D visual programming language. Network studies have two elements in their models: nodes and routes. Thus, we define a network from our Voronoi diagram whose vertices are nodes and whose edges are routes.
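As a rough illustration of this construction outside Grasshopper3D, scipy's Voronoi routine can derive candidate path edges from building centroids; the seed coordinates below are invented for the sketch.

```python
# Derive a candidate internal path network whose edges run midway between
# buildings, keeping clear of the buildings themselves.
import numpy as np
from scipy.spatial import Voronoi

seeds = np.array([[0, 0], [40, 5], [15, 30], [50, 35]])  # hypothetical building centroids (m)
vor = Voronoi(seeds)

# Finite ridge segments become the candidate pedestrian routes; vertices with
# index -1 mark rays extending to infinity and are discarded.
paths = [(vor.vertices[a], vor.vertices[b])
         for a, b in vor.ridge_vertices if a != -1 and b != -1]
# vor.vertices are the network nodes; each surviving ridge segment is a route edge.
```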
Pedestrian Efficiency
In this study, the measurements of reach, gravity, straightness, and betweenness were considered positive indicators for the pedestrian environment. Therefore, the relative differences of these measurements across the three scenarios were considered as pedestrian efficiency (PE). S1 stands for the measurement result of scenario 1 (gated). S2 indicates the measurement result of scenario 2 (partially opened). S3 means the measurement result of scenario 3 (fully opened with Voronoi modification). Meanwhile, PE1 indicates the pedestrian efficiency change for hypothetical scenario 2 (partially opened with existing internal paths). PE2 means the pedestrian efficiency change for hypothetical scenario 3 (fully opened with internal paths modified by the Voronoi diagram). In addition, we examined the number of buildings with increasing accessibility and centrality in the case study area. PE1(%) = (S2 − S1)/S1 × 100, PE2(%) = (S3 − S1)/S1 × 100
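The pedestrian-efficiency formulas translate directly into code; s1, s2, and s3 below stand for any one measurement under the three scenarios and are placeholders.

```python
# Direct transcription of the PE formulas above.
def pe(base, scenario):
    """Relative change (%) of a measurement versus the gated baseline S1."""
    return (scenario - base) / base * 100

pe1 = pe(s1, s2)  # partially opened vs. gated
pe2 = pe(s1, s3)  # Voronoi-modified vs. gated
```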
Analysis Results
The study focused on accessibility and centrality measurements of street networks, considering these measurements as an evaluation of pedestrian efficiency. The accessibility measurements (reach and gravity) and centrality measurements (straightness and betweenness) used to examine the spatial potential for pedestrian efficiency were calculated for three scenarios of the pedestrian network: (1) the internal networks of apartment complexes are not used by outsiders; (2) the internal networks of apartment complexes are open to outsiders with their existing entrances and paths; and (3) the internal networks of sites are opened and redesigned by the Voronoi diagram method, which generates the optimal shortest path. Accessibility and centrality measurements were conducted for 13,405 buildings based on the destination of 10 subway stations in the case area.
In Table 1, the average values of reach, gravity, straightness, and betweenness of the 13,405 buildings for radii of 400 m and 800 m, and the percentages of PE1 (relative difference between scenario 1 and scenario 2) and PE2 (relative difference between scenario 1 and scenario 3), are listed. In addition, the number and percentage of buildings having accessibility and centrality gains in scenario 2 and scenario 3 compared to scenario 1 are added to the table. In general, we find that the average values of reach, gravity, straightness, and betweenness, as well as the number of buildings having accessibility and centrality gains, increase in scenarios 2 and 3, respectively. First, comparing scenarios 1 and 2, we find that opening the internal networks of complexes has a positive effect on all four indicators of reach, gravity, straightness, and betweenness, from 2.42 to 2.56% in a radius of 400 m and from 1.48 to 3.20% in a radius of 800 m. Furthermore, the values of the four indicators keep increasing in scenario 3, the scenario of redesigning the pedestrian paths for the optimum shortest distance by a Voronoi diagram, from 2.81 to 3.99% in a radius of 400 m and from 2.99 to 6.16% in a radius of 800 m. The results clearly indicate that not only opening gated communities but also modifying their internal networks with a Voronoi diagram will increase all the accessibility and centrality values. Among the indicators, straightness has the highest percentage changes across the three configurations, especially in the radius area of 800 m, where it increases up to 6.16% in scenario 3. This can be explained by apartment complexes hindering direct access to facilities through their gated condition, which is particularly visible for longer destinations rather than shorter ones.
We analyzed all 13,405 buildings in the case area. To understand how many of the 13,405 buildings in the case area gained accessibility and centrality benefits from open gates and the redesigned pedestrian paths, we calculated the number and percentage of buildings with accessibility and centrality gains in scenarios 2 and 3, taking scenario 1 as the base. The number of buildings benefiting from such an opening policy increases drastically in all four measurements; as an example, for the radius area of 800 m in scenario 3 (fully opened with Voronoi modification), the number of benefiting buildings increases by up to 830 units (6.19%). First, in the visualization of reach (Figure 5), dark pink indicates the highest reach value, followed by light pink and brown, while gray shows the lowest reach measurements. Buildings located between more than one destination within a short distance have the highest reach values. Second, in the visualization of gravity (Figure 6), dark pink indicates a high accessibility level, while gray indicates less accessible origins. Unlike the reach measurement, buildings closest to any destination mostly have the highest gravity values. Last (see Figure 7), dark brown indicates the highest straightness value, followed by yellow and light blue, while dark blue shows the lowest straightness measurements. Gray indicates origins beyond 800 m from destinations.
According to the results, the street pattern generated by the Voronoi diagram provides the best accessibility result. In addition, it is one of the best options for walkability, because it has smaller building units and more intersections. As mentioned in the literature review, small units and more intersections provide walkers with more direct travel and more route choices. The visual results in Figures 5-7 also indicate that the accessibility and centrality benefits of opening policies accrue not only to buildings outside apartment complexes but also to buildings inside them. Several points inside the complexes turn brown or pink from gray in scenarios 2 and 3. Briefly, the results of the analysis confirmed our hypothesis that opening policies for apartment complexes would improve the accessibility and centrality measurements. Pedestrians can reach their destinations in a shorter time, and the vitality of streets should increase with the increasing centrality of the neighborhood and the permeability of the city. Furthermore, the results also show that better design proposals can create shorter distances for pedestrian networks.
Conclusions
Pedestrian accessibility to public transit is essential for increasing public transit ridership in general and walkability in particular. Improved accessibility influences the local and regional economies as well as the social health of urban communities. This study analyzed the effect of the closedness and openness of apartment complexes on the connectivity of urban spaces. The change in accessibility of public transport facilities in the surrounding area of apartment complexes was spatially visualized using three scenarios. First, the internal networks of apartment complexes are not used by outsiders. Second, the internal networks of apartment complexes are open to outsiders with existing entrances and paths. Third, the internal networks of sites are opened and redesigned by a Voronoi diagram method generating the shortest optimal path.
The analysis confirmed that the policy of opening the pedestrian path from the neighborhood around the complex to the interior of the apartment complex improved the accessibility and centrality of the network including the apartment complex, thereby increasing the accessibility to public transportation facilities. Therefore, to secure public access to urban spaces in the future development of large-scale apartment complexes and to increase equity in the use of public transportation, this study presented the basis for introducing an open policy of apartment complexes. In addition, the results of this study can help urban policy makers and urban planners to increase the sustainability of their neighborhoods by creating a spatial layout to enhance urban spaces and providing a pedestrian-centered urban space rather than one that is car-centered.
The pattern of streets has a strong impact on the quality of the urban environment. The spatial pattern of urban streets directly affects connectivity and accessibility as well as privacy and safety. As this study shows, accessibility can be improved significantly with different configurations. However, it would be incomplete to evaluate the issue of walkability only in the context of accessibility based on street network patterns. The spatial pattern of streets may have opposing impacts; for instance, increasing accessibility may mean decreasing privacy and safety. Therefore, urban designers and policy makers need to devise a legible street pattern that provides pedestrian, bicycle, and transit access without sacrificing privacy and safety. In addition, this study focused on the issue of accessibility to subway stations, which are among the most important amenities. However, there are other essential facilities in neighborhoods, such as commercial, educational, medical, park, and public services. Finally, the problem may not only be the closedness of apartment complexes by a fence or wall. It needs to be studied whether superblock-centered urbanism is good for pedestrianism and lively streets. The opening of gated communities may not be enough to make streets lively unless they are designed organically with the surrounding urban fabric. Hence, the opening of gated communities may not be the only solution; we may also need to quantify the impact of superblock-centered urbanism on the quality of life in the surrounding neighborhoods.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-04-29T05:18:21.736Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "56628b41226a0985f0d213d5960aa1b7cb55af68",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/18/8/4255/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "56628b41226a0985f0d213d5960aa1b7cb55af68",
"s2fieldsofstudy": [
"Geography",
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
214699245 | pes2o/s2orc | v3-fos-license | Image-based Individual Cow Recognition using Body Patterns
The existence of illumination variation, non-rigid objects, occlusion, non-linear motion, and real-time implementation requirements has made tracking in computer vision a challenging task. In order to recognize individual cows and to mitigate all these challenges, an image processing system is proposed using the body pattern images of the cow. This system accepts an input image, performs processing operations on the image, and outputs results in the form of classification under certain categories. Technically, a convolutional neural network is modeled for the training and testing of each pattern image among 1000 acquired images of 10 species of cow; each image passes through a series of convolutional layers with filters, pooling layers, fully connected layers, and a softmax function for pattern image classification with probabilistic values between 0 and 1. The performance evaluation of the proposed system on both training and testing data was carried out for each cow's identification, and accuracies of 92.59% and 89.95% were achieved.
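As a rough sketch of the kind of network the abstract describes, the following Keras model stacks convolution and pooling layers before a fully connected layer and a 10-way softmax; the layer sizes and input resolution are assumptions, not the paper's reported architecture.

```python
# Minimal CNN sketch for 10-class body-pattern classification (assumed sizes).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),       # assumed input image resolution
    layers.Conv2D(32, 3, activation="relu"), # convolution layers with filters
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),    # fully connected layer
    layers.Dense(10, activation="softmax"),  # class probabilities between 0 and 1
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```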
I. INTRODUCTION
Cows in the past were classically monitored with the sole aim of aiding tracking, health information, performance recording, prevention against manipulation and swapping, and verification of false insurance claims. There are basically two recognition techniques employed for the identification of the animal. One recognition technique leaves a permanent mark on the animal for identification while the other recognition technique leaves a temporary mark. Examples of the recognition technique that leaves a permanent mark are found in [1], [2], [3], [4], [5] with their drawbacks. The tattooing of ears, tagging of ears, microchips implant and branding are popular invasive identification techniques that leave a permanent mark on the animal's body with so many challenges such as animal infections, mild sepsis, and hemorrhaging [2], [3].
Examples of the recognition technique that leaves a temporary mark on the body of the animal for identification purposes are found in the work of Barron et al. [6], with their drawbacks. Among the classical methods of animal identification are drawing, tagging, tattooing, branding, notching, and Radio Frequency Identification (RFID). However, classical methods of animal identification have notable adoption problems, which have contributed to the low acceptance rate of these methods among cow breeders. The classical methods of animal identification are not reliable; they are prone to fraudulent activities such as swapping, duplication, and forgery of the so-called unique identification numbers tagged on the animal's body [7], [8], and therefore cannot meet the level required of them for the monitoring and identification of animals [9].
Many automatic systems have been proposed recently for the monitoring and identification of cows; however, most of these devices are sensor based and can become a burden, or even injurious, when worn on the animal's body [10]. There is a need to develop automatic cow monitoring systems for livestock farms, as the number of cows rises year after year in almost every part of the world and monitoring cows manually is a demanding task. Lu et al. [11] proposed a cow traceability system based on iris analysis for the enhancement of cow management. Image quality assessment of the captured iris sequences was first performed before a clear iris image was selected. Using segmentation based on edge detection, the inner and outer boundaries of the cow's iris were fitted as ellipses. The iris image was normalized using a geometric method, and both the local and the global features of the cow's iris were extracted using the 2D complex wavelet transform. However, in an unconstrained environment, where there is a high chance of obtaining poor-quality images of the cow's iris, this method may not be appropriate for reliable traceability.
By using video data, there is every possibility that the problems attributed to the classical methods can be mitigated with a vision-based automatic cow recognition system. Recognizing individual cows in an automatic monitoring process enables long-term behavior monitoring of each cow for body condition scoring, which plays an important role in assessing the health of individual cows. The system proposed in this paper performs image-based individual cow recognition using body patterns. The rest of the paper is organized as follows: Section 2 presents the literature review, Section 3 the material and method, Section 4 the results and discussion, and Section 5 the conclusion.
II. LITERATURE REVIEW
The conventional constructs for identifying animals can be categorized into: (1) permanent recognition constructs (PRC); (2) semi-permanent recognition constructs (SRC); and (3) temporary recognition constructs (TRC) [12], [13]. Tattooing of the ear and body, ear tagging, microchip implants, and branding are referred to as PRC methods [14], but they have several limitations [15], such as: (1) a lack of large-scale production of the various metal clips and plastic tags needed for large-scale animal identification; (2) easy loss of ear tags due to ear tearing; and (3) infections of cattle and other ruminants caused by notches [16], [17], [18]. Moreover, more than half of the animals become infected from injuries sustained on the ear due to implanted plastic ear tags, because the ear tags cause various health problems such as local inflammation, thickening of the flesh, pus-forming bacteria, and loss of blood through the notch [17], [13]. Cattle recognition using methods such as pattern sketching and collars is an SRC method. Furthermore, the use of dye or paint and radio frequency identification (RFID)-based recognition are referred to as TRC methods [12], [19].
According to [20], pattern sketching is applied for the recognition of broken-colored cattle such as Holsteins and Guernseys. High drawing skill is needed for sketching, which should produce results comparable to standard image quality so as to positively affect the cattle identification process. However, this method cannot be used to identify solid-colored breeds such as the Red Poll and Brown Swiss, for which discrimination-based artificial marking methods such as ear tagging and tattooing are needed; ear tagging, however, damages the cattle's ear in the long run. As described in Petersen's work [21], muzzle-print-based cattle recognition using blue ink and A-5 paper [22] was the first attempt at a permanent recognition method for cattle. In that method, skill is required to acquire the print image of the muzzle pattern while holding the animal firmly.
Lately, the research community has shifted attention to advancing cattle recognition using muzzle print images as a new paradigm for cattle identification [22], [20]. According to [23], the print image of a muzzle pattern is made up of bead and ridge patterns. Muzzle dermatoglyphics such as granola, ridges, and vibrissae differ across breeds [16]. Similarly, Mishra et al. [24] proposed a method for recognizing cattle breeds using the bead and ridge features of muzzle print images. Similar to the work of Mishra et al. [14], Minagawa et al. [22] proposed a cattle identification method using muzzle prints; the performance evaluation was made using filtering techniques for muzzle image analysis and morphological approaches, and they reported an Equal Error Rate (EER) of 0.419.
In contrast to Minagawa et al. [22], Barry et al. [25] proposed a framework for cattle recognition using muzzle print images. They reported 241 false non-matches (FNMR) over 560 genuine matches (GAR) and 5197 false matches over 12,160 impostor matches, corresponding to a similar EER of 0.429. In their cattle identification effort, Kim et al. [26] proposed a method that could recognize Japanese Black cattle using the pixel intensities of the cattle's face [26]. Proposed in [27] is a local binary pattern (LBP)-based model for recognizing cattle using the texture features of the cattle's facial representation. Proposed in [28] is an approach for cattle recognition based on the Speeded Up Robust Features (SURF) descriptor; the approach was an enhancement of Petersen's method for cattle identification, and experimental results were reported on image datasets of 4 cattle breeds captured on A-5 paper with blue ink. Proposed in [20] is a matching refinement technique for the scale-invariant feature transform (SIFT) descriptor for cattle recognition using a database of 160 muzzle print images. By applying the matching refinement technique in the SIFT approach, the matching scores of the keypoints of muzzle print images were computed; the performance of the matching refinement approach and the original SIFT approach were compared, and an EER of 0.0167 was achieved.
Awad et al. [29] proposed a framework for recognizing cattle using the SIFT descriptor approach, which localizes and detects the keypoints of beads and ridges in muzzle print images for cattle identification. The RANdom SAmple Consensus (RANSAC) technique incorporated into the SIFT algorithm is used to remove outliers in the muzzle image for improved, robust, and reliable cattle identification. A database of 90 muzzle images was used for the experiment, with 15 muzzle images captured from each of 6 cattle. Tharwat et al. [23] proposed an approach to cattle recognition based on muzzle images using a local texture descriptor technique, in which a texture extraction algorithm based on local binary patterns extracts local texture features from the muzzle print images. A major limitation of the technique is the extra processing time involved in the cattle recognition process.
An object recognition method based on CNNs was proposed in [30]. The proposed architecture, which combines an RGB image and its corresponding depth image for object recognition, is made up of two unconnected CNN processing streams that are integrated by a late fusion network. ImageNet [31] is employed for training the CNNs; the depth image is encoded as a rendered RGB image so that the information contained in the depth data is spread over all three RGB channels, and subsequently a standard pre-trained CNN is employed for recognition. Due to the limited availability of large-scale labeled depth datasets, CNNs pre-trained on ImageNet [32] are employed. Proposed in [33] is another object recognition method employing a deep CNN. This method also uses a CNN that is pre-trained for image classification and provides a robust, semantically meaningful feature set. The depth information is integrated by rendering objects from a canonical perspective and colorizing the depth channel according to distance from the object center.
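To make the late-fusion idea concrete, the following is a minimal PyTorch sketch of a two-stream RGB + depth classifier. The stream architecture, layer sizes, and fusion head are illustrative assumptions, not the network of [30], which builds on pre-trained ImageNet backbones.

```python
import torch
import torch.nn as nn

class TwoStreamLateFusion(nn.Module):
    """Minimal RGB + depth late fusion: two independent CNN streams
    whose pooled features are concatenated and classified jointly."""
    def __init__(self, num_classes=10):
        super().__init__()
        def stream():
            # One small convolutional stream; a real system would use a
            # pre-trained backbone here instead of training from scratch.
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rgb_stream = stream()
        self.depth_stream = stream()  # depth rendered as a 3-channel image
        self.fusion = nn.Linear(32 + 32, num_classes)

    def forward(self, rgb, depth):
        feats = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return self.fusion(feats)  # logits; softmax gives class probabilities
```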
Jingqui et al. [34] proposed an object recognition method based on image entropy, aimed at identifying the behavior of a moving cow against a complicated background. They used the minimum bounding box and contour mapping for the real-time capture of the behavioral and characteristic features displayed by the cow. Although the approach saves time for cow breeders and yields recognition rates of estrus and hoof disease no lower than 80%, the temporal correlation of cow behaviors was not integrated.
Andrew et al. [35] demonstrated the suitability of computer vision pipelines that utilize deep neural architectures to carry out automated Holstein Friesian cattle detection, in addition to individual identification, in a farm setting. They showed that Friesian cattle detection and localization can be performed robustly, with an accuracy of 99.3% on the available dataset. Although they demonstrated the capability of their method in the scenarios presented, they did not consider complicated setups such as faster-moving animals, larger herds, and tight animal gatherings.
Regarding feature extraction from an image, Kumar et al. [36] posited that pre-processing is important for object tracking accuracy, since appearance-based feature extraction and representation algorithms fail to recognize objects when images are blurred by noise, low illumination, and the unconstrained environment under which they were captured. Therefore, a method based on feature descriptor techniques is utilized for the unique identification of individual objects. With pre-processing, reliable results were obtained from the object tracking process. Pre-processing, which mainly involves particle filtering and segmentation of muzzle print images, is necessary in the feature extraction process. The primary aim of pre-processing the muzzle images with enhancement algorithms before feature extraction and matching is to ensure that the muzzle images are enhanced before the extracted texture features are analyzed, and to obtain a better representation in the feature space.
III. MATERIAL AND METHOD
A. Equipment for Experiment
Ten (10) species of cow were examined to recognize the characteristics of individual cows, each species having 100 images, making 1000 images in total. The black-and-white body patterns of the 10 species were used to calculate the input parameter values for training. In the training phase, 400 body pattern images (40 cows (subjects) × 10 images per subject) were used to train the proposed deep learning approach. In the testing phase, 600 body pattern image pairs (60 cows (subjects) × 10 images per subject) in each fold were used to test the probe images. By mid-September 2018, a test was performed to acquire the image data, which was then analyzed by image processing. A charge-coupled device (CCD) camera was employed to capture a side image of each cow. To obtain images of the required width (235-270 cm), the CCD camera was placed on a high pole away from the centerline of the experimental system. The image processing system was strategically placed in a location through which the cows passed every day, with minimal illumination variation, for the production of clear, noise-free images, as shown in Fig. 1. The cow recognition and identification system can run on any Windows-based personal computer; a faster computer is recommended for on-the-fly image processing and calculations. The personal computer used to develop the system has an Intel Core i5 processor, 8 gigabytes of RAM, a graphics card, 2 terabytes of hard disk space, a CCD digital camera, and a monitor for digitizing, displaying, and processing multiple images. The image-processing and computer vision elements are executed with OpenCV and its libraries.
B. Processing of Images
The filtration technique used in this work is Gaussian filtering, a multi-layer deep learning neural network is used as the classifier for cow identification, and contrast limited adaptive histogram equalization (CLAHE) is used to enhance the contrast of the cows' body patterns. The difference-of-Gaussian filter was obtained by taking the difference between two Gaussian functions [37]. Fig. 2 shows some sample images of cows' body patterns from the database. Fig. 3 shows database images with blurred body patterns, affected by the unconstrained environment and the cows' postures, leading to poor image quality. Following Norouzzadeh et al. [38], we filtered the images to remove blurriness, background patches, and low illumination. To enhance the identification process and remove patches and noise from the collected images, various image pre-processing techniques were applied. Low illumination and poor image quality are the two most fundamental challenges confronting image acquisition, especially for images of cows' body patterns. The images captured in the unconstrained environment were converted to grayscale to reduce the patches and noise captured with them, and the converted images were then improved with CLAHE-based image processing.
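The pre-processing chain just described (grayscale conversion, CLAHE enhancement, difference-of-Gaussian filtering) can be sketched with OpenCV as follows; the clip limit, tile size, and sigma values are illustrative assumptions, not the parameters used in this work.

```python
import cv2

def preprocess(bgr_image):
    """Grayscale -> CLAHE contrast enhancement -> difference-of-Gaussian
    filtering, mirroring the pipeline described in the text."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    # The difference of two Gaussian-blurred copies (fine minus coarse)
    # emphasizes the edges of the body patterns.
    fine = cv2.GaussianBlur(enhanced, (0, 0), sigmaX=1.0)
    coarse = cv2.GaussianBlur(enhanced, (0, 0), sigmaX=2.0)
    dog = cv2.subtract(fine, coarse)
    return enhanced, dog
```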
The pre-processing stage accepts the images in color form and converts them to grayscale before feeding them into the filter to remove the patches and noise contained in the captured images. Feature extraction involves convolution and pooling operations on the images until they reach the classifier, which performs the classification analysis that generates the desired output (Fig. 4). Noise removal was carried out using an auto-encoding technique. The stacked denoising auto-encoder (SDAE) technique initializes a deep network and is applicable for encoding and decoding the texture features extracted from the image patterns, encoding the extracted feature sets for an optimal feature representation [39].
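One layer of such a stacked denoising auto-encoder can be sketched as below; the layer sizes and noise level are assumptions chosen for illustration, not the configuration used in this work.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One layer of an SDAE stack: the input is corrupted with Gaussian
    noise and the network learns to reconstruct the clean pattern."""
    def __init__(self, n_in=16384, n_hidden=256, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)  # corrupt the input
        return self.decoder(self.encoder(noisy))

# The training target is the *clean* x: loss = nn.MSELoss()(model(x), x).
# Stacking means training one such layer at a time on the previous
# layer's hidden codes, then fine-tuning the whole network.
```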
Technically, a convolutional neural network (CNN) is modeled for training and testing on each input image, passing it through a series of convolution layers with filters, pooling layers, fully connected layers, and a softmax function that classifies the images with probabilistic values between 0 and 1. As shown in Fig. 4, the first layer to extract features from the input image is a convolution layer. Convolution preserves the relationship between pixels by learning image features over small squares of input data; it is a mathematical operation with two inputs, an image matrix and a filter. When images are large, pooling layers reduce the number of parameters (the dimensionality). In the proposed CNN, as seen in Fig. 4, the pooling operation is applied individually to each feature map. Generally, the more convolutional steps there are, the more complex the features the network can recognize. The whole process is repeated in successive layers until the system can dependably recognize objects. The neurons of each CNN layer, as seen in Fig. 4, are arranged in 3D, transforming a 3D input into a 3D output. For instance, for an input image, the first layer (the input layer) takes the image as a 3D input, with height, width, and color channels as its dimensions. The neurons of the first convolutional layer connect to regions of the input image and transform them into a 3D output. The hidden units of each layer learn nonlinear combinations of the original inputs, which become the inputs of the following layer; in this way, at the end of the network, the learned features become the inputs to the classifier.
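A minimal sketch of such a network is given below; the filter counts, the 128x128 input size, and the number of layers are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CowPatternCNN(nn.Module):
    """Stacked convolution + pooling blocks followed by fully connected
    layers and a softmax over the 10 cow classes."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes))

    def forward(self, x):                    # x: (N, 3, 128, 128)
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=1)  # probabilities in [0, 1]

# For training, nn.CrossEntropyLoss expects the raw logits, so the
# softmax would be applied only at inference time.
```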
The grayscale intensity values of the background images lie between 100 and 150 relative to the colors of the cows' body surfaces. A threshold value of 128 was fixed for the pixels of the whole image: intensities greater than the threshold of 128 are assigned the binary value 1, while intensities less than the threshold are assigned the binary value 0. The threshold value is very important because it can shift with illumination and noise. Each cow's image is captured to identify its individual characteristics. Individual cow identification using unique body patterns is possible because the body patterns are invariant to growth; this uniqueness enables the patterns to be used as the input layer values in the neural network algorithm.
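This binarization reduces to a single fixed-threshold operation; a minimal sketch:

```python
import cv2

def binarize(gray, threshold=128):
    """Map intensities above the threshold to 1 and the rest to 0,
    separating the body pattern from the background."""
    _, binary = cv2.threshold(gray, threshold, 1, cv2.THRESH_BINARY)
    return binary
```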
IV. RESULTS AND DISCUSSION
Having tested the effectiveness of the proposed approach using images of cows' body patterns for cow recognition and identification, a comparison with other recognition algorithms is carried out to evaluate the identification accuracy in realistic settings. For evaluating the experimental results, the database of cow body images is split into: (1) the training phase; and (2) the testing phase. 400 body images of different subjects (40 cows (subjects) × 10 images per subject) were used to train the proposed approach in the training phase, and 600 body pattern image pairs (60 cows (subjects) × 10 images per subject) in each fold were used to test the probe images in the testing phase. Training the proposed deep learning framework with a deep belief network (DBN), as shown in Fig. 5, requires a very large database. Although the number of cow body images in the database is encouraging, a database of 1000 cow body pattern images is not sufficient to train the stacked denoising autoencoder from scratch; therefore, a transfer learning approach is needed to fine-tune the weights between the input and the hidden layer and to pre-train the proposed deep learning approach.
The basic mathematical steps involved in using the deep belief network for this work are as follows. Problem setting: given a training set of pre-processed body pattern image data {(x_i, y_i)}, i = 1, 2, …, n, where x_i ∈ X ⊆ R^d is the sample image data and y_i ∈ Y is the corresponding label tag, the recognition procedure of the proposed system is to input the data set to the network, find the mapping between input and output to form a generative joint probability distribution model P(x, y), generate the output y_{n+1} for a given prediction sample x_{n+1}, and judge the image class of x_{n+1} according to y_{n+1}. The system contains the following parts, as shown in Fig. 4: the proposed cow body pattern image identification uses a deep belief network and a back propagation (BP) network layer, wherein the multi-layer RBM performs feature learning on the input data to achieve abstraction and dimensionality reduction through hierarchical feature learning, as shown in Fig. 5; the BP network layer is a classification network that categorizes the abstracted higher-level features through a softmax function. The softmax function, also known as softargmax or the normalized exponential function, takes as input a vector of K real numbers and normalizes it into a probability distribution consisting of K probabilities proportional to the exponentials of the input numbers.
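For illustration, a numerically stable implementation of the softmax function just described:

```python
import numpy as np

def softmax(z):
    """Normalized exponential: maps K real scores to K probabilities
    proportional to exp(z_k). Subtracting max(z) avoids overflow."""
    e = np.exp(np.asarray(z) - np.max(z))
    return e / e.sum()

# Example: softmax([2.0, 1.0, 0.1]) -> approximately [0.659, 0.242, 0.099]
```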
The first part of the process, as shown in Fig. 4, is the "pre-processed cow body pattern images," which are introduced as inputs to the proposed networks for feature extraction and classification.
The second part is "pre-training." For a given training set of image data = { 1 , 2 , … , } , the learning system obtains a model through learning (or training) to describe the mapping relationship between input and output variables. This work assumed that RBM model has this descriptive ability, therefore it consists of several layers, through which the input is the image expression data vector while the output is the abstracted higher-level feature vector. Each layer of RBM networks undergoes individually unsupervised training to ensure that feature information is preserved to the uttermost as feature vectors are mapped to different feature spaces. To construct the joint distribution model of visible layer and the hidden layer through energy function, the joint probability maximum likelihood of training sample under model parameter ̂i s calculated by The third part is "fine tuning." Fine-tuning is a common strategy in deep learning to carry out supervised learning through tagged sample training set = {( 1 ′ , 1 ), ( 2 ′ , 2 ), … , ( ′ , ) } . After that, the top feature vectors corresponding to sample output by the multi RBM network are formed based on the training set of statistical classification structure. This part is a BP network; it takes a specific dimension feature vector to a softmax function. In order to get the best connection weights, this work considered solving the following optimization problem using particle swarm optimization (PSO), so that the loss of function in the training set is minimized.
The last part is the "class identification." Tested sample +1 as network input is subjected to feature learning and abstraction through a network model training to produce a corresponding output +1 by and thus achieve classification.
For the performance evaluation, a local feature descriptor technique was used to extract and encode the texture features of the cows' body patterns. As mentioned earlier, the normalization and descriptor processes help mitigate external factors such as low illumination, poor image quality, and background patches affecting the captured images. In performing the tasks involved in this process, cells are grouped into blocks; the blocks overlap, cells are shared among blocks, and each block is normalized separately. The Scale-Invariant Feature Transform (SIFT) and Rectangular Histogram of Oriented Gradients (R-HOG) are similar, though R-HOG blocks are not aligned to a dominant orientation (Fig. 8(b)). SDAE produced the best experimental results (Fig. 6 and Fig. 7) compared to the other approaches used in this work, making it the best fit for denoising. 400 body images (40 cows (subjects) × 10 images per subject) were chosen randomly for system training and 600 body images (60 cows (subjects) × 10 images per subject) were used for testing. The experimental results are reported and analyzed in Table I. As shown in Table I, the system performance was evaluated on the cropping, the training data, and the testing data toward the overall research objective. The average cropping accuracy on the captured video data is 79.45%, the identification accuracy on the training data is 92.59%, and the identification accuracy on the testing data is 89.95%. The purpose of binary patterns (Fig. 8(a)) is to summarize the local structure in a block by comparing each pixel with its neighborhood [40]. Each pixel is coded with a sequence of bits, each bit associated with the relation between the pixel and one of its neighbors: a bit is set to 1 if the neighbor's intensity is greater than or equal to that of the center pixel, and 0 otherwise, so that a binary number is created for each pixel.
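A minimal NumPy sketch of the basic 3x3 local binary pattern computation described here:

```python
import numpy as np

def lbp_3x3(gray):
    """Compare each of the 8 neighbors with the center pixel
    (bit = 1 if neighbor >= center) and pack the bits into one
    byte per pixel."""
    g = gray.astype(np.int32)
    h, w = g.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = g[1:-1, 1:-1]
    # Offsets of the 8 neighbors, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbor >= center).astype(np.uint8) << bit
    return out
```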
V. CONCLUSION
Image-based individual cow recognition using body patterns was the main work carried out in this research. Cows are usually identified to prevent theft or protect them from danger, and in many agricultural settings their behavior is studied using imaging technology to enable timely monitoring and identification of health problems. CNNs and several other popular image recognition techniques, such as DBN, SDAE, CLAHE, Gaussian filtering, and binary patterns, were employed in this work for cow recognition; the various techniques were discussed in detail as they apply to the cow recognition process. A dataset of 1000 body pattern images from 10 species of cow was created for this work, of which 400 images were used for training and 600 for testing; an advantage of this dataset is the variety of cow species whose images it contains. Gaussian filtering was used as the filtration technique, supported by SDAE for denoising; a multi-layer convolutional neural network was used as the classifier, in comparison to a deep belief network, which requires a very large database for cow identification; and contrast limited adaptive histogram equalization (CLAHE) was used to enhance the contrast of the cows' body patterns. The performance of the proposed system was evaluated on both training and testing data for each cow's identification, achieving accuracies of 92.59% and 89.95%, respectively. Although this work has applied a modern image-based identification method for recognizing cows using body patterns, real-time recognition of occluded and non-linearly moving objects such as cows using multiple object features is work we consider worthy of future investigation.
"year": 2020,
"sha1": "2d307ed74e487c523f82c8ae9fe8c6be57c360e7",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume11No3/Paper_11-Image_based_Individual_Cow_Recognition.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2d307ed74e487c523f82c8ae9fe8c6be57c360e7",
"s2fieldsofstudy": [
"Computer Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Positive solutions for nonlinear singular elliptic equations of p-Laplacian type with dependence on the gradient
In this paper, we study a nonlinear Dirichlet problem of p-Laplacian type with the combined effects of nonlinear singular and convection terms. An existence theorem for positive solutions is established, as well as the compactness of the solution set. Our approach is based on the Leray-Schauder alternative principle, the method of sub-supersolutions, nonlinear regularity, truncation techniques, and set-valued analysis.
Introduction
Let Ω ⊂ R^N (N ≥ 3) be a bounded domain with C^2 boundary ∂Ω. In this paper, we investigate the following singular elliptic equation with Dirichlet boundary condition, p-Laplace differential operator, and a nonlinear convection term (i.e., the reaction function depends on the solution u and its gradient ∇u):

−Δ_p u(x) = f(x, u(x), ∇u(x)) + g(x, u(x)) in Ω,  u > 0 in Ω,  u = 0 on ∂Ω.  (1)

Here Δ_p stands for the p-Laplace differential operator defined by Δ_p u = div(|∇u|^{p−2}∇u) for all u ∈ W_0^{1,p}(Ω), with 1 < p < ∞, and ∇ denotes the gradient operator. For the convection term f : Ω × R × R^N → R, a suitable growth condition H(f) stated in Sect. 3 is required. The semilinear function g : Ω × (0, ∞) → R is singular at s = 0, that is, lim_{s→0+} g(x, s) = +∞.
In order to emphasize the main ideas, we suppose that p < N; the case N ≤ p can be handled along the same lines. As usual, we denote p* := Np/(N − p), the critical Sobolev exponent. The solution of problem (1) is understood in the weak sense, as described below.
Definition 1 We say that u ∈ W_0^{1,p}(Ω) is a (weak) solution of problem (1) if u > 0 a.e. in Ω and

∫_Ω |∇u|^{p−2}∇u · ∇v dx = ∫_Ω (f(x, u, ∇u) + g(x, u)) v dx  for all v ∈ W_0^{1,p}(Ω).

If p = 2, problem (1) reduces to the semilinear Dirichlet elliptic equation with a singular term and gradient dependence considered by Faraci and Puglisi [14]:

−Δu(x) = f(x, u(x), ∇u(x)) + g(x, u(x)) in Ω,  u > 0 in Ω,  u = 0 on ∂Ω.  (2)

A typical case in (1) and (2) is the singular term g(x, s) = h(x)s^{−γ} with γ > 0 (see Sect. 3). Elliptic equations with singular terms represent an intensively studied class of problems because they appear in applications to chemical catalyst processes, non-Newtonian fluids, and models for the temperature of electrical conductors; see, e.g., [4,11]. An extensive literature is devoted to such problems, especially from the point of view of theoretical analysis. For instance, Ghergu and Rădulescu [21] established several existence and nonexistence results for boundary value problems with singular terms and parameters; Gasiński and Papageorgiou [20] studied a nonlinear Dirichlet problem with a singular term, a (p − 1)-sublinear term, and a Carathéodory perturbation; Hirano et al. [23] proved Brezis-Nirenberg type theorems for a singular elliptic problem. More details on topics related to singular problems can be found in Crandall et al. [8], Cîrstea et al. [7], Dupaigne et al. [12], Kaufmann and Medri [25], D'Ambrosio and Mitidieri [9], Carl et al. [6], Giacomoni et al. [22], Gasiński and Papageorgiou [19], Bai et al. [2], Carl [5], and the references therein.
On the other hand, as another challenging topic, elliptic problems with convection terms have been considered in various frameworks. Amongst the results we mention: Faraci et al. [13] proved the existence of a positive solution and of a negative solution for a quasilinear elliptic problem with dependence on the gradient; Faria et al. [15] proved the existence of a positive solution for a quasi-linear elliptic problem involving the ( p, q)-Laplacian and a convection term; Zeng et al. [39] proved the existence of positive solutions for a generalized elliptic inclusion problem driven by a nonhomogeneous partial differential operator with Dirichlet boundary condition and a convection multivalued term; Papageorgiou et al. [35] proved that a nonlinear boundary value problem driven by a nonhomogeneous differential operator has at least five nontrivial smooth solutions, four of constant sign, and one nodal. For other results in this area the reader may consult: Motreanu et al. [32], Motreanu and Tanaka [33], Averna et al. [1], Faria et al. [16], Gasiński and Papageorgiou [18], and the references therein.
In this paper, under verifiable conditions, we provide the existence of positive solutions for problem (1). This is the first time such a result has been obtained for problem (1), and in particular for (3), exhibiting singular and convection terms in the nonlinear case p ≠ 2. The approach uses the method of sub-supersolutions, truncation techniques, nonlinear regularity theory, the Leray-Schauder alternative principle, and set-valued analysis. It is worth mentioning that our analysis of problem (1) relies strongly on multi-valued mapping arguments; specifically, the multi-valued setting offers an efficient way to handle the smallest solution of the constructed auxiliary problem, which is another trait of novelty in this paper. The compactness of the solution set of problem (1) is proved, too.
We briefly describe the main ideas in our approach. Corresponding to a fixed smooth function w, we associate to the original statement (1) an intermediate problem, replacing the gradient ∇u in f(x, u, ∇u) with ∇w and keeping the singular term unchanged. For the intermediate problem, a positive subsolution u̲ is constructed independently of w, and the existence of a solution greater than u̲ is shown. We are thus able to consider the set-valued mapping S assigning to w the set S(w) of all such solutions of the intermediate problem. On the basis of the properties of the set-valued mapping S, we can prove that the mapping Γ defined by Γ(w) equal to the minimal element of S(w) is compact. The positive solution of the original problem is obtained by applying the Leray-Schauder alternative principle to the mapping Γ. At this point we need a smallness condition involving the constants c_1 > 0 and c_2 > 0, the coefficients of |u| and |∇u|, respectively, in the subcritical growth condition on f(x, u, ∇u), and λ_1, the first eigenvalue of −Δ_p on W_0^{1,p}(Ω). This condition requires a certain compatibility between the growth of f(x, u, ∇u) and the geometry of the bounded domain Ω, imposing some restrictions on Ω, as can be seen from known estimates from above and from below for λ_1; for instance, explicit two-sided estimates for λ_1 are available when Ω is the ball B(0, R) in R^N of radius R > 0 centered at the origin. We refer to Benedikt and Drábek [3] and Kajikiya [24] for estimates of λ_1 on different bounded domains Ω ⊂ R^N in terms of geometric quantities. The rest of the paper is organized as follows: in Sect. 2 we present the needed preliminary material, and Sect. 3 is devoted to establishing our results.
Mathematical background
Let 1 < p < ∞ and let p′ be defined by 1/p + 1/p′ = 1. The Lebesgue space L^p(Ω) is endowed with the standard norm ||u||_p = (∫_Ω |u|^p dx)^{1/p}. We denote by C^k(Ω̄) for k ∈ N the space of real-valued k-times continuously differentiable functions u in Ω such that the partial derivatives D^α u continuously extend to Ω̄ for all |α| ≤ k.
The space C^k(Ω̄) is endowed with its usual norm. We shall also use the Banach space C_0^1(Ω̄) = {u ∈ C^1(Ω̄) : u = 0 on ∂Ω}, where the notation ∂u/∂ν stands for the normal derivative of u with the unit outer normal ν to ∂Ω. For clarity regarding arguments that involve order, we recall the following notions.
Definition 2 Let (P, ≤) be a partially ordered set. A subset D ⊆ P is called downward directed if for each pair u_1, u_2 ∈ D there exists u ∈ D such that u ≤ u_1 and u ≤ u_2; it is called upward directed if for each pair u_1, u_2 ∈ D there exists u ∈ D such that u ≥ u_1 and u ≥ u_2.
For any s ∈ R, we set s± = max{±s, 0}. If u ∈ W_0^{1,p}(Ω), one has u = u+ − u− and |u| = u+ + u−, with u± ∈ W_0^{1,p}(Ω). The gradients of these functions are ∇u± = χ_{±u>0}∇u. Given functions u_1, u_2 : Ω → R, we use the notation {u_1 > u_2} = {x ∈ Ω : u_1(x) > u_2(x)}, and accordingly {u_1 ≥ u_2}. For a subset K ⊂ Ω, its characteristic function is denoted by χ_K, meaning χ_K(x) = 1 if x ∈ K and χ_K(x) = 0 otherwise. We recall the eigenvalue problem for the p-Laplacian with Dirichlet boundary condition: −Δ_p u = λ|u|^{p−2}u in Ω, u = 0 on ∂Ω. The first eigenvalue, denoted λ_1, is positive, isolated, simple, and has the variational characterization λ_1 = inf{ ||∇u||_p^p / ||u||_p^p : u ∈ W_0^{1,p}(Ω), u ≠ 0 }. Finally, we review some background material from set-valued analysis; more details can be found in [10,17,31,34,38].
Definition 3 Let X, Y be topological spaces and let F : X → 2^Y be a set-valued mapping. F is said to be: (i) upper semicontinuous (u.s.c., for short) at x ∈ X if for every open set V ⊆ Y with F(x) ⊆ V there exists a neighborhood U of x such that F(u) ⊆ V for all u ∈ U; when this holds for every x ∈ X, F is called upper semicontinuous; (ii) lower semicontinuous (l.s.c., for short) at x ∈ X if for every open set V ⊆ Y with F(x) ∩ V ≠ ∅ there exists a neighborhood U of x such that F(u) ∩ V ≠ ∅ for all u ∈ U; when this holds for every x ∈ X, F is called lower semicontinuous; (iii) continuous at x ∈ X, if F is both upper semicontinuous and lower semicontinuous at x ∈ X; when this holds for every x ∈ X, F is called continuous.
Proposition 4
For a set-valued mapping F : X → 2^Y with nonempty values, the following properties are equivalent: (i) F is upper semicontinuous; (ii) for every closed set C ⊆ Y, the set F^−(C) := {x ∈ X : F(x) ∩ C ≠ ∅} is closed in X.
Proposition 5
For a set-valued mapping F : X → 2^Y with nonempty values, the following properties are equivalent: (i) F is lower semicontinuous; (ii) for every x ∈ X, every sequence {x_n} with x_n → x, and every y ∈ F(x), there exists a sequence {y_n} with y_n ∈ F(x_n) for all n and y_n → y.
Proposition 6 Let X, Y be topological spaces and let F : X → 2^Y be upper semicontinuous with nonempty compact values. If x_n → x in X and y_n ∈ F(x_n) for every n ∈ N, then there exist y ∈ F(x) and a subsequence {y_{n_k}} of {y_n} such that y_{n_k} → y.
An essential tool in the sequel is the Leray-Schauder alternative principle (or Schaefer's fixed point theorem); see, e.g., Gasiński and Papageorgiou [17, p. 827].
Theorem 7 Let X be a Banach space and let C ⊂ X be nonempty and convex. Assume that Φ : C → C is a compact mapping, i.e., Φ is continuous and maps bounded sets into relatively compact sets. Then exactly one of the following statements holds: (a) Φ has a fixed point; (b) the set {u ∈ C : u = tΦ(u) for some t ∈ (0, 1)} is unbounded.
Existence of positive solutions
Our assumptions on the data in problem (1) are as follows.
Remark 8
Hypotheses H(f) and H(g) make it possible to construct a sub-supersolution pair for the intermediate problem (4); see below. Condition H(f) was employed in Faraci et al. [13], whereas condition H(g) was dealt with in Faraci and Puglisi [14] and goes back to Perera and Silva [36] and Perera and Zhang [37].
Examples of singular functions fulfilling all the requirements in H(g) can be constructed with any γ > 0 and h ∈ L^q(Ω)_+. For instance, one can take Ω to be an open ball in R^N and choose a function of the form g(x, s) = h(x)s^{−γ} for s ∈ (0, 1), appropriately extended for s > 1, with suitable corresponding functions h on Ω (see [36,37]).
For fixed w ∈ C_0^1(Ω̄), we first focus on the intermediate singular Dirichlet problem

−Δ_p u(x) = f(x, u(x), ∇w(x)) + g(x, u(x)) in Ω,  u > 0 in Ω,  u = 0 on ∂Ω.  (4)
Definition 9
We say that u ∈ W^{1,p}(Ω) is a supersolution (respectively, subsolution) of problem (4) if the defining weak inequality ≥ (respectively, ≤) holds when tested against every nonnegative v ∈ W_0^{1,p}(Ω)_+. The next lemma is essential for our development.
Proof Let u_1, u_2 ∈ W^{1,p}(Ω) be supersolutions of problem (4) and set u = min{u_1, u_2}. Corresponding to any ε > 0, consider a Lipschitz continuous truncation η_ε : R → R. From Marcus and Mizel [30], we know that composition with η_ε maps W^{1,p}(Ω) into itself. The definition of supersolution for problem (4), applied to u_1 and u_2, and summing up the resulting inequalities after rearranging terms, yields an estimate which, on passing to the limit as ε → 0^+ and using Lebesgue's Dominated Convergence Theorem (see, e.g., [31, Theorem 2.38]), combined with (5), leads to (6) for all v ∈ W_0^{1,p}(Ω)_+; hence u is also a supersolution of problem (4).
Similarly, we can prove the corresponding statement for subsolutions.
Denote by Ū_w ⊂ W^{1,p}(Ω) and U̲_w ⊂ W^{1,p}(Ω) the supersolution set and the subsolution set of problem (4), respectively. The following result is a direct consequence of Lemmata 10 and 11.
Corollary 12
The sets Ū_w and U̲_w are downward directed and upward directed, respectively, in the sense of Definition 2.
Next we establish the existence of subsolutions of problem (4).
Condition H(g)(iii) ensures that for each ε ∈ (0, ε_0) the function x → g(x, εϑ(x)) belongs to L^q(Ω) for some q > N. According to H(g)(iii), the function x → g(x, 1) is not identically zero. Then there exists a unique u* ∈ int(C_0^1(Ω̄)_+) that resolves the corresponding Dirichlet problem. Using the monotonicity of g on (0, 1) again, and because ϑ, u ∈ int(C_0^1(Ω̄)_+), we can choose ε > 0 small enough to fulfill u − εϑ ∈ int(C_0^1(Ω̄)_+). Taking into account hypothesis H(g), and since q > N > (p*)′, we have q′ < p*, where for r > 1 we denote r′ = r/(r − 1). Therefore we can use the Sobolev embedding theorem (see, e.g., [17, Theorem 2.5.3]) to infer that the embedding of W_0^{1,p}(Ω) into L^{q′}(Ω) is continuous. On account of (8), and due to α ≤ 1 and (7), in view of f(x, s, ξ) ≥ 0 for a.e. x ∈ Ω, all s ∈ R and ξ ∈ R^N, by (9) the required inequality holds for a.e. x ∈ Ω. Consequently, u̲ is a subsolution of problem (4), which completes the proof.
Remark 14
From the proof of Lemma 13 it is clear that the obtained subsolution u̲ is independent of the function w and belongs to int(C_0^1(Ω̄)_+).
We are able to show the existence of positive solutions to auxiliary problem (4).
Lemma 15 Assume that conditions H(g) and H(f) hold. Then problem (4) admits a positive solution u ∈ int(C_0^1(Ω̄)_+), which is greater than the subsolution u̲.
Proof Consider the nonlinear singular truncated Dirichlet problem (10), where f̂ : Ω × R → R and ĝ : Ω × R → R are truncated functions corresponding to f and g, defined for a.e. x ∈ Ω and all s ∈ R. Consider also the primitives Ĝ : Ω × R → R and F̂ : Ω × R → R of ĝ and f̂ for a.e. x ∈ Ω and all s ∈ R. The energy functional E_w : W_0^{1,p}(Ω) → R associated to problem (10) is defined accordingly.
Claim 1 The energy functional E w is of class C 1 .
Let u, v ∈ W_0^{1,p}(Ω) and t > 0. By the Mean Value Theorem, the difference quotients of the integral terms can be written with intermediate points τ ∈ (0, 1); using Lebesgue's Dominated Convergence Theorem, the expressions of Ĝ and F̂ with some τ_1, τ_2 ∈ (0, t), and invoking Lebesgue's Dominated Convergence Theorem again, we obtain the Gâteaux derivative of E_w. We can conclude that E_w is of class C^1 because ĝ(x, ·) and f̂(x, ·) are continuous.
Claim 2 The energy functional E w is coercive.
Through the definition of ĝ, hypothesis H(g)(ii), and u̲ ≤ 1, a lower estimate for the singular term is valid. The definition of f̂ and the growth condition give a bound with a constant C_1 > 0. Therefore the resulting estimate determines that the energy functional E_w is coercive.
Claim 3 The energy functional E w is weakly sequentially lower semicontinuous.
Let u_n ⇀ u in W_0^{1,p}(Ω); by the compact embedding of W_0^{1,p}(Ω) into the relevant Lebesgue spaces, Claim 3 ensues. On the basis of Claims 1-3, we are able to apply the Weierstrass-Tonelli Theorem to find a global minimizer of E_w, which is a critical point of E_w.
Claim 4 If u is a critical point of E_w, then u ≥ u̲ and u is a solution of problem (10), i.e., the weak formulation of (10) holds. Inserting v = (u̲ − u)^+ in this equality and in (9) produces an inequality valid for a.e. x ∈ Ω, all s ∈ R and ξ ∈ R^N; we are led to u ≥ u̲. On the basis of Claim 4, by virtue of the definitions of ĝ and f̂, the solution u of (10) becomes a solution of problem (4). This completes the proof.
Via Lemma 15 and Remark 16, we see that S is well defined, meaning that its values are nonempty.
Lemma 17
Assume that H(g) and H(f) hold. Then the set-valued mapping S is compact, that is, S maps the bounded sets in C_0^1(Ω̄) into relatively compact subsets of C_0^1(Ω̄).
Proof. Let B be a bounded subset of C_0^1(Ω̄), so there is a constant M > 0 such that ||w||_{C_0^1(Ω̄)} ≤ M for all w ∈ B. For w ∈ B and u ∈ S(w), an estimate follows from (8) and Hölder's inequality. Thanks to ||u||_p^p ≤ ||∇u||_p^p / λ_1, we obtain a bound on ||∇u||_p. The smallness condition d_M < λ_1 allows us to derive that S(B) is bounded in W_0^{1,p}(Ω). Through the nonlinear regularity theory in [26,28,29], there exists α ∈ (0, 1) such that S(B) ⊂ C^{1,α}(Ω̄) is bounded as well. Since C^{1,α}(Ω̄) is compactly embedded in C^1(Ω̄), we infer that S(B) is relatively compact in C_0^1(Ω̄). The next results establish the continuity of S.
Lemma 18 Assume that H (g) and H ( f ) hold. Then the set-valued mapping S is upper semicontinuous.
Proof According to Proposition 4, we must prove that for any closed subset C of C_0^1(Ω̄), the set S^−(C) = {w ∈ C_0^1(Ω̄) : S(w) ∩ C ≠ ∅} is closed in C_0^1(Ω̄). To this end, let {w_n} ⊂ S^−(C) satisfy w_n → w in C_0^1(Ω̄). For each n ∈ N there exists u_n ∈ S(w_n) ∩ C, so the weak formulation (11) holds for all v ∈ W_0^{1,p}(Ω). It follows from Lemma 17 that the sequence {u_n} is relatively compact in C_0^1(Ω̄). Passing to a relabeled subsequence, we may assume that u_n → u in C_0^1(Ω̄). Recall that u_n ≥ u̲ and C is closed in C_0^1(Ω̄). Hence we have u ≥ u̲ and u ∈ C. (12) The continuity of f(x, ·, ·) and g(x, ·) implies pointwise convergence of the nonlinearities for a.e. x ∈ Ω, because u_n → u and w_n → w in C_0^1(Ω̄); from (8), an integrable bound holds for a.e. x ∈ Ω. Letting n → ∞ in (11), by means of Lebesgue's Dominated Convergence Theorem, we see that the equality holds for all v ∈ W_0^{1,p}(Ω); thus u is a solution of problem (4). The latter and (12) reveal that u ∈ S(w) ∩ C, or in other terms, w ∈ S^−(C), achieving the proof that S is upper semicontinuous.
Corollary 19
Assume that H(g) and H(f) hold. If {w_n} and {u_n} are sequences in C_0^1(Ω̄) satisfying w_n → w as n → ∞ and u_n ∈ S(w_n) for all n ∈ N, then there exist u ∈ S(w) and a subsequence {u_{n_k}} of {u_n} such that u_{n_k} → u in C_0^1(Ω̄) as k → ∞.
Proof It is straightforward to check that S has closed values. Then Lemma 17 guarantees that S has compact values. The desired conclusion is readily obtained from Lemma 18 and Proposition 6.
Lemma 20 Assume that H (g) and H ( f ) hold. Then the set-valued mapping S is lower semicontinuous.
Proof In order to invoke Proposition 5, let {w_n} ⊂ C_0^1(Ω̄) satisfy w_n → w in C_0^1(Ω̄) and let v ∈ S(w). For each n ∈ N, we formulate the Dirichlet problem (13). In view of v ≥ u̲ and (8), it is clear that problem (13) has a unique solution u_n^0 ∈ int(C_0^1(Ω̄)_+). As in the proof of Lemma 17, we can verify that, since w_n → w in C_0^1(Ω̄), the sequence {u_n^0} is relatively compact in C_0^1(Ω̄). So, there exists a subsequence {u_{n_k}^0} such that u_{n_k}^0 → u in C_0^1(Ω̄) as k → ∞, where u is the unique solution of the limit problem. A simple comparison gives u = v. Since every convergent subsequence of {u_n^0} has the same limit v, it is true that lim_{n→∞} u_n^0 = v. Next, for each n ∈ N, we consider the Dirichlet problem at the next stage. Proceeding as before, we show that this problem has a unique solution u_n^1, which belongs to int(C_0^1(Ω̄)_+), and lim_{n→∞} u_n^1 = v.
Continuing the process, we generate a sequence {u_n^k}_{k,n≥1} solving the corresponding problems at every stage. Fix n ≥ 1. As in the proof of Lemma 17, we notice that the sequence {u_n^k}_{k≥1} is relatively compact in C_0^1(Ω̄), so we may suppose u_n^k → u_n in C_0^1(Ω̄) as k → ∞. Then it appears that

−Δ_p u_n(x) = g(x, u_n(x)) + f(x, u_n(x), ∇w_n(x)) in Ω,  u_n > 0 in Ω,  u_n = 0 on ∂Ω,

and u_n ≥ u̲ (see Lemma 15), which amounts to saying that u_n ∈ S(w_n).
We carry on the proof by the nonlinear regularity theory [26,28,29], the convergence in (14), and the double limit lemma (see, e.g., [17, Proposition A.2.35]) to obtain u_n → v in C_0^1(Ω̄) as n → ∞. We conclude that for every sequence {w_n} in C_0^1(Ω̄) such that w_n → w in C_0^1(Ω̄) and for every v ∈ S(w), we can find a sequence {u_n} ⊂ C_0^1(Ω̄) satisfying u_n ∈ S(w_n) for each n ∈ N and u_n → v in C_0^1(Ω̄). Consequently, by Proposition 5, S is lower semicontinuous.
Corollary 21 Assume that H(g) and H(f) hold. Then the set-valued mapping S : C_0^1(Ω̄) → 2^{C_0^1(Ω̄)} is continuous in the sense of Definition 3(iii) and has compact values.
For each w ∈ C_0^1(Ω̄), the set S(w) has a rich order structure.
Lemma 22
Assume that H(g) and H(f) hold. Then for each w ∈ C_0^1(Ω̄), the set S(w) is downward directed in the sense of Definition 2.
Proof For any w ∈ C_0^1(Ω̄), let u_1, u_2 ∈ S(w) and u := min{u_1, u_2}, and consider the auxiliary problem (15), where f̃ : Ω × R → R is a suitable truncation of f. Arguing as in the proof of Lemma 15, we see that problem (15) admits a positive solution ũ with ũ ≥ u̲.
We now show that ũ ≤ u. The key comparison inequality holds because, by Lemma 10, u is a supersolution of problem (4); the obtained inequality ensures that ũ ≤ u. Then from (15) and the definition of f̃ we deduce that ũ ∈ S(w), which completes the proof.
Theorem 23 Assume that H(g) and H(f) hold. Then, for each w ∈ C_0^1(Ω̄), problem (4) admits a smallest solution u_w greater than the subsolution u̲.
Proof Lemma 22 asserts that for each w ∈ C_0^1(Ω̄) the ordered set S(w) is downward directed. Let B be a chain in S(w). We can find a sequence {u_n} ⊂ B such that lim_{n→∞} u_n = inf B.
Since every u_n is a solution of (4) with u_n ≥ u̲, Lemma 17 implies that the sequence {u_n} is relatively compact in C_0^1(Ω̄). So, passing to a subsequence if necessary, there exists v ∈ C_0^1(Ω̄) such that u_n → v in C_0^1(Ω̄) and v ≥ u̲. Therefore v = inf B, which allows us to apply Zorn's Lemma (see, e.g., [38]) to provide a minimal element u_w of S(w).
We check that u_w is the smallest solution of (4) greater than the subsolution u̲. Let u ∈ S(w). Since, as known from Lemma 22, the ordered set S(w) is downward directed, we can find ũ ∈ S(w) verifying ũ ≤ min{u_w, u}. However, the minimality of u_w ∈ S(w) entails ũ = u_w, which yields that u_w is the smallest solution greater than the subsolution u̲.
Lemma 24 Assume that H(g) and H(f) hold. Then the mapping Γ : C_0^1(Ω̄) → C_0^1(Ω̄) defined by Γ(w) = u_w is compact.
Proof The fact that Γ maps the bounded subsets of C_0^1(Ω̄) into relatively compact subsets of C_0^1(Ω̄) is a direct consequence of Lemma 17. Indeed, if B is a bounded subset of C_0^1(Ω̄), then S(B) is relatively compact in C_0^1(Ω̄), and so is Γ(B) ⊂ S(B). It remains to verify that Γ is continuous. Let {w_n} ⊂ C_0^1(Ω̄) satisfy w_n → w and denote u_n = Γ(w_n), which reads as

−Δ_p u_n(x) = f(x, u_n(x), ∇w_n(x)) + g(x, u_n(x)) in Ω,  u_n > 0 in Ω,  u_n = 0 on ∂Ω.  (16)
Invoking Lemma 17 again, the sequence {u_n} is relatively compact in C_0^1(Ω̄). Up to a subsequence, we may assume that u_n → u in C_0^1(Ω̄). It is obvious that u ≥ u̲, owing to u_n ≥ u̲. On the other hand, passing to the limit in (16) yields

−Δ_p u(x) = f(x, u(x), ∇w(x)) + g(x, u(x)) in Ω,  u > 0 in Ω,  u = 0 on ∂Ω,

thus u ∈ S(w). The lower semicontinuity of S proved in Lemma 20 and the characterization of lower semicontinuity in Proposition 5 ensure that there exists a sequence {v_n} ⊂ C_0^1(Ω̄) with the properties v_n ∈ S(w_n) for each n ∈ N and v_n → Γ(w) ∈ S(w).
Notice that u_n = Γ(w_n) ≤ v_n and u ∈ S(w). Letting n → ∞ implies Γ(w) ≤ u = lim_{n→∞} u_n ≤ lim_{n→∞} v_n = Γ(w), that is, u = Γ(w), so the map Γ is continuous.
We are now in a position to prove our main result.
Proof First, let us emphasize that every solution of problem (1) must be positive. We claim that each solution of problem (1) is greater than the subsolution u̲ of problem (4) constructed in Lemma 13. Let u be a solution of (1); comparing the weak formulation of (1) with the inequality defining the subsolution u̲, we deduce that u ≥ u̲.
In order to justify that problem (1) possesses a (positive) solution, we make use of Theorem 7. From Lemma 24, we know that Γ is a compact map. It remains to prove that the set E(Γ) := {u ∈ C_0^1(Ω̄) : u = tΓ(u) for some t ∈ (0, 1)} is bounded in C_0^1(Ω̄). For any u ∈ E(Γ), we have u = tΓ(u) for some t ∈ (0, 1).
"year": 2019,
"sha1": "ab3f4e198ac377fe0296b9c87a4442b282c94641",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00526-018-1472-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "3fcb7220a5c2557ea874d194b005afc9cebcade3",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Characterization of the IncFII-IncFIB(pB171) Plasmid Carrying blaNDM-5 in Escherichia coli ST405 Clinical Isolate in Japan
Purpose New Delhi metallo-β-lactamase 5 (NDM-5) confers stronger resistance to carbapenems and broad-spectrum cephalosporins than NDM-1, from which it differs by two amino acid substitutions. In this study, our aim was to characterize an NDM-5-producing Escherichia coli isolate, KY1497, from a patient with a urinary tract infection in Japan who had no recent history of overseas travel. Patients and Methods The NDM-5-producing E. coli isolate KY1497 was detected in the urine sample of a patient hospitalized in a tertiary hospital in Japan. The complete genome sequence of isolate KY1497 was determined by short- and long-read sequencing with hybrid assembly, followed by multilocus sequence typing (MLST), core-genome phylogeny analysis, plasmid analysis, and transconjugation experiments. Results KY1497 was classified as ST405 by MLST, and core-genome phylogeny exhibited the closest lineage to clinical isolates from Nepal (IOMTU605) and Canada (FDAARGOS_448). KY1497 harbors blaNDM-5 on the IncFII-IncFIB(pB171) replicon plasmid (pKY1497_1, 123,767 base pairs). Plasmid analysis suggested that the cognate plasmids of pKY1497_1 have a minor plasmid background, rather than that of the globally disseminated IncX3 plasmid carrying blaNDM-5. Transconjugation analysis revealed that pKY1497_1 is transmissible to the recipient E. coli J53 strain. Conclusion We characterized a novel Inc replicon plasmid (IncFII-IncFIB[pB171]) carrying blaNDM-5 and its host E. coli strain. NDMs pose a high risk worldwide because the resistance they confer renders infections hard to treat or untreatable. Other patients in the hospital tested negative for carbapenem-resistant Enterobacteriaceae. As NDM-producing strains are only sporadically detected in Japan, attention should be paid to the community prevalence of NDM-producing E. coli strains to prevent nosocomial infections.
Introduction
Bacterial resistance due to β-lactamases is increasingly associated with carbapenemases encoded on various plasmids. Among these newly emerging carbapenemases, New Delhi metallo-β-lactamase 1 (NDM-1) was first reported in 2009. 1 NDMs can hydrolyze all β-lactams except monobactams and pose a high risk of causing a global health crisis.
NDM-5 is a variant that differs from other NDM enzymes in that it contains two substitutions (Val88Leu and Met154Leu) and confers increased resistance to carbapenems and broad-spectrum cephalosporins. 2 In 2011, NDM-5 was first identified in the UK in a strain of Escherichia coli isolated from a patient with a recent history of hospitalization in India. 2 Escherichia coli strains possessing NDM-5 were subsequently reported to be prevalent in Denmark, France, and Algeria. 3 In Japan, detection of an NDM-5-producing clinical isolate of E. coli was first reported in 2014; this isolate belonged to sequence type (ST) 540, and the patient had traveled to Bangladesh. 4 Herein, we report the first detection of an NDM-5-producing E. coli strain belonging to ST405 in Japan.
Bacterial Isolates
In October 2015, a 79-year-old man was admitted to Kitasato University Hospital with a cervical spinal cord injury causing respiratory muscle paralysis and upper and lower limb weakness. The patient developed pneumonia on day 5 of hospitalization; therefore, empirical antimicrobial treatment with vancomycin (VCM) (1 g twice daily) and tazobactam/piperacillin (TAZ/PIPC) (3.5 g three times daily) was initiated. No notable pathogens, including gram-positive bacteria such as methicillin-resistant Staphylococcus aureus, were cultured from the blood, sputum, and pleural effusion specimens. Therefore, VCM treatment was discontinued after 6 days, while TAZ/PIPC treatment was continued for 29 days to manage the patient's condition. On hospital day 52, the patient developed a urinary tract infection, and a strain of carbapenem-resistant E. coli (strain KY1497) was isolated from a urine specimen, although the other patients in the hospital tested negative for carbapenem-resistant Enterobacteriaceae. The carbapenem-resistant E. coli was continuously isolated from the patient's stool and urine specimens, indicating persistent colonization. The patient's cardiopulmonary function gradually weakened, respiratory failure progressed, and the patient died on day 149. He had been admitted directly to our hospital and had no history of international travel.
Antimicrobial Susceptibility Testing
Antimicrobial susceptibility of the isolate was determined by microdilution according to the Clinical and Laboratory Standards Institute (CLSI) reference methods, 5 except that European Committee on Antimicrobial Susceptibility Testing (EUCAST) breakpoints 6 were used to evaluate tigecycline and polymyxin B. Two disks of ceftazidime and sodium mercaptoacetic acid (Eiken Chemical Co., Ltd., Tokyo, Japan) were used as indicators of metallo-βlactamase production.
Antimicrobial Resistance Gene Screening and Molecular Typing
Polymerase chain reaction (PCR) was performed to detect the bla IMP , bla VIM , bla NDM , bla OXA-48 , and CTX-M-1 group genes in the isolate. [7][8][9] The bacterial PFGE plug was digested with S1 nuclease, followed by PFGE using a previously reported method with some modifications, 10 and visible DNA bands, which were possible plasmids, were excised to extract DNA.
Whole-Genome Sequencing
DNA libraries were constructed using the Nextera XT sample prep kit according to the manufacturer's instructions (Illumina, Inc., San Diego, CA, USA), followed by next-generation sequencing (Miseq, Illumina, Inc.). 11 We performed long-read sequencing with PacBio RSII and obtained the resulting unitigs with HGAP v. 4.0 de novo assembler, followed by error-correction and complete genome sequence determination by Illumina short-read sequencing. Genome annotation was carried out using DFAST. 12 Strain genotyping was determined in silico by MLST (http://cge.cbs.dtu.dk/services/MLST/).
Plasmid Conjugation
Plasmid conjugation using the broth method 13 was carried out between the bla NDM5 -positive isolate KY1497 and sodium azide-resistant E. coli J53 as the recipient strain. Transconjugants were selected on selection plates supplemented with a combination of ceftriaxone (8 mg/L) and sodium azide (100 mg/L), and the presence of the NDM-5 gene was confirmed by PCR.
Ethics Statement
This study was approved by the research ethics committee of Kitasato University Hospital (approval no. B17-123) and complied with the Declaration of Helsinki. Written informed consent was obtained from the patient for publication of this case report.
Results and Discussion
Comparative Analysis of the IncFII-IncFIB Plasmids Harboring Carbapenemases
The complete genome sequence of KY1497 suggested that it belongs to ST405 and O102:H6 (Figure 1A) and carries three plasmids. blaNDM-5 is located on the 123.7-kb plasmid pKY1497_1, which is of the IncFII-IncFIB(pB171) replicon type (Figure 1B). S1-PFGE revealed three plasmid bands with lengths corresponding to those of the complete genome sequences (Figure 1C). Core-genome phylogeny of blaNDM-5-positive E. coli ST405 (19 strains in total) suggested that KY1497 is most closely related to the clinical isolates IOMTU605 in Nepal and FDAARGOS_448 in Canada, with 38 and 39 single nucleotide variants (SNVs), respectively (Figure 1A). These two strains carry a homologous blaNDM-5-positive plasmid (Figure 1B). Pair-wise alignment of pKY1497_1 displayed homologous regions with other IncFII-IncFIB(pB171) plasmids except for the qepA4 quinolone-resistance gene (Figure 1B). Regarding the IncFII-IncFIB(pB171) background, pKY1497_1 shares most of its plasmid backbone with pJJ1887-5 in E. coli JJ1887 (ST131), which carries other antimicrobial resistance genes rather than blaNDM-5, indicating that pJJ1887-5 is a likely ancestral plasmid for blaNDM-5 acquisition. 14 We detected blaNDM variants on plasmids >100 kb in size, with IncF, IncA/C, and untypeable replicons. Previous reports have indicated that the IncX3-type plasmid plays a major role in the global dissemination of NDM-producing Enterobacteriaceae. 11,15,16 In this study, molecular characterization of KY1497 revealed that it carried blaNDM-5 on a 123.7-kb plasmid harboring IncFII-IncFIB(pB171), suggesting that IncF plays a dissemination role similar to that of IncX3.
Strain Features
In the isolate, bla NDM was the only carbapenem-resistance gene detected. MLST analysis classified the KY1497 strain as E. coli ST405, suggesting that E. coli ST405 strains have the potential to become a reservoir for the bla NDM-5 gene. NDM-5-producing E. coli belonging to ST405 has previously been detected in Spain and Italy. 17 Escherichia coli ST405 was found to carry bla CTX−M , bla NDM , and a repertoire of virulence genes comparable to that of O25b:H4-ST131. 18 According to a previous study, among NDM-producing E. coli, ST405 was the fourth most commonly reported ST and the most abundant ST in Nepal and Europe, with the highest distribution in the UK and Italy. 19 Sporadic occurrence of NDM-5 producers in Japan has been reported; 3,4 however, NDM-5-producing E. coli belonging to ST405 had not previously been detected.
Transferability of bla NDM-5
KY1497 was resistant to fluoroquinolones and all β-lactams, including broad-spectrum cephalosporins and carbapenems, whereas it remained susceptible to tigecycline. The antimicrobial susceptibilities of the transconjugant derived from E. coli J53 were similar to those of the donor KY1497 strain, particularly for penicillins, cephalosporins, and tigecycline ( Table 1). The KY1497 strain successfully transferred the resistance plasmid at a frequency of 8.3 × 10 -6 , creating E. coli J53 KY1497T, suggesting the horizontal transfer of bla NDM-5 on the IncFII-IncFIB(pB171) plasmid. The IncF-type plasmid was conjugative, which may explain the rapid spread of NDM-carrying isolates. Therefore, effective and feasible measures must be taken immediately to control the dissemination of these resistance plasmids.
Travelers contribute significantly to the global spread of microbes and resistance genes. KY1497 was isolated from a non-traveler, suggesting acquisition from an autochthonous strain or transmission by undetected carriers. In this case, TAZ/PIPC had been prescribed for 29 days before KY1497 was detected. Long-term administration of broad-spectrum antibiotics may facilitate the selection and subsequent detection of carbapenem-resistant strains. 20,21 As the patient stayed in a private room, an environmental investigation was performed after his death. Swab samples from a shelf close to the bed, the hand-wash sink, the drain ditch, the inside of the bedpan, and the toilet-cleaning apparatus were cultured as possible sources of NDM-5 transmission; however, no NDM-5-producing bacteria were isolated. There have been previous reports of community-acquired NDM-producing isolates, indicating the existence of an undetected reservoir and potential transmission among colonized carriers in hospitals. 22,23 One limitation of this study is that possible undetected reservoirs of NDM-5-producing isolates were not investigated. Our results strongly emphasize that, although strains producing NDM enzymes are rarely reported among hospitalized patients in Japan, attention should be paid to the community prevalence of such strains to monitor future trends and prevent further horizontal spread.
Conclusion
In this study, a self-transmissible IncFII-IncFIB plasmid carrying bla NDM-5 , a novel Inc replicon type in this context, was detected in ST405 E. coli. We highlighted the dissemination potential of IncFII-IncFIB plasmids harboring bla NDM-5 . Effective infection-control measures should be taken to prevent nosocomial infections. | 2020-02-20T09:05:59.717Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "9288f6cc705ea7c681082e2056e00a54105e3b6e",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=56188",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "56d5e631d89669b4b9b054086ee1f4c025c875d6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
249889607 | pes2o/s2orc | v3-fos-license | A complete metric space without non-trivial separable Lipschitz retracts
We construct a complete metric space $M$ of cardinality continuum such that every non-singleton closed separable subset of $M$ fails to be a Lipschitz retract of $M$. This provides a metric analogue to the various classical and recent examples of Banach spaces failing to have linearly complemented subspaces of prescribed smaller density character.
Introduction
Given two metric spaces M and N , a map F : M → N is said to be Lipschitz if there exists a constant C > 0 such that d(F (x), F (y)) ≤ Cd(x, y) for all x, y ∈ M . The Lipschitz constant of F , denoted ‖F‖ Lip , is the smallest number verifying this inequality, i.e.: ‖F‖ Lip = sup{d(F (x), F (y))/d(x, y) : x, y ∈ M, x ≠ y}. We say that a map F is K-Lipschitz for K > 0 if ‖F‖ Lip ≤ K. Given a metric space M and its closed subset S, we say that a Lipschitz map R : M → S is a (Lipschitz) retraction from M onto S if R(x) = x for all x ∈ S. If there exists a K-Lipschitz retraction R : M → S for some K ≥ 1, then we say that S is a K-Lipschitz retract of M . Every singleton is trivially a Lipschitz retract in every metric space.
A search for nontrivial retracts is very natural, as they provide a grip on the structure of the original metric space M . In the linear setting (when the metric spaces are Banach spaces and the retractions are bounded projections), the study of projections (i.e., complemented subspaces) is one of the main themes of the theory. The main result of this paper can be stated as follows.
Theorem A. There exists a complete metric space M of cardinality continuum such that every non-singleton closed separable subset of M fails to be a Lipschitz retract of M .
The basic ingredient for our construction is provided by a modified example from Theorem 3.7 in [HQ22]. In the first two sections we generalize some of the arguments from [HQ22] and then we pass to the transfinite construction of the final example M . Our construction is self-contained but quite technical.
Let us put this example into the context of the aforementioned (linear or nonlinear) Banach space situation. In nonlinear Banach space theory, the study of the Lipschitz structure of Banach spaces is a classical topic with many deep results and open questions (we refer to [BL00] for a comprehensive exposition of nonlinear Banach space theory). One such important problem, going back at least to the seminal paper [Lin64] by J. Lindenstrauss, is whether every Banach space is a Lipschitz retract of its bidual. In [Kal11], N. Kalton proved that this fails for nonseparable spaces, while the separable case remains open. Let us briefly discuss a consequence of this conjecture for separable Banach spaces, which illustrates one of the main motivations for the result in this paper: Given a Banach space X and λ ≥ 0, a subspace Y of X is said to be λ-locally complemented if Y ** is linearly λ-complemented in X **. It follows from the characterization by J. Lindenstrauss and L. Tzafriri of Hilbert spaces in terms of closed subspaces with the Compact Extension Property (CEP) (in [LT71]), together with the equivalence between local complementability and the CEP (due to Kalton in [Kal84]), that in a non-Hilbert Banach space there always exist separable closed subspaces that are not locally complemented.
It can be shown that every Lipschitz retract of a Banach space is locally complemented (see for instance [HQ22]). Conversely, if a subspace Y is locally complemented in a Banach space X, then Y is a Lipschitz retract of X whenever Y is a Lipschitz retract of its bidual Y **. This follows directly by considering the restriction to X of the map R Y • P : X ** → Y , where P : X ** → Y ** is a linear projection onto Y ** and R Y : Y ** → Y is a Lipschitz retraction onto Y . This is an interesting consequence because every Banach space has a relatively rich structure of locally complemented subspaces of any density character. Indeed, it is a classical result of S. Heinrich and P. Mankiewicz ([HM82]) that in every Banach space X, given a closed subspace Z, one can always find a closed subspace Y containing Z with dens(Y ) = dens(Z) such that Y is 1-locally complemented in X.
On the other hand, the situation is quite different for the linear structure of Banach spaces. It is well known (see [Lin67]) that all infinite-dimensional complemented subspaces of the classical non-separable Banach space ℓ ∞ are again isomorphic to ℓ ∞ ; in particular, they are non-separable. A remarkable recent result of P. Koszmider, S. Shelah and M. Świętek in [KSŚ18] shows that, assuming the Generalized Continuum Hypothesis, for every cardinal κ there exists a compact topological space K such that the Banach space C(K) does not have any non-trivial complemented subspaces of density character less than or equal to κ.
Hence, at least in the separable case, the Lipschitz retractional structure of a Banach space could range from being as rich as the locally complemented structure (if the long-standing conjecture that separable Banach spaces are Lipschitz retracts of their biduals holds) to being more similar to the linearly complemented case.
The main result of this paper shows that, in the more general setting of metric spaces where Lipschitz functions are the natural morphisms, there exist metric spaces with no non-trivial separable Lipschitz retracts. The density character of the metric space we construct is the continuum.
To finish the discussion about the context of this result, we will touch on how it relates to current research on Lipschitz-free Banach spaces. Given a pointed metric space M, the Lipschitz-free space F(M) over M is the closed linear span in Lip 0 (M)* of the evaluation functionals δ M (x) for x ∈ M. This Banach space is also known as the Arens-Eells space (defined in [AE56]). It has been studied extensively in the last decades, especially after the publication of [GK03] by G. Godefroy and N. Kalton in 2003. The main property of Lipschitz-free spaces is the fact that given two metric spaces M, N and a Lipschitz map F : M → N there exists a linear operator F̂ : F(M ) → F(N ) with norm ‖F̂‖ = ‖F‖ Lip such that F̂ • δ M = δ N • F . If the space N is a subset of M , then the identity map restricted to N yields an isometric embedding of F(N ) into F(M ), and if R : M → N is a Lipschitz retraction, the associated map R̂ : F(M ) → F(N ) is a linear and bounded projection from F(M ) onto F(N ) (when the latter is considered as a subspace of the former). This fact implies that the Lipschitz retractional structure of a metric space passes on to the linear structure of the associated Lipschitz-free space.
The linear structure of Lipschitz-free spaces has been an active topic of research in the past two decades, starting with [GK03], where it is proven that every separable Banach space can be seen as a 1-complemented linear subspace of its Lipschitz-free space. Other recent results include for instance the fact that every Lipschitz-free space of a metric space M contains a complemented copy of ℓ 1 (Γ), where Γ is the density character of M (see [CDW16] and [HN17]), and that there exists a universal constant K ≥ 1 such that F(C) is K √ N -linearly complemented in F(R N ) for every closed subset C of the N -dimensional Euclidean space R N ([LP13]). A very small sample of articles that deal with the structure of Lipschitz-free spaces includes [Ali+21; Dal15; DKP16; DF06; GO14; Kal04; Kal11; Kau15].
A Banach space X has the Separable Complementation Property (SCP for short) if every separable subspace Z is contained in a separable subspace which is complemented in X. A Banach space is said to be Plichko if there exists a pair (∆, N ) where ∆ is a linearly dense subset of X and N is a norming subspace of X * such that the set {x ∈ ∆ : ⟨f, x⟩ ≠ 0} is countable for all f ∈ N . All Plichko Banach spaces have the SCP, and we do not know of any examples of Lipschitz-free spaces (over a metric space) which fail to be Plichko.
To obtain such an example, a necessary (but not sufficient) condition on the underlying metric space is of course that it must fail to have a nontrivial separable Lipschitz retractional structure. We do not know if the Lipschitz-free space associated to the metric space constructed in this article is Plichko (or even if it has the SCP).
The space F(ℓ ∞ ) could be a natural candidate for a Lipschitz-free space failing to have the SCP and thus failing to be Plichko, and it seems to be unknown at the moment whether F(ℓ ∞ ) does indeed fail these properties. However, P. Kaufmann and L. Candido have recently shown in [CK21] that its dual space Lip 0 (ℓ ∞ ) is linearly isomorphic to Lip 0 (c 0 (c)), where c denotes the cardinality of the continuum. Since c 0 (c) is Plichko, we have by Corollary 2.9 in [HQ22] that F(c 0 (c)) is Plichko as well. A similar remark to this effect was already made in [CK21].
Finally, let us discuss the structure of this article. As mentioned, the construction of our metric space is self-contained though technical. It is divided into three sections, going from section 2 to section 4. In section 2 we define the basic pieces of the construction, called threads. These threads are isometric to subsets of one-dimensional circles with the distance given by the arc-length. We will define uncountable families of totally disconnected threads which verify certain metric properties related to Lipschitz functions between these threads.
In section 3 we will use these uncountable families of threads to define the building blocks of the final metric space. These building blocks are called threading spaces, and each block is built from one of the uncountable families defined in the previous section. All threads that form each one of these threading spaces are attached to two anchor points {0, 1} in the threading space, and every one of these threading spaces verifies the weaker property that no separable subset containing both anchor points is a Lipschitz retract of the whole threading space.
In section 4 we finish by using these threading spaces to construct the final metric space via a transfinite inductive process of length ω 1 . We call the resulting complete metric space the skein space. Very informally, the skein space verifies that any pair of points behaves as the pair of anchor points of one of the threading spaces constructed in section 3. This way we have that any separable subset with more than one point contains two anchor points of a threading space, and hence it is not a Lipschitz retract of the whole skein. Although the inductive construction of the skein space is relatively straightforward, proving that it verifies the thesis of Theorem A is quite a technical process, and requires introducing some concepts and using a wide array of techniques, all of which are introduced when needed.
2. Construction of the fundamental pieces: Threads with infinitely many gaps
2.1. Threads. Let l, a > 0 with a ≤ l. We say that a metric space (T, d l,a ) is an R-thread of length l and width a if T is a closed subset of the real segment [0, l] containing 0 and l, and the metric d l,a is defined by d l,a (x, y) = min{|x − y|, min(x, y) + a + (l − max(x, y))} for every x, y ∈ T . Our main example will be constructed inductively by repeatedly adjoining metric spaces, isometric to a thread as described above, to the previous space. In this sense, the adjoined new pieces are certainly meant to be distinct sets. However, keeping this feature in mind, there is no danger of confusion if we simply call any metric space T a thread of length l and width a if T is isometric to an R-thread of length l and width a as defined above (and work with it using the above description). Let us mention some basic facts about threads. First, notice that every thread is a compact metric space. Also, we may define in every thread (T, d l,a ) the natural order and the Lebesgue measure, since the set T is a subset of the real line. Then, for every x, y ∈ T with x ≤ y we define the set [x, y] T ⊂ T as [x, y] ∩ T , where [x, y] is the usual real segment. The set [x, y] T with the inherited metric is again a thread.
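To make the definition concrete, the following minimal Python sketch (ours, not part of the original construction; it assumes the formula for d l,a reconstructed above, and the function name is hypothetical) evaluates the thread metric on a pair of points:

```python
def thread_distance(x: float, y: float, l: float, a: float) -> float:
    """Distance between x, y in a thread of length l and width a (0 < a <= l)."""
    straight = abs(x - y)                      # travel inside the segment [0, l]
    wrap = min(x, y) + a + (l - max(x, y))     # travel through the 0--l shortcut
    return min(straight, wrap)

# Sanity checks matching the text: d(0, l) = a, and the metric is locally the
# usual one when two points are close.
assert thread_distance(0.0, 1.0, l=1.0, a=0.25) == 0.25
assert thread_distance(0.4, 0.5, l=1.0, a=0.25) == abs(0.4 - 0.5)
```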
If T is a thread of length l, we say that a closed subset I of T is an extended interval of T if I is of the form [p, q] T = [p, q] ∩ T or [0, p] T ∪ [q, l] T for a pair of points p, q ∈ T with p < q. In either case, the points p and q are called the extreme points of I. See Figure 1 for a representation of a thread and the two kinds of extended intervals it contains.
Notice as well that in a thread of length l and width a, the distance between the extreme points 0 and l is exactly the width a. We can also realize that every thread is locally isometric to T with the usual metric inherited from the real line; indeed, if the distance between two points of T is less than the width of the thread, then this distance coincides with the usual metric. As a consequence, we have that if the length and the width of a thread coincide, then the thread is isometric to a subset of the real segment [0, l].
The way we compute the distance in threads implies that Lipschitz functions from threads into other metric spaces behave much like Lipschitz functions from intervals. Specifically, we have the following result: Proposition 2.1. Let T be a thread of length l T and width a T , let M be a metric space, and let K ≥ 0. A function F : T → M is K-Lipschitz if and only if d(F (0), F (l T )) ≤ Ka T , and for every x, y ∈ T we have d(F (x), F (y)) ≤ K|y − x|. Proof. Evidently, if F is K-Lipschitz, we directly obtain that d(F (0), F (l T )) ≤ Kd(0, l T ) = Ka T and the inequality d(F (x), F (y)) ≤ Kd(x, y) ≤ K|y − x|. Suppose now that the inequality is true for every pair of points in T , and take x ≤ y ∈ T . If d(x, y) = y − x, then we obtain directly that d(F (x), F (y)) ≤ Kd(x, y). Otherwise, we have that d(x, y) = x + a T + (l T − y). Therefore d(F (x), F (y)) ≤ d(F (x), F (0)) + d(F (0), F (l T )) + d(F (l T ), F (y)) ≤ Kx + Ka T + K(l T − y) = Kd(x, y). Hence, F is K-Lipschitz.
2.2. Lipschitz functions between threads with gaps. In a thread T , we say that a non-trivial open interval (x, y) ⊂ R is a gap of T if x, y ∈ T and (x, y) ∩ T = ∅. The points x, y of a gap C = (x, y) in a thread T are called the endpoints of C, and the value d(x, y) is the length of the gap. It is readily seen that a closed subset of R can have at most countably many distinct gaps. Hence, given any complete thread T ⊂ R, we may consider the sequence {C T k } k∈N of gaps in T . Moreover, since every thread T is bounded, its sequence of gaps can be ordered so that length(C T k+1 ) ≤ length(C T k ) for all k ∈ N.
We are going to study in detail the behavior of Lipschitz maps between threads with infinitely many gaps. We have the following property.
Lemma 2.2. Let T and S be two threads of length l T , l S and width a T , a S respectively. Let K ≥ 1, and suppose that there is no gap in T with length greater than or equal to a S /K. Then for every K-Lipschitz function F : T → S we have that |F (q) − F (p)| ≤ K|q − p| for all p, q ∈ T .
Proof. For any pair of points p, q ∈ T with p ≤ q there exists an increasing finite sequence (x k ) n k=1 ⊂ T with x 1 = p and x n = q such that x k+1 − x k < a S /K for all 1 ≤ k ≤ n − 1 (this is possible because no gap of T has length greater than or equal to a S /K). Then d(F (x k ), F (x k+1 )) ≤ Kd(x k , x k+1 ) ≤ K(x k+1 − x k ) < a S , so each of these distances is computed linearly in S, that is, d(F (x k ), F (x k+1 )) = |F (x k+1 ) − F (x k )|. Hence, we have |F (q) − F (p)| ≤ Σ n−1 k=1 |F (x k+1 ) − F (x k )| = Σ n−1 k=1 d(F (x k ), F (x k+1 )) ≤ K Σ n−1 k=1 (x k+1 − x k ) = K(q − p). The result is proven.
Next, we are going to prove an elementary proposition which will allow us to assume without loss of generality that the Lipschitz maps we consider are non-decreasing.
Proposition 2.3. Let K ≥ 1. Let T, S be two threads of length l T , l S and width a T , a S respectively, and let F : T → S be a K-Lipschitz function such that F (0) = 0 and F (l T ) = l S . Then there exists a non-decreasing Lipschitz function F̃ : T → S with ‖F̃‖ Lip ≤ ‖F‖ Lip such that F̃ (0) = 0 and F̃ (l T ) = l S .
Proof. Put K = ‖F‖ Lip . Notice that if T has a gap (p, q) of length greater than or equal to a S /K, then the result follows directly by putting F̃ (x) = 0 if x ≤ p, and F̃ (x) = l S if x ≥ q. Suppose then that there are no gaps in T with length greater than or equal to a S /K. Now define F̃ : T → S by F̃ (x) = max{F (z) : z ∈ T, z ≤ x}. Clearly, F̃ is non-decreasing with F ≤ F̃ , F̃ (0) = 0 and F̃ (l T ) = l S . It only remains to see that ‖F̃‖ Lip ≤ K. Using Proposition 2.1, we only need to prove that given p, q ∈ T with p ≤ q, we have (1) F̃ (q) − F̃ (p) ≤ K|q − p|. Observe that F̃ (q) = F (z) for some z ≤ q. If z ≤ p we necessarily have that F̃ (q) = F̃ (p) and the equation is trivially verified. Otherwise, using F̃ (p) ≥ F (p) and Lemma 2.2, we obtain F̃ (q) − F̃ (p) ≤ F (z) − F (p) ≤ K|z − p| ≤ K|q − p|, and equation (1) is proven.
We conclude that ‖F̃‖ Lip ≤ K and the result is proven.
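The monotone replacement used in this proof admits a direct finite illustration. The following sketch (ours; it assumes the thread is sampled by a finite increasing list of points, and monotone_envelope is a hypothetical helper) computes the running-maximum envelope F̃ on the sample:

```python
from itertools import accumulate
from typing import Callable, List

def monotone_envelope(points: List[float], F: Callable[[float], float]) -> List[float]:
    """Values of the non-decreasing envelope F~(x) = max{F(z) : z <= x} on the sample."""
    values = [F(x) for x in points]         # original (possibly oscillating) values
    return list(accumulate(values, max))    # running maximum = F~ on the sample

# Example: an oscillating map on a sample of a thread becomes non-decreasing.
sample = [0.0, 0.1, 0.2, 0.6, 1.0]
print(monotone_envelope(sample, lambda x: x * (1 - x)))
```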
We can also use Lemma 2.2 to prove a similar result to the one above.
Proposition 2.4. Let T and S be two threads with length l T and l S , and width a T and a S respectively. Let K ≥ 1. Suppose there exists a K-Lipschitz function F : T → S such that F (0) = A and F (l T ) = B, for two points A, B ∈ S with A < B. If T does not have any gap of length greater than or equal to a S /K, then the function F̃ : T → S defined by F̃ (x) = A if F (x) ≤ A, F̃ (x) = B if F (x) ≥ B, and F̃ (x) = F (x) otherwise, is K-Lipschitz as well.
Proof. As before, by Proposition 2.1 we only need to check that for every p, q ∈ T with p ≤ q, we have d(F̃ (p), F̃ (q)) ≤ K|q − p|. We will only prove the case when F (p) ≤ A and F (q) ∈ [A, B] S , since the remaining possibilities are shown similarly. By Lemma 2.2, we have in this case that d(F̃ (p), F̃ (q)) = d(A, F (q)) ≤ F (q) − A ≤ F (q) − F (p) ≤ K|q − p|. We conclude that ‖F̃‖ Lip ≤ K.
Let us now give some definitions and prove some technical results which will be heavily used in the proof of the main theorem of the section. Let T and S be two threads, and suppose there is a Lipschitz function F : T → S which is non-decreasing. We say that a gap (p T , q T ) in T jumps over a gap (p S , q S ) in S with respect to F if F (p T ) ≤ p S and F (q T ) ≥ q S (see Figure 2).
The first lemma we prove says intuitively that if we have a non-decreasing Lipschitz function F between two threads T and S that fixes the extreme points of the threads, then every gap in S must be jumped by a gap in T with respect to F . Although this result is fairly intuitive, we include the (simple) proof for completeness.
The gap (p, q) jumps over (x, y) with respect to F .
Lemma 2.5. Let T and S be two threads of length l T and l S respectively. Suppose that there is a non-decreasing Lipschitz function F : T → S such that F (0) = 0 and F (l T ) = l S . Let C S be a gap in S. Then there exists a gap in T that jumps over C S with respect to F .
Proof. Define p S , q S ∈ S such that C S = (p S , q S ). Consider the points: e − = max{F (x) ∈ S : x ∈ T, F (x) ≤ p S } and e + = min{F (y) ∈ S : y ∈ T, F (y) ≥ q S }.
These maximum and minimum values always exist since we have that F (0) = 0 and F (l T ) = l S , and T is compact. Hence, we can find x − = max{x ∈ T : F (x) = e − } and y + = min{y ∈ T : F (y) = e + }.
Since F is non-decreasing and e − < e + , we have that x − < y + and (x − , y + ) T = ∅. Moreover, both x − and y + belong to T again by compactness, so (x − , y + ) is a gap in T . The gap (x − , y + ) jumps over (p S , q S ) with respect to F . The second lemma we prove can also be easily deduced and is intuitively clear. It shows that if a small enough gap (in a sense that is made explicit in the statement of the lemma) C T in a thread T jumps over several gaps in a thread S simultaneously with respect to a Lipschitz function F , then the length of C T must be at least the length of the smallest interval that contains all the gaps C T jumps over, divided by the Lipschitz constant of F . Lemma 2.6. Let K > 1, and let T and S be two threads of length l T and l S respectively. Denote by a S the width of S. Suppose that there is a non-decreasing Lipschitz function F : T → S with ‖F‖ Lip = K such that F (0) = 0 and F (l T ) = l S . Let C T be a gap in T such that length(C T ) < a S /K, and let (x j , y j ) k j=1 be a finite collection of different gaps in S. If C T jumps over (x j , y j ) with respect to F for all 1 ≤ j ≤ k, then K · length(C T ) ≥ max 1≤j≤k y j − min 1≤j≤k x j . Proof. Put C T = (p, q) with p, q ∈ T . Since C T jumps over (x j , y j ) for all 1 ≤ j ≤ k, we have that F (p) ≤ x j and F (q) ≥ y j . Hence, we have that F (q) − F (p) ≥ max 1≤j≤k y j − min 1≤j≤k x j . Since d(p, q) < a S /K, when computing the distance between F (p) and F (q) in the thread S, we necessarily have that d(F (p), F (q)) = F (q) − F (p). Therefore, applying that F is K-Lipschitz we obtain: max 1≤j≤k y j − min 1≤j≤k x j ≤ F (q) − F (p) = d(F (p), F (q)) ≤ Kd(p, q) = K · length(C T ), and the result is proven.
We are also going to define a particular kind of interval which will be useful in the proof of Theorem 2.8. Let (a, b) ⊂ [0, 1] be a nontrivial open interval, and let r > 0. We define the sweeping of [a, b] by r as the interval D r (a, b) = (b − r, a + r). Notice that if r ≤ (b − a)/2, then D r (a, b) = ∅. We can prove two simple properties about this concept.
Proposition 2.7. The following properties are verified: (1) Let (a, b) ⊂ [0, 1] be a nontrivial open interval, and let r > 0. Then the Lebesgue measure of the sweeping D r (a, b) is less than 2r.
(2) Let r > 0, and let T and S be threads of length l T and l S respectively. Let F : T → S be a non-decreasing K-Lipschitz map such that F (0) = 0 and F (l T ) = l S . Suppose that there is a gap C T in T such that length(C T ) < a S /K, and such that C T jumps over two gaps C S 1 , C S 2 in S with respect to F . Moreover, suppose that C S 2 ⊄ D r (C S 1 ). Then K · length(C T ) > r. Proof. Statement (1) is easy to see, since the measure of D r (a, b) = (b − r, a + r) is (a + r) − (b − r) = 2r − (b − a) < 2r whenever it is nonempty. For statement (2), put C S 1 = (x 1 , y 1 ), C S 2 = (x 2 , y 2 ) with x 1 , x 2 , y 1 , y 2 ∈ S. Notice that if C S 2 ⊄ D r (C S 1 ), this means that either y 1 − r − x 2 > 0, or y 2 − x 1 − r > 0. In any case, we obtain that max{y 1 , y 2 } − min{x 1 , x 2 } > r. Now, since C T jumps over C S 1 and C S 2 simultaneously and length(C T ) < a S /K, the result follows from Lemma 2.6.
In Figure 3 we have a representation of the situation in (2) of the previous result.
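The sweeping and the containment test of Proposition 2.7 are easy to make explicit. Here is a small Python sketch (ours; function names are hypothetical) assuming the reconstructed formula D r (a, b) = (b − r, a + r):

```python
def sweeping(a: float, b: float, r: float):
    """Return D_r(a, b) as a pair (lo, hi), or None when r <= (b - a)/2 (empty)."""
    lo, hi = b - r, a + r
    return (lo, hi) if lo < hi else None

def escapes_sweeping(gap, swept) -> bool:
    """True when the gap (x2, y2) is NOT contained in the swept interval."""
    if swept is None:
        return True
    x2, y2 = gap
    lo, hi = swept
    return x2 < lo or y2 > hi   # i.e. y1 - r - x2 > 0 or y2 - x1 - r > 0

# If a single gap of T jumps over (x1, y1) and over a second gap escaping
# D_r(x1, y1), Proposition 2.7(2) forces K * length(C_T) > r.
print(sweeping(0.2, 0.3, 0.2))          # (0.1, 0.4): measure 0.3 < 2r = 0.4
print(escapes_sweeping((0.05, 0.08), sweeping(0.2, 0.3, 0.2)))  # True
```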
Theorem 2.8. Let K ≥ 1, let {T n } n∈N be a countable family of threads of length l n and width a n respectively for each n ∈ N, and let ε > 0 be such that for every n ∈ N: • The Lebesgue measure of T n is bigger than ε.
• If I is an open interval such that I ∩ T n is nonempty, then there exist infinitely many gaps of T n contained in I. Then, there exists a decreasing sequence γ * = (γ * k ) k∈N of positive real numbers with the following property: Let 1 ≤ K ′ ≤ K, let S be a thread of length l S , and let {C S k } k∈N be the sequence of gaps of S ordered decreasingly according to their length. If length(C S k ) ≤ γ * k for all k ∈ N, then for every n ∈ N such that length(C S 1 ) < a n /K ′ there does not exist any K ′ -Lipschitz function F : S → T n such that F (0) = 0 and F (l S ) = l n .
Proof. For each n ∈ N, let {G n k } k∈N be the sequence of gaps of T n ordered decreasingly according to their length, and put α n k = length(G n k ). Then α n = (α n k ) k∈N is the decreasing sequence of lengths of the gaps in the thread T n for each n ∈ N.
We are going to construct inductively, by a diagonal method, the sequence γ * = (γ * k ) k∈N with the following properties for every k ∈ N: (1) 0 < γ * k ≤ γ * k−1 for k ≥ 2, and Kγ * k < 2 −(k+1) ε. (2) Let 1 ≤ K ′ ≤ K, let S be a thread, and let {C S i } i∈N be the sequence of gaps of S ordered decreasingly according to their length. If length(C S i ) ≤ γ * i for all i ≤ k, then for every n ≤ k such that length(C S 1 ) < a n /K ′ , there is no K ′ -Lipschitz map F : S → T n with F (0) = 0 and F (l S ) = l n .
For the first step, choose γ * 1 > 0 such that Kγ * 1 < min{α 1 1 , 2 −2 ε}. Let S be a thread, let {C S i } i∈N be the sequence of its gaps in decreasing length order, and suppose that length(C S 1 ) ≤ γ * 1 . Suppose by contradiction that length(C S 1 ) < a 1 /K ′ , and that there exists a Lipschitz map F : S → T 1 with ‖F‖ Lip ≤ K ′ and F (0) = 0 and F (l S ) = l 1 . We assume F to be non-decreasing by Proposition 2.3.
The thread T 1 has the gap G 1 1 of length α 1 1 . By Lemma 2.5, there exists i ∈ N such that the gap C S i in S jumps over G 1 1 . Then, by Lemma 2.6, since K ′ · length(C S i ) ≤ Kγ * 1 < α 1 1 , we reach a contradiction. The first step of the induction is done.
Suppose now that we have constructed γ * 1 , . . . , γ * k verifying the desired properties for k ∈ N. Before continuing with the proof, let us informally give some intuition of the technical argument that follows. We want to define the next element in the sequence (that is, γ * k+1 ) small enough so that any thread S with gaps smaller than the first k + 1 elements of γ * , and smaller than a k+1 , cannot be mapped with a K-Lipschitz function that preserves the extremes into the thread T k+1 . However, since the first k elements of γ * are already set and do not depend on the next thread T k+1 , the first k gaps in S could jump over many gaps in T k+1 , and in many different ways. Nevertheless, as we are going to see, the fact that T k+1 is of large enough measure and contains infinitely many gaps in each intersecting open interval, and the way in which we chose the first k elements in γ * , ensures that there will always be infinitely many gaps in T k+1 that cannot be jumped over in any way by the first k gaps of S under any suitable K-Lipschitz function. Hence, we will be able to define γ * k+1 small enough so that the biggest of these "unjumped" gaps in T k+1 cannot be jumped over either by the remaining gaps of S. We just need to account for all the possibilities in which the first k gaps of S might behave under a K-Lipschitz function. Let σ = (j 1 , . . . , j k ) ∈ Ω k be an ordering of the set {1, . . . , k}, where Ω k denotes the (finite) set of all such orderings. For convenience of the notation, put j 0 = 0 and n j 0 = 1, and define D (j 0 ,j 1 ) as the sweeping of G k+1 n j 0 by 2 −(j 1 +1) ε. Since the measure of D (j 0 ,j 1 ) is at most 2 −j 1 ε < ε and the complement of D (j 0 ,j 1 ) in [0, l k+1 ] is a finite union of open intervals, by hypothesis there must exist infinitely many gaps in T k+1 \ D (j 0 ,j 1 ) . We can then consider n (j 0 ,j 1 ) = min{n > n j 0 : G k+1 n ⊄ D (j 0 ,j 1 ) }. Intuitively, G k+1 n (j 0 ,j 1 ) is the biggest gap of T k+1 smaller than G k+1 n j 0 which is not contained in the sweeping D (j 0 ,j 1 ) . We continue the process defining D (j 0 ,j 1 ,j 2 ) as the union of D (j 0 ,j 1 ) with the sweeping of G k+1 n (j 0 ,j 1 ) by 2 −(j 2 +1) ε. The measure of D (j 0 ,j 1 ,j 2 ) is at most (2 −j 1 + 2 −j 2 )ε < ε, and its complement in [0, l k+1 ] is still a finite union of open intervals, so by the properties of T k+1 we can make the same argument as before to find n (j 0 ,j 1 ,j 2 ) = min{n > n (j 0 ,j 1 ) : G k+1 n ⊄ D (j 0 ,j 1 ,j 2 ) }, which gives the biggest gap of T k+1 smaller than G k+1 n (j 0 ,j 1 ) not contained in D (j 0 ,j 1 ,j 2 ) . Repeating this process k times, we can define n σ = n (j 0 ,j 1 ,...,j k ) , so that G k+1 n σ is not contained in D (j 0 ,...,j i ) for any 1 ≤ i ≤ k, and is smaller than G k+1 n (j 0 ,...,j i−1 ) for every 1 ≤ i ≤ k − 1. Notice that this last condition can be written as: (3) α k+1 n σ ≤ α k+1 n (j 0 ,...,j i−1 ) for every 1 ≤ i ≤ k − 1. Clearly Ω k is a finite set, so we can define n Ω k = max{n σ : σ ∈ Ω k }. The corresponding gap G k+1 n Ω k is smaller than or equal to each G k+1 n σ . Equivalently, we have that α k+1 n Ω k = min{α k+1 n σ : σ ∈ Ω k }. We now set γ * k+1 = min{γ * k , α k+1 n Ω k /(2K), 2 −(k+2) ε/(2K)}. Again, property (1) of the induction is verified. Let 1 ≤ K ′ ≤ K, and let S be a thread whose sequence of gaps {C S i } i∈N , ordered decreasingly in length, verifies that length(C S i ) ≤ γ * i for all i ≤ k + 1. Applying the inductive hypothesis, since the result is assumed to be true for k, we only need to prove that if length(C S 1 ) < a k+1 /K ′ , there is no Lipschitz map F : S → T k+1 with ‖F‖ Lip ≤ K ′ and F (0) = 0 and F (l S ) = l k+1 . Suppose by contradiction that such a map F exists. Again, we may assume F to be non-decreasing.
Put again j 0 = 0 and n j 0 = 1, and consider the gap G k+1 1 in T k+1 (that is, the biggest gap of T k+1 ). By Lemma 2.5, there exists j 1 ∈ N such that the gap C S j 1 in S jumps over G k+1 1 . Since G k+1 1 has length α k+1 1 and K ′ · length(C S i ) ≤ Kγ * k+1 < α k+1 n Ω k ≤ α k+1 1 for every i ≥ k + 1, by Lemma 2.6 we have that j 1 ≤ k.
Therefore, we can define n (j 0 ,j 1 ) = min{n > n j 0 : G k+1 n ⊄ D (j 0 ,j 1 ) } as before. Consider now the gap G k+1 n (j 0 ,j 1 ) in T k+1 , and take j 2 such that C S j 2 jumps over G k+1 n (j 0 ,j 1 ) . Again, since K ′ · length(C S k+1 ) < α k+1 n Ω k ≤ α k+1 n (j 0 ,j 1 ) , we obtain that j 2 ≤ k. Moreover, j 2 is different from j 1 . Indeed, if j 2 = j 1 , then C S j 1 jumps over both G k+1 n j 0 and G k+1 n (j 0 ,j 1 ) . By the choice of n (j 0 ,j 1 ) , the gap G k+1 n (j 0 ,j 1 ) is not contained in D (j 0 ,j 1 ) , so Proposition 2.7 yields K ′ · length(C S j 1 ) > 2 −(j 1 +1) ε, which contradicts property (1) of the sequence γ * , since length(C S j 1 ) ≤ γ * j 1 . We can repeat this process k times until we obtain a sequence σ = (j 1 , . . . , j k ) ∈ Ω k of pairwise different indices j 1 , . . . , j k ≤ k. To finish the proof, consider the gap G k+1 n σ in T k+1 , where n σ is defined as above for σ ∈ Ω k , and take ĩ such that C S ĩ jumps over G k+1 n σ . Reasoning as before, ĩ must be different from each of j 1 , . . . , j k , so ĩ ≥ k + 1; but then, by the choice of γ * k+1 and using equation (3), K ′ · length(C S ĩ ) ≤ Kγ * k+1 < α k+1 n Ω k ≤ α k+1 n σ , which contradicts Lemma 2.6. The induction is finished and the result follows.
Compact subsets of the real line with positive measure that contain no nontrivial intervals have been considered many times before: the well-known fat Cantor sets are examples of this kind of object. For our purposes, we need to find threads with these properties and whose gaps are smaller than any given decreasing sequence of positive real numbers. For completeness, we include the construction of such threads in the following subsection.
Let us first finish this subsection by making a simple remark: Remark 2.9. Let M be a complete metric space, let K ≥ 1, and let S 1 , S 2 be two closed subsets of M such that d(S 1 , S 2 ) = ε > 0. If T is a thread whose sequence of gaps {C T k } k∈N verifies length(C T k ) < ε/K for all k ∈ N, then there is no K-Lipschitz map F : T → S 1 ∪ S 2 such that F (0) ∈ S 1 and F (p) ∈ S 2 for some p ∈ T .
Proof. Suppose that such a map F exists and consider the point p ∈ T with F (p) ∈ S 2 . Since every gap of T has length smaller than ε/K, there exists a finite increasing sequence (x k ) n k=1 ⊂ T with x 1 = 0 and x n = p such that d(x k , x k+1 ) < ε/K, and hence d(F (x k ), F (x k+1 )) ≤ Kd(x k , x k+1 ) < ε, for all 1 ≤ k ≤ n − 1. Since d(S 1 , S 2 ) = ε and F (x 1 ) = F (0) ∈ S 1 , an immediate induction shows that F (x k ) ∈ S 1 for every k, contradicting the fact that F (x n ) = F (p) ∈ S 2 .
2.3. Construction of threads with infinitely many gaps. Our objective now is to define a collection of closed subsets of the real segment [0, 1] containing 0 and 1 which, when given a thread metric, will verify the hypothesis of Theorem 2.8 for ε = 1/2. For the rest of this section, fix Q ∩ (0, 1) = (q n ) ∞ n=1 , an ordering of the rational numbers in the interval (0, 1). Consider decreasing sequences of real numbers γ = (γ i ) i∈N verifying: (i) γ i > 0 for all i ∈ N; (ii) γ i ≤ 2 −(i+1) for all i ∈ N; (iii) q 1 + γ 1 < 1. Put ∆ = {γ = (γ i ) i : γ is decreasing and verifies (i), (ii) and (iii)} for the rest of the article.
For any given γ ∈ ∆, we define {G γ i } i∈N inductively as the following open subintervals of (0, 1): G γ 1 = (q 1 , q 1 + γ 1 ), and for i ≥ 2: G γ i = (q n i , q n i + γ i ), where n i = min{n ∈ N : q n + γ i < 1 and [q n , q n + γ i ] ∩ (G γ 1 ∪ · · · ∪ G γ i−1 ) = ∅}. Note that property (ii) of γ guarantees that n i exists for all i ∈ N.
Using this, we define the closed subset T γ ⊂ [0, 1] as T γ = [0, 1] \ ⋃ i∈N G γ i . The definition of {G γ i } i∈N and T γ for every γ ∈ ∆ is fixed for the rest of the article.
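The greedy placement of the gaps G γ i can be mirrored computationally. Below is a finite Python sketch (ours; build_gaps is a hypothetical helper and only finitely many gaps are placed) following the reconstructed rule above: each new gap sits at the first rational of the enumeration whose interval fits inside (0, 1) and misses all earlier gaps.

```python
from fractions import Fraction

def build_gaps(rationals, gamma):
    """Return the list of gaps (left, right) for the finitely many given gamma_i."""
    gaps = []
    for g in gamma:
        for q in rationals:
            candidate = (q, q + g)
            # the open interval must stay inside (0, 1) and miss earlier gaps
            if candidate[1] < 1 and all(candidate[1] <= lo or candidate[0] >= hi
                                        for lo, hi in gaps):
                gaps.append(candidate)
                break
    return gaps

# A toy enumeration of rationals in (0, 1) and a rapidly decreasing gamma.
qs = [Fraction(1, 2), Fraction(1, 3), Fraction(2, 3), Fraction(1, 4), Fraction(3, 4)]
print(build_gaps(qs, [Fraction(1, 8), Fraction(1, 16), Fraction(1, 32)]))
```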
Proposition 2.10. Let γ ∈ ∆, and let {G γ i } i∈N and T γ be defined as above. Then T γ is a compact subset of [0, 1] that verifies: (1) The Lebesgue measure of T γ is greater than or equal to 1/2.
(2) The points 0 and 1 belong to T γ . (3) The sequence of gaps of T γ is the sequence {G γ i } i∈N , and length(G γ i ) = γ i for all i ∈ N. (4) If I is an open interval such that I ∩ T γ is nonempty, then there exist infinitely many gaps of T γ contained in I.
Proof. Notice that the Lebesgue measure of T γ is greater than or equal to 1 − Σ ∞ i=1 2 −(i+1) = 1/2 for all possible γ by property (ii), and the points 0 and 1 are not in G γ i for any i ∈ N by construction and property (iii), so (1) and (2) are clear.
To see (3), we need to prove that G γ i is a gap in T γ for all i ∈ N, and that every gap of T γ is one of the G γ i for some i ∈ N. Consider an interval G γ i . Its endpoints q n i and q n i + γ i cannot belong to any G γ j with j ≠ i: otherwise the open intervals G γ i and G γ j would overlap, a contradiction with the choice of n i and n j . Hence both endpoints belong to T γ , and since (q n i , q n i + γ i ) ∩ T γ = ∅ by construction, we conclude that G γ i is a gap of T γ . Next, let x, y ∈ T γ with x < y and (x, y) T γ = ∅. For a point p ∈ (x, y), since p ∉ T γ , there must exist i ∈ N such that p ∈ G γ i . The interval G γ i is contained in (x, y) because both x and y belong to T γ . Moreover, since the endpoints of G γ i belong to T γ , we necessarily have that G γ i = (x, y), and we are done with (3).
Finally, to see (4), note first that the set T γ is nowhere dense, since it contains no intervals. Indeed, suppose there is an interval (x, x + δ) ⊂ T γ for some x ∈ [0, 1] and δ > 0 with x + δ < 1. The subinterval (x, x + δ/2) contains a rational number q n 0 . Since (γ i ) ∞ i=1 is decreasing and converges to 0, there must exist i 0 such that γ i < δ/2 for all i ≥ i 0 . Then, for all i ≥ i 0 , the natural number n 0 verifies that (q n 0 , q n 0 + γ i ) ⊂ (x, x + δ) ⊂ T γ , and in particular [q n 0 , q n 0 + γ i ] is disjoint from every gap G γ j . Therefore, there must exist i 1 ≥ i 0 such that n 0 = min{n ∈ N : q n + γ i 1 < 1 and [q n , q n + γ i 1 ] ∩ (G γ 1 ∪ · · · ∪ G γ i 1 −1 ) = ∅}, which implies that G γ i 1 = (q n 0 , q n 0 + γ i 1 ); a contradiction with the assumption that (q n 0 , q n 0 + δ/2) ⊂ T γ . Iterating this argument inside any open interval meeting T γ produces infinitely many gaps of T γ contained in it, which proves (4). Now, given any γ ∈ ∆ and any 0 < a ≤ 1, we may assign the metric d 1,a as defined at the beginning of this section to the set T γ , so that (T γ , d 1,a ) is a thread. We will denote by T γ (1, a) the thread of length 1 and width a formed by endowing the subset T γ as defined above for γ ∈ ∆ with the metric d 1,a .
With Proposition 2.10 we have that any countable family of these threads verifies the hypothesis of Theorem 2.8. In fact, any countable family of subthreads with measure uniformly bounded from below also verifies the hypothesis of Theorem 2.8. Moreover, given such a countable family of threads {T n } n∈N and K ≥ 1, for the sequence γ * = {γ * k } k∈N obtained by Theorem 2.8 we can always find another sequence γ 0 ∈ ∆ such that γ 0 k ≤ γ * k for all k ∈ N. Hence, there exists a thread of the form T γ 0 (1, a) that cannot be mapped with a K-Lipschitz function preserving the extreme points onto any T n for any n ∈ N, provided a < width(T n )/K.
Observe that thanks to the properties of the generic sets T γ , we can obtain the following fact about the thread T γ (1, a): Proposition 2.11. Let γ ∈ ∆, and let T γ (1, a) be the thread of length 1 and width a associated with γ. Then T γ (1, a) is totally separated, i.e.: If p and q are two different points in T γ (1, a), there exist two disjoint open and closed subsets S 1 , S 2 ⊂ T γ (1, a) such that p ∈ S 1 , q ∈ S 2 , and T γ (1, a) = S 1 ∪ S 2 . As a consequence, for any point p ∈ T γ (1, a) and any ε > 0, there exists an open and closed subset S of diameter less than ε such that p ∈ S.
Proof. Put T = T γ (1, a). Let p and q be two different points in T . Suppose without loss of generality that p < q. If the interval (p, q) T is empty, then the result follows considering S 1 = [0, p] T and S 2 = [q, 1] T . Otherwise, if (p, q) T is nonempty, by property (4) in Proposition 2.10 there exists a gap (x, y) in T such that p < x and y < q. Put now S 1 = [0, x] T and S 2 = [y, 1] T and the result is proven.
The second statement follows immediately from the first and from the linear structure of threads.
When dealing with Lipschitz functions between threads, the image of the extreme points of a thread plays an important role for some technical arguments (as can be seen in the previous subsection). This is the motivation for introducing the next result.
Proposition 2.12. Let T and S be two threads of length l T and l S , and width a T and a S respectively. Let K ≥ 1. Suppose T is totally separated. Consider S 1 and S 2 open subsets of S, and take D 1 and D 2 dense subsets of S 1 and S 2 respectively.
Let F : T → S be a K-Lipschitz function such that F (0) = A and F (l T ) = B with A ∈ S 1 and B ∈ S 2 . Then for every ε > 0, there exists a pair of points P, Q ∈ T with P < Q, a pair of points Ã ∈ D 1 and B̃ ∈ D 2 , and a (K + ε)-Lipschitz function F̃ : T → S which coincides with F on [P, Q] T and verifies F̃ (0) = Ã and F̃ (l T ) = B̃. Proof. Since T is totally separated, for P, Q ∈ T with P < Q close enough to 0 and l T respectively, the sets [0, P ] T and [Q, l T ] T are open and closed in T and of arbitrarily small diameter. By density, we can find Ã ∈ D 1 and B̃ ∈ D 2 such that d(F (P ), Ã) and d(F (Q), B̃) are as small as required. Defining F̃ as F on [P, Q] T , as Ã on [0, P ] T and as B̃ on [Q, l T ] T , it is now routine to check that F̃ is (K + ε)-Lipschitz.
3. Construction of the building blocks: Threading metric spaces
We now want to use the threads T γ (1, a) we defined for γ ∈ ∆ and 0 < a ≤ 1 to construct non-separable complete metric spaces that will act as building blocks of the final metric space. To do this, we first formalize the notion of attachment of metric spaces, which will allow us to "glue" metric spaces in a convenient way. This concept has been used in many contexts in the literature, but we choose to include a definition tailored to our needs.
Definition 3.1. Let (M, d M ) be a metric space, let N = {N γ } γ∈Γ be a family of complete metric spaces, and for each γ ∈ Γ let S γ be a compact subset of N γ together with an isometric embedding Φ γ : S γ → M ; put S = {S γ } γ∈Γ . The attachment M (N , S) of M with N by S is the disjoint union of M and the sets N γ \ S γ for γ ∈ Γ (each s ∈ S γ being identified with Φ γ (s) ∈ M ), endowed with the metric d N ,S which restricts to the original metric on M and on each N γ , and which is given for p ∈ N γ and x ∈ M by d N ,S (p, x) = min s∈S γ {d N γ (p, s) + d M (Φ γ (s), x)}, and for p ∈ N γ and q ∈ N η with γ ≠ η by d N ,S (p, q) = min s∈S γ ,t∈S η {d N γ (p, s) + d M (Φ γ (s), Φ η (t)) + d N η (t, q)}.
Notice that both minima used in the definition of d N ,S are well defined by compactness of the sets S γ for each γ ∈ Γ. Moreover, it is straightforward to check that the map d N ,S defines a complete metric in M (N , S).
It is also clear from the definition that the metric space M (N , S) contains M isometrically, as well as an isometric copy of N γ for each γ ∈ Γ. We may write M ⊂ M (N , S) and N γ ⊂ M (N , S) by virtue of this fact.
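Since the attachment distance drives everything that follows, here is a minimal Python sketch (ours; all names are hypothetical) of the distance between a point of an attached piece and a point of the base space, together with a toy computation inside a threading space Th(A, B):

```python
def thread_distance(x, y, l, a):
    # thread metric, as in the earlier sketch of Section 2
    return min(abs(x - y), min(x, y) + a + (l - max(x, y)))

def attachment_distance(p, x, contacts, d_piece, d_base, Phi):
    """d(p, x) = min over s in S_gamma of d_piece(p, s) + d_base(Phi(s), x)."""
    return min(d_piece(p, s) + d_base(Phi(s), x) for s in contacts)

# Toy example: Th(A, B) of width a = 1/4; the thread extremes 0 and 1 are
# identified with the anchors A and B respectively.
a = 0.25
d_base = lambda u, v: 0.0 if u == v else a            # base space M = {A, B}
Phi = {0.0: "A", 1.0: "B"}.get                         # contact map on S_gamma
d_piece = lambda u, v: thread_distance(u, v, 1.0, a)   # a thread of length 1
print(attachment_distance(0.1, "B", [0.0, 1.0], d_piece, d_base, Phi))  # ~0.35
```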
With this concept, we can now define the aforementioned building blocks of the main metric space we seek to construct: Definition 3.2. Consider a metric space M = {A, B} formed by two points at a distance 0 < a ≤ 1. Let N a = {T γ (1, a)} γ∈∆ , where ∆ is the set of sequences defined in Section 2, and T γ (1, a) is the thread associated with γ of width a. We may consider T γ (1, a) and T η (1, a) to be disjoint for γ ≠ η. For each T γ (1, a), put S γ = {0 γ , 1 γ }, the set of the two extreme points of T γ (1, a). We let S a = {S γ } γ∈∆ , and Φ γ : S γ → M the isometric embedding given by Φ γ (0 γ ) = A and Φ γ (1 γ ) = B (which is indeed isometric, since d(0 γ , 1 γ ) = a = d(A, B)). We define the threading space Th(A, B) to be the attachment of M with N a by S a . We say that Th(A, B) is anchored at A and B, and these two points are called the anchors of Th(A, B). If a threading space Th(A, B) is fixed and there is no room for ambiguity, we write T γ (1, a) ⊂ Th(A, B) to denote the isometric copy of the thread T γ (1, a) contained in the threading space Th(A, B). By definition, in a metric space M (N , S) formed by attachment (and in threading spaces in particular), given a point p ∈ N γ for some γ ∈ Γ and a point x 1 ∈ M , there exists a point s 1 ∈ S γ such that d(p, x 1 ) = d(p, s 1 ) + d(s 1 , x 1 ). However, it is possible that for a different point x 2 ∈ M , the point s 2 ∈ S γ such that the identity d(p, x 2 ) = d(p, s 2 ) + d(s 2 , x 2 ) holds is different from s 1 . Points in N γ that always use the same point in S γ to compute their distance to the rest of the space are especially relevant to our discussion.
In general, given a metric space M and a closed subset N , we say that a point p ∈ N is bound to s ∈ N in N if d(p, x) = d(p, s) + d(s, x) for every x ∈ M \ N . For example, in a threading space Th(A, B), it is not hard to check that a point p in a thread T γ (1, a) is bound to A in T γ (1, a) if and only if d(p, A) < d(p, B) and d(p, A) ≤ 1−a 2 (the analogous result for B holds as well). In the final section of the article we will deal with Lipschitz functions defined on a single thread and with image in metric spaces formed by attachment. We finish this section by defining two simple concepts and proving a result that will help us simplify this type of maps. Now, if T is a thread, M is a metric space, N is a closed subset of M , and F : T → M is a Lipschitz function, we say that an extended interval I is maximal with respect to F and N if F (I) ⊂ N and every extended interval J = [a ′ , b ′ ] T in T that contains I and such that F (J) ⊂ N is equal to I. We have the following straightforward result: Proposition 3.3. Let T be a thread of length l and width w. Let M be a metric space, let N be a closed subset of M , and let F : T → M be a Lipschitz function. If an extended interval I in T with extremes a, b ∈ T is maximal with respect to F and N , and there exists s ∈ N such that both F (a) and F (b) are bound to s in N , then the function F̃ : T → M defined by F̃ (x) = s for x ∈ I and F̃ (x) = F (x) for x ∈ T \ I is Lipschitz, with ‖F̃‖ Lip ≤ ‖F‖ Lip . Proof. Put K = ‖F‖ Lip . We will start by proving that (4) d(F̃ (0), F̃ (l)) ≤ Kw. If both 0 and l belong to the extended interval I then it follows trivially. Similarly, if 0 and l belong to T \ I then the inequality follows since F is K-Lipschitz. Hence, suppose first that 0 ∈ I and l ∈ T \ I. Then we necessarily have that a = 0, and so F (0) is bound to s in N . This implies that d(F̃ (0), F̃ (l)) = d(s, F (l)) ≤ d(F (0), F (l)) ≤ Kw. A similar argument shows that if 0 ∈ T \ I and l ∈ I the same inequality holds. Hence we conclude that equation (4) is verified. Next, we prove that for every x, y ∈ T with x < y we have (5) d(F̃ (x), F̃ (y)) ≤ K|y − x|. As before, we only need to check this for x, y ∈ T with x < y such that x ∈ T \ I and y ∈ I. By maximality of I, there exists t ∈ T with x ≤ t < y such that F (t) ∉ N . Additionally, since t is not in I but y does belong to the extended interval, one of the extremes a or b of I belongs to (t, y] T . We may suppose without loss of generality that x ≤ t < a ≤ y. Notice that d(F (t), s) ≤ d(F (t), F (a)) because F (a) is bound to s in N and F (t) ∉ N . Then we have: d(F̃ (x), F̃ (y)) = d(F (x), s) ≤ d(F (x), F (t)) + d(F (t), s) ≤ d(F (x), F (t)) + d(F (t), F (a)) ≤ K(t − x) + K(a − t) ≤ K|y − x|. This proves that equation (5) holds as well.
Using both equations (4) and (5) we can apply Proposition 2.1 to obtain that ‖F̃‖ Lip ≤ K.
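As a quick illustration of Proposition 3.3, the following minimal sketch (ours) collapses a map to the constant value s on an extended interval of the form [lo, hi] T ; the hypothesis that F (a) and F (b) are bound to s is what guarantees the Lipschitz constant does not grow:

```python
def collapse_on_interval(F, lo, hi, s):
    """Return F~ equal to s on [lo, hi] and to F elsewhere (cf. Proposition 3.3).
    For wrap-around extended intervals [0, p] u [q, l] the membership test
    would be x <= p or x >= q instead."""
    def F_tilde(x):
        return s if lo <= x <= hi else F(x)
    return F_tilde
```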
The process used to construct the skein metric space which fails to have any non-trivial separable Lipschitz retracts is to keep attaching threading spaces inductively, so that any two distinct points of the skein act as the anchors of a threading space contained in the skein 1 . As we are going to see in the final section, this construction presents its own technical difficulties. However, the main results of the first two sections will be very useful in this regard.
4. Construction of the skein metric spaces
The final metric space will be constructed using transfinite induction. Let us discuss this process in general for limit ordinal numbers: Let κ be a limit ordinal number. Suppose that {(M α , d α )} α<κ is a transfinite sequence of metric spaces that are increasing, in the sense that M α ⊂ M β if α < β and the restriction of d β to M α results in the metric d α . Then we may define the metric space (M κ , d κ ) where M κ = α<κ M α , and d κ is defined for any p, q ∈ M κ as d κ (p, q) = d α (p, q) where α < κ is the least ordinal number such that p, q ∈ M α . It is straightforward to check that the metric d κ is well defined and (M κ , d κ ) is indeed a metric space.
We will call (M κ , d κ ) the metric space generated by {(M α , d α )} α<κ , and as usual we may omit the mention of the metric d κ when referring to it if there is no room for ambiguity. If κ is an ordinal with uncountable cofinality (i.e., the supremum of any countable sequence of ordinals (α n ) n such that α n < κ for all n ∈ N is strictly smaller than κ), then the metric space M κ generated by {M α } α<κ is complete, provided each M α is complete for every α < κ. To see this, consider any Cauchy sequence (p n ) n in M κ . Each p n belongs to M αn for some ordinal α n < κ. Since κ has uncountable cofinality, the supremum α * = sup n (α n ) is strictly smaller than κ. Hence the sequence (p n ) n belongs to the complete metric space M α * , and therefore it is convergent in M α * to a point p * . The point p * belongs to M κ , and clearly (p n ) n converges to p * in M κ as well.
4.1. Construction of the skein metric spaces. We are going to construct by transfinite induction an increasing class of complete metric spaces {Sk(β)} β for every ordinal β, called the β-skein metric spaces. The complete metric space failing to have any non-trivial separable Lipschitz retract is the ω 1 -skein space Sk(ω 1 ) (or, more generally, any β-skein space such that the cofinality of β is uncountable).
Consider at the first step the 0-skein metric space M 0 = {A, B} formed by two points at distance 1/2, and put G 0 = {A, B}. Suppose we have defined increasingly the α-skein spaces {Sk(α)} α<β up to an ordinal β. If β is a limit ordinal, simply define Sk(β) as the completion of the metric space generated by {Sk(α)} α<β in the way described above, which contains isometrically the previous skein spaces Sk(α) for all α < β. Notice that if β has uncountable cofinality, it is not necessary to take the completion.
1 "skein: a length of yarn or thread collected together into the shape of a loose ring" (Cambridge dictionary. n.d.).
Suppose now that β = λ + 1 for an ordinal λ. For every p in the skein Sk(λ) and every q ∈ G λ = Sk(λ) \ ⋃ α<λ Sk(α) with p ≠ q and d(p, q) ≤ 1/2, we may consider the threading space Th(p, q) as defined in section 3. Take now the family of complete metric spaces N λ = {Th(p, q)} {p,q}∈Γ λ , where Γ λ = {{p, q} ⊂ Sk(λ) : p ∈ Sk(λ), q ∈ G λ , 0 < d(p, q) ≤ 1/2}, which we may take to be pairwise disjoint and disjoint with Sk(λ). For any {p, q} ∈ Γ λ , we have by definition of the threading space Th(p, q) that there is an isometry Φ {p,q} from the set of anchor points An {p,q} of Th(p, q) onto the set {p, q} in Sk(λ). Therefore, considering S λ = {An {p,q} } {p,q}∈Γ λ we can define Sk(β) as the attachment of Sk(λ) with N λ by S λ . The resulting metric space Sk(β) is the β-skein and it is a complete metric space containing isometrically the previous skein space Sk(λ). The induction process is finished and we have defined the β-skein metric space for every ordinal number β.
Intuitively, we may describe the previous process in the following way: If β is a limit ordinal, then the β-skein space is the completion (if necessary) of the union of all previous skein spaces. If β is the successor of an ordinal λ, then the β-skein is formed by attaching a threading space at every pair of points closer than 1/2 and such that at least one of them was newly introduced at the previous step λ.
For a subset S of a skein space Sk(β), we may define its (skein) order, written ord(S), as the least ordinal α ≤ β such that S ⊂ Sk(α). For a point p ∈ Sk(β), we write ord(p) = ord({p}). For any ordinal β, the (skein) generation of order β is the set G β = Sk(β) \ ⋃ α<β Sk(α). Figure 5 is a conceptual representation of a subset of the skein Sk(3), which contains 3 different generations (the gaps in the threads have been ignored for the sake of clarity). The distance between the points x and y in the figure is computed by d(x, y) = d(x, p) + d(p, q) + d(q, y).
Crucially, if β has uncountable cofinality, the corresponding generation G β is empty, and every point in the β-skein Sk(β) belongs to a previous generation. This means that, in such a skein Sk(β), every pair of points p and q such that d(p, q) ≤ 1/2 belong to a set Γ α where α is strictly smaller than β, and thus an isometric copy of the threading space Th(p, q) is contained in Sk(β). Moreover, in this case, the order of any separable subset of the skein space Sk(β) is strictly smaller than β.
We turn our attention now to the specific skein space Sk(ω 1 ). However, as we mentioned, the results of the rest of the article concerning the ω 1 -skein can also be written for any β-skein where β is an ordinal with uncountable cofinality.
Notice that for any two different pairs of different points (p 1 , q 1 ), (p 2 , q 2 ) ∈ Sk(ω 1 ) × Sk(ω 1 ) such that d(p 1 , q 1 ) = d(p 2 , q 2 ) ≤ 1/2, the threading spaces Th(p 1 , q 1 ) and Th(p 2 , q 2 ) are contained in Sk(ω 1 ) and are isometric. Moreover, for any γ ∈ ∆, each of these two threading spaces contains an isometric copy of the thread T γ (1, a), where a = d(p 1 , q 1 ). To distinguish the various copies of the same thread in Sk(ω 1 ) that arise due to this fact, we will denote by T γ (p, q) the thread T γ (1, d(p, q)) contained in the threading space Th(p, q) ⊂ Sk(ω 1 ). Finally, note also that for a given successor ordinal number β + 1 and any (p, q) ∈ Γ β and γ ∈ ∆, it holds that T γ (p, q) \ {p, q} is open in the skein Sk(β + 1). Hence, we conclude that any open subset of a thread T γ (p, q) ⊂ Sk(β + 1) with (p, q) ∈ Γ β and γ ∈ ∆ which does not contain the extreme points {p, q} is also open in Sk(β + 1).
The skein space Sk(ω 1 ) contains separable subsets with different structures, all of which fail to be Lipschitz retracts of Sk(ω 1 ). We are going to prove some results that let us reduce the kind of separable subsets we have to consider to a smaller class. In particular, first we are going to show that it is enough to prove that separable subsets without isolated points are not Lipschitz retracts. Secondly, we will introduce some concepts and prove some results to deal with points in limit ordinal generations. We structure these two topics in two different subsections: 4.2. First reduction: subspaces with isolated points. This first reduction is relatively straightforward to see. It is based on two quick observations about the skein space Sk(ω 1 ) and about threads with small gaps. The first observation is a general fact about threads which we have already stated and proven in remark 2.9. The second one we present in the following simple lemma: Lemma 4.1. Let p, q ∈ Sk(ω 1 ) be two different points. There exists a finite sequence {x k } n k=0 ⊂ Sk(ω 1 ) with x 0 = p and x n = q such that d(x k , x k+1 ) ≤ 1/2 for all 0 ≤ k ≤ n − 1.
Proof. We prove the result by transfinite induction on ord({p, q}) < ω 1 . If ord({p, q}) = 0, then {p, q} = {A, B} and the result follows directly. Suppose ord({p, q}) = β < ω 1 , and suppose the result is true for any set of two points with order α < β. Consider β p = ord(p). If β p is a limit ordinal, then p is the limit of a sequence in α<βp Sk(α), and in particular we can choose x p with ord(x p ) < β p such that d(x p , p) ≤ 1/2. If β p = λ + 1 for a countable ordinal λ, then by construction of Sk(ω 1 ) we have that p belongs to the threading space Th(x p , y p ) for some x p , y p ∈ Sk(λ). Since p is in a thread of length 1 with extremes x p , y p , the distance from p to one of these extremes is less than or equal to 1/2. Assume without loss of generality that d(x p , p) ≤ 1/2. We conclude that in any case there exists x p with ord(x p ) < ord(p) such that d(x p , p) ≤ 1/2, and arguing in the same way there exists x q with ord(x q ) < ord(q) such that d(x q , q) ≤ 1/2.
The points x p , x q verify that ord({x p , x q }) < β, so by inductive hypothesis there exists a sequence {x k } n k=0 ⊂ Sk(ω 1 ) with x 0 = x p and x n = x q such that d(x k , x k+1 ) ≤ 1/2 for all 0 ≤ k ≤ n − 1. The result follows now by adding the points p and q at the beginning and at the end of the sequence respectively.
Let us mention that this previous lemma can be improved so that the distance between the points in the sequence is less than 1/4, since this is the biggest possible gap in the threads we are considering. However, we do not consider this improvement to be relevant enough and prefer to prove it with a simpler and shorter argument, since we will only need to use the lemma as it is stated now. Now we can prove the first reduction result: Proposition 4.2. Let S be a closed subset of Sk(ω 1 ) with at least two different points. If there exists p ∈ S such that p is isolated in S, then S is not a Lipschitz retract of Sk(ω 1 ).
Proof. Put ε = d(p, S \ {p}), which is positive since p is isolated in S. Suppose there exists a Lipschitz retraction R : Sk(ω 1 ) → S, and put K = ‖R‖ Lip . Consider any point q ∈ S different from p. By Lemma 4.1 there exists a finite sequence {x k } n k=0 ⊂ Sk(ω 1 ) such that x 0 = p and x n = q, and such that d(x k , x k+1 ) ≤ 1/2 for all 0 ≤ k ≤ n − 1. By construction of the skein space Sk(ω 1 ), there exists an isometric copy of the threading space Th(x k , x k+1 ) in Sk(ω 1 ), so we may assume that these threading spaces are contained in Sk(ω 1 ). For every 0 ≤ k ≤ n − 1, the threading space Th(x k , x k+1 ) itself contains the threads T γ (x k , x k+1 ) for every γ ∈ ∆. Choose γ * = (γ * i ) i∈N ∈ ∆ such that γ * i < ε/K for every i ∈ N, and write T * k to denote the thread T γ * (x k , x k+1 ) contained in the threading space Th(x k , x k+1 ) with extremes x k and x k+1 for every 0 ≤ k ≤ n − 1. Define k 0 = min{0 ≤ k ≤ n − 1 : there exists y ∈ T * k with R(y) ∈ S \ {p}}, which exists since q ∈ T * n−1 and R(q) = q ∈ S \ {p}. By definition of k 0 , there exists a point y 0 ∈ T * k 0 such that R(y 0 ) ∈ S \ {p}. If k 0 = 0, the point y 0 cannot be the lower extreme x 0 = p of the thread T * 0 , since R(p) = p. If k 0 ≠ 0, again we have that y 0 cannot be x k 0 because x k 0 is also in the previous thread T * k 0 −1 as its higher extreme point, which would contradict the minimality of k 0 . We conclude then that R(x k 0 ) = p. Since the gaps of T * k 0 are given by the sequence γ * , they are all smaller than ε/K. We can then apply Remark 2.9 (with S 1 = {p} and S 2 = S \ {p}) to reach a contradiction with the existence of the retraction R.
4.3. Second reduction: points in limit ordinal generations. In the construction of the skein space Sk(ω 1 ), we have a better understanding of the points belonging to successor ordinal generations than we do of points in limit ordinal generations. Indeed, for a point p of order α + 1 we know that there exist two points x and y, with at least one of them in generation α, such that p belongs to a thread T γ for a sequence γ ∈ ∆ with extreme points x and y. However, a point in a limit ordinal generation can initially only be described as a limit of a sequence of points in previous generations, and it does not belong to a thread or to any other defined structure. This subsection is dedicated to finding ways to describe these limit points in order to compensate for the comparatively low a priori knowledge we have of them. The main result of this subsection is the following: Proposition 4.3. In the skein space Sk(ω 1 ), for every ordinal number β < ω 1 , the β-skein Sk(β) is a 1-Lipschitz retract of the ball B(Sk(β), 1/8) • .
In fact, as we are going to see, a stronger result is verified, which is helpful in the inductive argument we use to prove it.
Let us introduce some useful concepts: For an ordinal number β < ω 1 and a point p ∈ Sk(ω 1 ), we may consider the set P β (p) = {x ∈ Sk(β) : d(p, x) = d(p, Sk(β))}. Since the β-skein Sk(β) is not compact when β > 0, we cannot easily ensure that P β (p) is nonempty in every case. In the case where P β (p) is nonempty for a point p ∈ Sk(ω 1 ) and an ordinal β < ω 1 , the members of P β (p) will be called the ancestors of p of order β.
If a point p ∈ Sk(ω 1 ) has order β + 1 for some ordinal β < ω 1 , then it belongs to a threading space Th(x, y) for a pair of points (x, y) with ord{x, y} = β, and it is straightforward to see that the set of ancestors of p of order β is nonempty and is contained in {x, y}. Since every thread in Th(x, y) has length 1, if d(p, Sk(β)) < 1/2, then P β (p) is unique and is equal to either x or y. The other point will be called the pseudo-ancestor of p of order β, and will be denoted by Q β (p). In this way, every point p in a successor ordinal generation G β+1 such that the distance d(p, Sk(β)) is smaller than 1/2 will belong to the threading space Th(P β (p), Q β (p)). Notice that this concept is only defined for points in successor ordinal generations and with respect to the preceding ordinal.
For each ordinal number β < ω 1 , we say that a subset S of Sk(ω 1 ) containing Sk(β) is β-stable if for every point p ∈ S there exists an ancestor P β (p) and it is unique, and moreover, the resulting well-defined map P β : S → Sk(β) is a 1-Lipschitz retraction. Hence, the main result of this subsection will be proven if we show that B(Sk(β), 1/8) • is β-stable for all β < ω 1 .
We prove the following even stronger result: Proposition 4.4. Let β < ω 1 be an ordinal number. If two points p and q belong to the ball B(Sk(β), 1/8) • , then the ancestors P β (p) and P β (q) exist and are unique.
Proof. Put α = ord{p, q}. We are going to prove the result by induction on α. If α is smaller than β, then both p and q belong to the skein Sk(β) and the result follows trivially. Hence, we will start the induction assuming α = β + 1. Let us divide the proof into three parts: the base case, the successor ordinal case, and the limit ordinal case. The base case is in fact the most technical part of the proof: 1.-The base case: α = β + 1. Suppose that α = β + 1. Since ord{p, q} = β + 1, at least one of p and q is in generation G β+1 . Assume without loss of generality that p belongs to generation G β+1 . As we discussed earlier, since the distance from p to Sk(β) is less than 1/8 and in particular less than 1/2, we have that the ancestor of order β of p, P β (p), exists and is unique, and p is in the threading space anchored at its ancestor and pseudo-ancestor of order β, denoted by Th(P β (p), Q β (p)). Moreover, since d(p, P β (p)) < 1/4, we have that p is bound to P β (p) in Sk(ω 1 ) \ Sk(β) (we briefly discussed this when introducing the concept of boundness). In other words, we have that the distance from p to any point x ∈ Sk(β) is computed by (6) d(p, x) = d(p, P β (p)) + d(P β (p), x). Now, if the point q is in the skein Sk(β), then q is its own ancestor of order β, and the result follows directly from the previous identity. Suppose then that q is also in generation G β+1 . By the same discussion as above, q belongs to the threading space Th(P β (q), Q β (q)), and q verifies the corresponding identity to (6). There are two possibilities: either both threading spaces Th(P β (p), Q β (p)) and Th(P β (q), Q β (q)) are the same, or p and q belong to different threading spaces.
If p and q belong to two different threading spaces, then the result follows from equation (6) (applied to both p and q) and the construction of the skein Sk(β + 1). Otherwise, if both threading spaces Th(P β (p), Q β (p)) and Th(P β (q), Q β (q)) are the same, we may assume that the pseudo-ancestor of p, Q β (p), and the ancestor of q, P β (q), are the same point (otherwise P β (p) = P β (q) and there is nothing left to prove). Now, on the one hand we have that d(p, P β (p)) < 1/8 and d(q, P β (q)) < 1/8 by hypothesis; and on the other hand the width of the threading spaces in the skein spaces we defined is less than 1/2, so d(P β (p), P β (q)) ≤ 1/2. Hence, we necessarily have that d(p, q) = d(p, P β (p)) + d(P β (p), P β (q)) + d(P β (q), q), from which the result follows, whether p and q belong to the same thread in the threading space Th(P β (p), P β (q)) or not.
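The necessity of the displayed identity can be made quantitative by a rough route comparison; this is only a back-of-the-envelope verification under the bounds just collected, in the worst configuration where p and q sit within 1/8 of the two opposite anchors of the same thread of length 1 (in the other configurations the identity is immediate from the construction):
\[
 \underbrace{d(p, P_\beta(p)) + d(P_\beta(p), P_\beta(q)) + d(P_\beta(q), q)}_{\text{route through the anchors}}
 \;<\; \tfrac18 + \tfrac12 + \tfrac18 \;=\; \tfrac34 \;=\; 1 - \tfrac18 - \tfrac18
 \;\le\; \text{any route staying inside the thread},
\]
so the geodesic from p to q must leave the thread through the anchors, which is exactly the displayed identity.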
2.-The successor ordinal case: α = η + 1 for η > β Suppose now that α = η + 1 for some countable ordinal η > β, and that the result holds for every pair of points of order strictly less than η + 1. Let us prove first that both ancestors P β (p) and P β (q) exist and are unique and that the ancestor operation commutes for p and q at order η, that is: P β (P η (p)) = P β (p) and P β (P η (q)) = P β (q). Since the argument is exactly the same for both points, we will only prove it for p, and again we may assume without loss of generality that p is in the generation G η+1 . Since the distance from p to Sk(β) is less than 1/8, we have as well that d(p, Sk(η)) < 1/8, since Sk(β) ⊂ Sk(η). Therefore, by the first step of the induction process we have that the ancestor of p of order η is unique and (7) d(p, x) = d(p, P η (p)) + d(P η (p), x) for all x ∈ Sk(η).
Moreover, now P η (p) ∈ Sk(η), and with the previous equation we can deduce that P η (p) belongs to the ball B(Sk(β), 1/8) • as well, so by induction again we have that P β (P η (p)) is unique, and d(P η (p), x) = d(P η (p), P β (P η (p))) + d(P β (P η (p)), x) for all x ∈ Sk(β). These two identities result in the following equation: d(p, x) = d(p, P η (p)) + d(P η (p), P β (P η (p))) + d(P β (P η (p)), x) for all x ∈ Sk(β). Applying equation (7) for P β (P η (p)) ∈ Sk(β) ⊂ Sk(η), we can put the first two terms of the right hand side of the previous equation as d(p, P η (p)) + d(P η (p), P β (P η (p))) = d(p, P β (P η (p))), and finally obtain: (8) d(p, x) = d(p, P β (P η (p))) + d(P β (P η (p)), x) for all x ∈ Sk(β). Now, from equation (8) it is easy to prove that P β (p) is unique and P β (P η (p)) = P β (p).
To finish with this case, suppose that P β (p) ≠ P β (q). Then P η (p) ≠ P η (q) by what we have just proven. We can now apply the inductive hypothesis several times and deduce that: d(p, q) = d(p, P η (p)) + d(P η (p), P η (q)) + d(P η (q), q) = d(p, P η (p)) + d(P η (p), P β (p)) + d(P β (p), P β (q)) + d(P β (q), P η (q)) + d(P η (q), q) = d(p, P β (p)) + d(P β (p), P β (q)) + d(P β (q), q). This finishes the successor ordinal case.
3.-The limit ordinal case
Suppose finally that α is a limit ordinal. As in the previous case, we start by proving that P β (p) and P β (q) exist and are unique. Similarly, we assume that ord(p) = α, and we only prove it for p. Consider a sequence {p n } n∈N of points in Sk(α) convergent to p and such that ord(p n ) < α for all n ∈ N. Moreover, since the ball B(Sk(β), 1/8) • is an open set of Sk(ω 1 ) which contains p, we may suppose that d(p n , Sk(β)) < 1/8 as well for all n ∈ N. Therefore, by inductive hypothesis, P β (p n ) is unique for all n ∈ N, and (9) d(p n , x) = d(p n , P β (p n )) + d(P β (p n ), x), for all x ∈ Sk(β) and all n ∈ N.
We are going to prove first that the sequence {P β (p n )} n∈N is convergent. Indeed, since ord(p n ) < α for all n ∈ N, for all n, m ∈ N such that P β (p n ) ≠ P β (p m ) we have that d(p n , p m ) = d(p n , P β (p n )) + d(P β (p n ), P β (p m )) + d(P β (p m ), p m ). In particular, d(P β (p n ), P β (p m )) ≤ d(p n , p m ) for all n, m ∈ N, whether P β (p n ) = P β (p m ) or not. Since the sequence {p n } n∈N converges, it is a Cauchy sequence, which implies that the sequence {P β (p n )} n∈N is a Cauchy sequence as well, and thus convergent in the complete metric space Sk(β). Denote the limit of this sequence by P * , which belongs to the set Sk(β).
Taking the limit when n tends to infinity in equation (9), we obtain that d(p, x) = d(p, P * ) + d(P * , x) for all x ∈ Sk(β).
Similarly to the successor ordinal case, from this equation it follows that P β (p) = P * and it is unique. Now, suppose that P β (p) ≠ P β (q), and consider two sequences {p n } n∈N and {q n } n∈N in B(Sk(β), 1/8) • converging to p and q respectively, and such that ord{p n , q n } < α. By the previous argument, we have that: d(p, q) = lim n d(p n , q n ) = lim n [d(p n , P β (p n )) + d(P β (p n ), P β (q n )) + d(P β (q n ), q n )] = d(p, P β (p)) + d(P β (p), P β (q)) + d(P β (q), q), which concludes the proof.
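For later reference, the stronger statement that the three cases above actually establish — and that the argument invokes as its inductive hypothesis — can be recorded as the following ancestor identity:
\[
 p, q \in B(\mathrm{Sk}(\beta), 1/8)^{\circ} \text{ and } P_\beta(p) \neq P_\beta(q)
 \;\Longrightarrow\;
 d(p, q) = d\bigl(p, P_\beta(p)\bigr) + d\bigl(P_\beta(p), P_\beta(q)\bigr) + d\bigl(P_\beta(q), q\bigr).
\]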
In Figure 6 we illustrate Proposition 4.4 conceptually. In this diagram we portray again a subset of the skein Sk(3), and the ball B(Sk(1), 1/8) (colored in blue) is partitioned into 5 subsets {B i } i=1,...,5 such that every point in the same B i has the same ancestor of order 1. The ancestor map clearly defines in this case a 1-Lipschitz retraction onto Sk(1).
Finally, with this proposition, the second reduction result follows directly: Proof of Corollary 4.3. It follows directly from Proposition 4.4.
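The one-line estimate behind this deduction, extracted from the ancestor identity above (the case P β (p) = P β (q) being trivial), is:
\[
 d\bigl(P_\beta(p), P_\beta(q)\bigr) \;\le\; d\bigl(p, P_\beta(p)\bigr) + d\bigl(P_\beta(p), P_\beta(q)\bigr) + d\bigl(P_\beta(q), q\bigr) \;=\; d(p, q),
\]
which is precisely the 1-Lipschitz property required of the ancestor map in the definition of β-stability.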
In the proof of the main theorem, given a separable subset, we will consider a bigger separable subset that is in some sense closed for the operation of taking ancestors closer than 1/8. Specifically, we have the following lemma: Lemma 4.5. Given a separable subset S of the metric space Sk(ω 1 ), there exists a separable set S̃ ⊂ Sk(ω 1 ) containing S such that: for every point x ∈ S̃ and every ordinal β < ω 1 such that d(x, Sk(β)) < 1/8, the unique ancestor of order β of x belongs to S̃.
Proof. Given a point x ∈ Sk(ω 1 ), put β 0 (x) = min{β < ω 1 : d(x, Sk(β)) < 1/8}. Then, we can inductively define a decreasing sequence of ordinal numbers β n (x) = min{β < ω 1 : d(P β n−1 (x) ◦ · · · ◦ P β 0 (x) (x), Sk(β)) < 1/8} for each n ∈ N. Since β n+1 (x) ≤ β n (x) for every n ∈ N and the ordinal numbers are well ordered, there must exist n 0 (x) ∈ N such that β n (x) = β n 0 (x) (x) for all n ≥ n 0 (x). Now, given a separable subset S of the metric space Sk(ω 1 ), take D a countable and dense subset of S. Consider the set D̃ obtained by closing D under the operation of taking the ancestors P β (x) of points x with d(x, Sk(β)) < 1/8 (iterating this operation countably many times), which is countable, contains D, and verifies that for any point x ∈ D̃ and any ordinal β with d(x, Sk(β)) < 1/8, the ancestor P β (x) belongs to D̃ as well.
Finally, let S̃ be the closure of D̃. The set S̃ is separable and it contains S. For any point x ∈ S̃ and any ordinal β < ω 1 such that x belongs to the ball B(Sk(β), 1/8) • , we have that x is the limit of a sequence {x n } n∈N of points in D̃ which are also in B(Sk(β), 1/8) • . As we argued in the proof of Proposition 4.4, the sequence {P β (x n )} n∈N , which is contained in D̃, converges to P β (x). The statement of the lemma now follows directly.
We will use as well the fact that every separable subset of the skein Sk(ω 1 ) is contained in the closure of the union of countably many threads: Lemma 4.6. Let S be a separable subset of the skein Sk(ω 1 ). Then there exists a countable family of pairs {(x n , y n )} n∈N in Sk(ω 1 ) × Sk(ω 1 ) and a countable family of sequences {γ n } n∈N in ∆ such that the following property is verified: For every point x ∈ S in a successor ordinal generation there exists a natural number n x such that x belongs to the interior of the thread T γ nx (x nx , y nx ) ⊂ Sk(ω 1 ).
Proof. Suppose by contradiction that the result fails. Since S is separable, there are only countably many ordinals α < ω 1 such that the intersection of S with generation G α is nonempty. Hence, there must exist one successor ordinal α 0 + 1 such that for every countable collection of pairs {(x n , y n )} n∈N in Γ α 0 and any countable family of sequences {γ n } n∈N in ∆, there exists a point in S ∩ G α 0 +1 that lies outside of the interior of the thread T γ n (x n , y n ) ⊂ Sk(α 0 + 1) for all n ∈ N.
Since every point in S ∩ G α 0 +1 belongs to the interior of a thread anchored at a pair of points in Γ α 0 , by a standard transfinite induction argument we may find an uncountable set of different points {p i } i∈I in S ∩ G α 0 +1 and an uncountable family of different threads {T γ i (x i , y i )} i∈I with {x i , y i } ∈ Γ α 0 and γ i ∈ ∆ for all i ∈ I, such that p i belongs to the interior of the thread T γ i (x i , y i ) for all i ∈ I. However, this implies that the family {T γ i (x i , y i ) • ∩ S} i∈I is an uncountable collection of nonempty and open subsets of S which are pairwise disjoint, which contradicts the separability of S.
4.4. Proving the general case. We proceed now to prove the main result of this article: Proof of Theorem A. Consider the complete skein Sk(ω 1 ). We will prove that it does not contain any non-singleton separable Lipschitz retracts.
We will proceed by contradiction. Let S be a separable subset of Sk(ω 1 ) containing at least two points. We may assume that S has no isolated points by Proposition 4.2. Suppose there exists a Lipschitz retraction R : Sk(ω 1 ) → S onto S, and put K = ‖R‖ Lip . We are going to find a specific thread T * in Sk(ω 1 ) such that, when restricting the map R to T * , the resulting K-Lipschitz function can be transformed to yield a contradiction. Because of the length of the proof, we divide it into two parts: The first part describes the process to define the problematic thread T * , while the second part deals with the map R |T * , and how to transform it to arrive at a contradiction. We will also highlight important facts throughout the proof to help in its readability.
1.-Defining the conflicting thread T * We start by finding two points to anchor the thread T * : Fact 1. There exist two points p and q in S closer than 1/2, and such that there exists a successor ordinal β 0 with p, q ∈ B(Sk(β 0 ), 1/8) • which verifies that P β 0 (p) ≠ P β 0 (q).
In this case, if the ordinal α 0 is a successor ordinal, putting β 0 = α 0 we are done, since by definition of α 0 both P α 0 (p) and P α 0 (q) belong to generation G α 0 .
Suppose then that α 0 is a limit ordinal. Since p and q belong to the ball B(Sk(α 0 ), 1/8) • , the ordinal α 1 is less than α 0 . Both P α 1 (p) and P α 1 (q) are well defined and unique. Moreover, by minimality, α 1 must be a successor ordinal, and at least one of P α 1 (p) or P α 1 (q) must belong to generation G α 1 . Therefore we can put β 0 = α 1 and Fact 1 is proven for Case 1.
Case 2: There exists a point A ∈ S ∩ Sk(α 0 ) such that for all x ∈ B(A, 1/8) • ∩ S we have that P α 0 (x) = A.
Notice that the above statement follows from negating the assumption of Case 1. In this case define the ordinal η 0 = min{η < ω 1 : P η (x) ≠ A for some x ∈ B(A, 1/8) • ∩ S}. Such an ordinal number must exist since A is not isolated in S by assumption. Moreover, η 0 must be a successor ordinal, since every point in a limit ordinal generation is the limit of the sequence given by its previous (existing) ancestors. Take now any point p ∈ B(A, 1/8) • ∩ S such that P η 0 (p) ≠ A, and set q = A. Putting β 0 = η 0 , we have that both p and q belong to B(Sk(β 0 ), 1/8) • and P β 0 (p) ≠ P β 0 (q) = q. Moreover, the ancestor P β 0 (p) belongs to generation G β 0 by minimality.
With this in mind, we can apply Proposition 2.11 to the thread T γ 0 (A 0 , B 0 ) and the point P β 0 (p), to find a compact subset C 0 ⊂ T γ 0 (A 0 , B 0 ) with diameter less than d(P β 0 (p), P β 0 (q)) such that P β 0 (p) ∈ C 0 and C 0 is open and closed in Sk(β 0 ). Put C 1 = Sk(β 0 ) \ C 0 . Then the point P β 0 (q) belongs to C 1 , and since C 0 is compact and disjoint from the closed set C 1 , the distance d(C 0 , C 1 ) is strictly positive. Put d 0 = d(C 0 , C 1 ) > 0. Figure 7 summarizes one possible layout of the elements we have defined so far in the skein Sk(β 0 ). Now, the separation between the sets C 0 and C 1 allows us to use Remark 2.9 to obtain the following fact: Fact 2. If T = [0, l] is a thread whose gaps are all smaller than d 0 /2K, there cannot be any 2K-Lipschitz map F : T → Sk(β 0 ) such that F (0) = P β 0 (p) ∈ C 0 and F (l) = P β 0 (q) ∈ C 1 .
Next, we define the subset S̃ ⊂ Sk(ω 1 ) using Lemma 4.5 such that the following fact is verified: Fact 3. The set S̃ ⊂ Sk(ω 1 ) is separable, it contains the set S, and for any point x ∈ S̃ and any ordinal number β such that x ∈ B(Sk(β), 1/8), the unique ancestor P β (x) belongs to S̃.
To continue with the proof, since S̃ is separable, by Lemma 4.6 we can find a countable family of sequences {γ n } n∈N in ∆, and a countable set of pairs {(x n , y n )} n∈N in Sk(ω 1 ) × Sk(ω 1 ) such that, denoting by T n the thread T γ n (x n , y n ) ⊂ Sk(ω 1 ) for each n ∈ N, the countable family of threads T 0 = {T n } n∈N verifies that any point x ∈ S̃ belonging to a successor ordinal generation is contained in the interior of at least one thread T n x for some n x ∈ N. For every n ∈ N, the thread T n = T γ n (x n , y n ) ∈ T 0 has length 1, and so the open subsets given by [x n , x n + 1/8) T n and (y n − 1/8, y n ] T n are separable subsets that do not intersect. Define for every n ∈ N two countable sets D n 1 and D n 2 such that D n 1 is dense in [x n , x n + 1/8) T n and D n 2 is dense in (y n − 1/8, y n ] T n . Finally, we can define the countable family of threads given by T = {[x, y] T n : n ∈ N, (x, y) ∈ D n 1 × D n 2 }. Notice that each thread in T 0 has Lebesgue measure of at least 1/2. Therefore, the measure of the threads in T is bounded below by 1/4. We can apply now Theorem 2.8 with ε = 1/4 and 2K ≥ 1 to find a sequence γ * ∈ ∆ with the following property: Fact 4. There exists a sequence γ * = {γ * k } k∈N ∈ ∆ such that: For any thread S of length l S whose sequence of gaps {C S k } k∈N in decreasing length order verifies length(C S k ) < γ * k for all k ∈ N, it holds that: For every K ′ ≤ 2K, if there exists a K ′ -Lipschitz function F : S → [x, y] T n such that F (0) = x and F (l S ) = y, where n ∈ N and (x, y) ∈ D n 1 × D n 2 , then the thread S has a gap of length greater than or equal to d(x, y)/K ′ .
Moreover, without loss of generality we may choose γ * such that γ * k < min{d 0 , 1/8}/(2K) for all k ∈ N. Since the sequence γ * belongs to ∆, the associated thread T γ * (p, q) belongs to the threading space Th(p, q), and is therefore a subset of Sk(ω 1 ). Put T * = T γ * (p, q). This is the problematic thread we will use to reach a contradiction. Recall that the length of the gaps of the thread T * is given by the sequence γ * ∈ ∆. Hence, we have the following result by the choice of γ * and Fact 2: Fact 5. There does not exist any 2K-Lipschitz map F : T * → Sk(β 0 ) such that F (p) = P β 0 (p) and F (q) = P β 0 (q).
In the next section we will find a function in contradiction with this last fact. The retraction R from Sk(ω 1 ) onto S can be restricted to T * to obtain a K-Lipschitz map R |T * : T * → S such that R(p) = p and R(q) = q. This restriction will be the starting point in the process to define the contradicting function.
2.-Transforming the map R |T *
We can only ensure that the image of the map R |T * is contained in S, and so the order of R |T * (T * ) is at most the order of S, but it can still be higher than β 0 . We are going to transform inductively the map R |T * to reduce the order of its image until we arrive at β 0 , where we will reach a contradiction.
Before proving this claim let us discuss its implications: The map R |T * verifies the general hypothesis of the claim with β = ord{p, q}. Notice as well that if a function F verifies either of the conditions (A), (B) or (C), then the resulting map F̃ for any valid ε > 0 verifies again the general conditions of the claim. In all three cases, the order of the image of the map F̃ produced is an ordinal strictly lower than the order of the image of F . This means that, putting F 0 = R |T * , we can define inductively Lipschitz maps {F n } n∈N such that F n+1 = F̃ n for every n ∈ N. We may choose any valid ε > 0 at the steps that require it. There must exist n 0 ∈ N such that F n = F n 0 for all n ≥ n 0 . Indeed, otherwise the sequence {ord(F n (T * ))} n∈N is an infinite strictly decreasing sequence of ordinal numbers, which are well ordered, resulting in a contradiction. Therefore, the map F n 0 : T * → S̃ is a Lipschitz map with ‖F n 0 ‖ Lip < 2K such that F n 0 (p) = P β (p) and F n 0 (q) = P β (q) for some β ≥ β 0 , and it verifies neither (A), (B) nor (C). Since it does not verify (A), we have that ord(F n 0 (T * )) is a successor ordinal α + 1. This means that, in order to fail (B), the ordinal α + 1 must equal β. In turn, since F n 0 does not meet the requirements of (C) either, we conclude that β (that is, the ordinal such that F n 0 (p) = P β (p), F n 0 (q) = P β (q), and the order of F n 0 (T * )) equals β 0 .
In conclusion, F n 0 : T * → S̃ ∩ Sk(β 0 ) is a Lipschitz map with ‖F n 0 ‖ Lip < 2K from the thread T * = T γ * (p, q) into Sk(β 0 ) such that F n 0 (p) = P β 0 (p) and F n 0 (q) = P β 0 (q). This contradicts Fact 5 and yields the desired contradiction. It only remains to prove Claim 1.
Proof of Claim 1. We prove each statement separately: Proof of (A). Put α = ord(F (T * )). Then α ≥ β. Since T * is compact, the image F (T * ) is also compact in Sk(ω 1 ). Therefore, there exists a point x 0 ∈ T * such that F (x 0 ) belongs to the limit generation G α in Sk(ω 1 ). Put r 0 = min{d(F (x 0 ), Sk(β 0 )), 1/8}. Since β 0 is a successor ordinal, the number r 0 is strictly positive. Now, choose a finite set of points x 1 , . . . , x n ∈ T * together with successor ordinals α 1 , . . . , α n , each strictly smaller than the order of F (T * ), such that, putting β̃ = max{α i : i = 1, . . . , n}, we have that β̃ < ord(F (T * )) and d(F (x), Sk(β̃)) < r 0 for all x ∈ T * . This implies that β̃ is greater than β 0 . Applying Corollary 4.3, since r 0 ≤ 1/8, we have that the map F̃ = P β̃ ◦ F is a Lipschitz map with ‖F̃ ‖ Lip ≤ ‖F ‖ Lip . It is well defined since the ancestor of order β̃ of each point in F (T * ) is unique, and the image of any point x ∈ T * belongs to the set S̃ again, since d(F (x), Sk(β̃)) < r 0 for all x ∈ T * (see Fact 3). We have then that F̃ (p) = P β̃ (P β (p)) = P β (p), and similarly F̃ (q) = P β (q). Finally, the order of F̃ (T * ) is at most β̃ as well, so it is verified that ord(F̃ (T * )) < ord(F (T * )).
Proof of (B). Put K = ‖F ‖ Lip . Suppose that the order of F (T * ) is α + 1, and that β < α + 1. Since the image of F is in S̃ and α + 1 is a successor ordinal, there exists a subsequence {n k } k∈N such that F (T * ) ∩ G α+1 = F (T * ) \ Sk(α) is contained in the union of threads ∪ k∈N T n k . Informally, the "problematic" part of F (T * ) is contained in this countable set of threads (without the extreme points, since these always belong to a lower generation), which is a subfamily of the set T 0 we have considered in the definition of T * . Hence, for any t ∈ T * such that F (t) ∈ T n k(t) for some n k(t) ∈ N, we may find an extended interval I t in T * containing t such that I t is maximal for F and T n k(t) . The extended interval I t is actually of the form [a t , b t ] T * with p ≤ a t < b t ≤ q, since if I t contained both extremes p and q of T * , then necessarily {F (p), F (q)} = {P β (p), P β (q)} ∈ Γ α , so α = β, and thus I t = [p, q] T * = T * .
With this idea, since F (T * ) is separable, we can define a countable family {[a i , b i ] T * } i∈N of such maximal extended intervals, with F ([a i , b i ] T * ) contained in the thread T n k(i) for all i ∈ N, and such that every point t ∈ T * whose image F (t) lies in G α+1 belongs to some interval of the family. To simplify the notation, we abuse it and write n k(i) = i. Therefore, we will write that F ([a i , b i ] T * ) is contained in the thread T i . Recall that the thread T i ∈ T 0 belongs to the threading space Th(x i , y i ) for every i ∈ N. Again informally, we have identified a countable family of maximal intervals in T * that contain all the points whose image we need to change to prove (B). In the following Fact, we "correct" the image of this countable family of maximal intervals.
Fact 6. For every i ∈ N, there exists a Lipschitz function F i : T * → S̃ with ‖F i ‖ Lip ≤ K + ε which coincides with F outside of [a i , b i ] T * and whose image F i ([a i , b i ] T * ) avoids the interior of the thread T i . Proof. Fix i ∈ N. Since the order of F (T * ) is α + 1, we may work directly on the skein Sk(α + 1). Here, the point F (a i ), which belongs to the thread T i , is bound to either x i or y i in T i . To see this, notice that, since there are no gaps in T * of length greater than (2K) −1 /8, by maximality of [a i , b i ] T * for F and T γ i (x i , y i ), we have that the distance from F (a i ) to Sk(α + 1) \ T i • is smaller than 1/8. Hence, by construction of the successor ordinal skein Sk(α + 1), the distance from F (a i ) to one of the two extremes of the thread T i is also smaller than 1/8, which implies that F (a i ) is bound to one of these extremes. Similarly, F (b i ) is bound to either x i or y i in T i . Suppose without loss of generality that F (a i ) is bound to x i .
There are two possibilities: either F (b i ) is bound to x i as well, or F (b i ) is bound to the other extreme point y i . If F (b i ) is bound to x i , then we can apply Proposition 3.3 and obtain F i with the desired properties.
In Figure 8 we observe this first possibility, and the resulting map F i according to Proposition 3.3.
Suppose now that F (b i ) is bound to y i in T i . We are going to show that there is a gap C i in [a i , b i ] T * with length greater than d(x i , y i )/(K + ε). Indeed, suppose by contradiction there is no such gap.
We have that F (a i ) belongs to the interval [x i , x i + 1/8) T i , while F (b i ) belongs to (y i − 1/8, y i ] T i . Recall the definition (prior to Fact 4) of the dense and countable subsets D i 1 ⊂ [x i , x i + 1/8) T i and D i 2 ⊂ (y i − 1/8, y i ] T i in T i , which were used to define the sequence of gaps of the thread T * . Considering [a i , b i ] T * as a thread, and restricting F to this thread, we obtain by Proposition 2.12 that there exist two points a ′ i , b ′ i ∈ [a i , b i ] T * with a ′ i < b ′ i and two points (x ′ i , y ′ i ) ∈ D i 1 × D i 2 such that F (a ′ i ) = x ′ i and F (b ′ i ) = y ′ i . Notice that since the length of T i is 1, and the points x ′ i and y ′ i belong to [x i , x i + 1/8) T i and (y i − 1/8, y i ] T i respectively, the distance d(x ′ i , y ′ i ) is greater than d(x i , y i ).
Finally, since we are assuming that there is no gap in [a i , b i ] T * with length greater than d(x i , y i )/(K + ε), we can apply Proposition 2.4 and assume that F has its image contained in the thread [x ′ i , y ′ i ] T i , which belongs to the family T we have used to define γ * . Since the thread [a ′ i , b ′ i ] T * is a subinterval of T * , its decreasing sequence of gaps {C i n } n∈N also verifies that length(C i n ) < γ * n for all n ∈ N. Hence, the existence of the function F whose Lipschitz constant does not exceed K + ε < 2K, implies by Fact 4 that there is a gap C i n 0 in [a ′ i , b ′ i ] T * such that length(C i n 0 ) ≥ d(x ′ i , y ′ i )/(K + ε). The fact that the gap C i n 0 is also a gap of [a i , b i ] T * and that d(x ′ i , y ′ i ) ≥ d(x i , y i ) results in the desired contradiction.
Hence, there exist two points c i , d i ∈ [a i , b i ] T * with c i < d i such that (c i , d i ) ∩ (a i , b i ) T * = ∅ and d(c i , d i ) > d(x i , y i )/(K + ε). Define now F i : T * → S̃ by F i (t) = x i if t ∈ [a i , c i ] T * , F i (t) = y i if t ∈ [d i , b i ] T * , and F i (t) = F (t) otherwise. Using Proposition 2.1, maximality of [a i , b i ] T * for F and T γ i (x i , y i ), and the fact that F (a i ) and F (b i ) are bound to x i and y i respectively in T γ i (x i , y i ), it is straightforward to check that F i verifies ‖F i ‖ Lip ≤ K + ε (we use in fact the same argument as in the proof of Proposition 3.3). Figure 9 intuitively summarizes the second possibility. Notice that in both Figures 8 and 9 the resulting map F i avoids the thread T i , thus reducing the order of the image of the maximal interval [a i , b i ] T * .
To finish the proof of part (B) of the Claim, for each t ∈ T * such that t ∈ [a i , b i ] T * for some i ∈ N, define i(t) ∈ N as the least of the natural numbers such that t ∈ [a i(t) , b i(t) ] T * . Now, define F̃ : T * → S̃ by F̃ (t) = F i(t) (t) if t belongs to some interval [a i , b i ] T * , and F̃ (t) = F (t) otherwise. To check that ‖F̃ ‖ Lip ≤ K + ε, we only need to consider t, s ∈ T * with t < s and such that t ∈ [a i(t) , b i(t) ] T * and s ∈ [a i(s) , b i(s) ] T * with i(t) ≠ i(s). We have then the following inequalities: d(F̃ (t), F̃ (s)) = d(F i(t) (t), F i(s) (s)) ≤ d(F i(t) (t), y i(t) ) + d(y i(t) , x i(s) ) + d(x i(s) , F i(s) (s)) ≤ d(F i(t) (t), F i(t) (b i(t) )) + d(F (b i(t) ), F (a i(s) )) + d(F i(s) (a i(s) ), F i(s) (s)) ≤ (K + ε) d(t, s). Since d(F̃ (p), F̃ (q)) = d(P β (p), P β (q)) ≤ d(p, q), we can apply Proposition 2.1 to obtain the desired Lipschitz constant for F̃ and finish the proof of (B).
Proof of (C). The proof of the third case (C) resembles the proof of (B). The difference is that in this case at least one of F (p) and F (q) is in generation G α+1 , where α + 1 is at the same time the order of F (T * ). Intuitively, the idea of the proof of this last part is to first transform F to lower the order of the image of p and/or q. When we have done this, then we may simply apply the case (B) to the resulting map, thus obtaining a Lipschitz function whose image has a lower order than F .
Since F (p) = P α+1 (p), there exists n ∈ N such that x n = P α (p), y n = Q α (p), and F (p) belongs to the thread T n = T γ n (P α (p), Q α (p)) ∈ T 0 . We start by selecting p ′ ∈ T * such that [p, p ′ ] T * is maximal for F and the thread T n .
There are two possibilities: either p ′ = q, or the point p ′ is different from q. If p ′ = q, since P α (p) ≠ P α (q), we have that the range of F is contained in the single thread T n = T γ n (P α (p), P α (q)), and moreover F (p) = P α+1 (p) and F (q) = P α+1 (q) belong to this same thread. Since the distance from p and q to Sk(α) is less than 1/8, we have that F (p) belongs to [P α (p), P α (p) + 1/8) T n and F (q) belongs to (P α (q) − 1/8, P α (q)] T n . Hence, we can apply Propositions 2.12 and 2.4 as we did in the proof of (B) to obtain two points a ′ , b ′ ∈ T * with a ′ < b ′ and two points (x ′ , y ′ ) ∈ D n 1 × D n 2 together with a (K + ε)-Lipschitz function F̄ : [a ′ , b ′ ] T * → [x ′ , y ′ ] T n with F̄ (a ′ ) = x ′ and F̄ (b ′ ) = y ′ . Since the thread [x ′ , y ′ ] T n belongs to the family T , by Theorem 2.8 there exists a gap C = (c, d) in T * such that d(c, d) > d(P α (p), P α (q))/(K + ε). Defining F̃ : T * → S̃ by F̃ (t) = P α (p) if t ∈ [p, c] T * and F̃ (t) = P α (q) if t ∈ [d, q] T * finishes the proof of (C) if p ′ = q, without need for further discussion. Hence, suppose now that p ′ is not q. We are going to define a (K + ε/2)-Lipschitz function F 1 : T * → S̃ such that F 1 (p) = P α (p) and F 1 (t) = F (t) for all t ∈ (p ′ , q] T * . In the space Sk(α + 1), the point F (p) = P α+1 (p) is bound to P α (p) in T n because the distance from p to Sk(α) is less than 1/8. In addition, the point F (p ′ ) is also bound to one of the extremes P α (p) or Q α (p) in T n . This is because there are no gaps in T * bigger than (2K) −1 /8 and [p, p ′ ] T * is maximal for F and T n . We may consider again two possibilities: either F (p ′ ) is bound to P α (p) as well, or F (p ′ ) is bound to Q α (p).
If F (p ′ ) is bound to P α (p), we can use Proposition 3.3 to define a K-Lipschitz function F 1 : T * → S̃ with F 1 (t) = P α (p) for all t ∈ [p, p ′ ] T * , and F 1 (t) = F (t) for all t ∈ (p ′ , q] T * , as desired. Suppose then that F (p ′ ) is bound to Q α (p). Then, since F (p) ∈ [P α (p), P α (p) + 1/8) T n and F (p ′ ) ∈ (Q α (p) − 1/8, Q α (p)] T n , we can repeat the process we did in the proof of (B) and in the case when p ′ = q to find a gap C = (c, d) in [p, p ′ ] T * such that d(c, d) > d(P α (p), Q α (p))/(K + ε/2). Again, we use this gap to define F 1 : T * → S̃ by F 1 (t) = P α (p) if t ∈ [p, c] T * , F 1 (t) = Q α (p) if t ∈ [d, p ′ ] T * , and F 1 (t) = F (t) if t ∈ (p ′ , q] T * . The Lipschitz constant of F 1 is less than or equal to K + ε/2, as desired. We may repeat the same argument to find a point q ′ ∈ T * with p ′ < q ′ , together with a second (K + ε/2)-Lipschitz function F 2 : T * → S̃ such that F 2 (q) = P α (q) and F 2 (t) = F (t) for all t ∈ [p, q ′ ) T * . We combine F 1 and F 2 to form yet another Lipschitz function F̄ : T * → S̃ in the following way: F̄ (t) = F 1 (t) if t ∈ [p, p ′ ] T * , F̄ (t) = F (t) if t ∈ (p ′ , q ′ ) T * , and F̄ (t) = F 2 (t) if t ∈ [q ′ , q] T * . It is again straightforward to prove that the Lipschitz constant of F̄ is less than or equal to K + ε/2. It is possible that the order of F̄ (T * ) is already the desired ordinal α < α + 1, in which case the proof is finished. However, it might be that there are points of F̄ (T * ) in the generation G α+1 . If this is the case, notice that the function F̄ verifies the hypothesis of the claim and the conditions of (B). Hence, we may use the already proven case (B) with ε/2 > 0 to find a (K + ε)-Lipschitz function F̃ : T * → S̃ such that F̃ (p) = P α (p), F̃ (q) = P α (q), and the order of F̃ (T * ) is α. The proof is now finished.
"year": 2022,
"sha1": "57fd2a9866bb0542aa87e85bd30ba5b1939017c0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fd7d8ce60afe1f6591c6e30a823bdc8cf1d20898",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Behçet's Disease and Related Diseases: Immune Reactions to Oral Streptococci in Their Pathogenesis
Behçet's disease (BD) is a systemic disorder characterized by the recurrent involvement of the muco-cutaneous, ocular, intestinal, vascular, and/or nervous system organs. The clinical muco-cutaneous manifestations, including recurrent aphthous stomatitis (RAS), erythema nodosum (EN)-like eruption, genital ulceration, etc., of patients with BD were reviewed in their pathogenesis, comparing them with the similar symptoms seen in patients without BD (non-BD). Most BD patients tend to have hypersensitivity against streptococci, which might be acquired in the oral cavity through the innate immune mechanism. Generally, BD patients have the systemic symptoms following the RAS symptom as an immune reaction. Then, the characteristics of hypersensitivity to oral streptococci may be utilized in order to make a diagnosis of BD. The skin prick with self-saliva including oral streptococci was much more sensitive than the "Pathergy test" conventionally used for BD diagnosis. The HLA-B51-restricted CD8+ T cell response is suspected to catch the target tissues expressing major histocompatibility complex class 1 chain-related gene A (MICA) by stress in active BD patients. The Bes-1 gene and the 65 kD heat shock protein (HSP-65) derived from Streptococcus sanguinis (S. sanguinis) are detectable in the lesions. The peptides of the Bes-1 gene are highly homologous with the human intraocular ganglion peptide Brn-3b.
Figure 1. Cutaneous tests with bacterial antigens (Kaneko, et al: Yonsei Med J 38: 444, 1997). a. Prick tests by bacterial antigens (1 × 10 org./mL) (Hollister-Stier, USA). After 24-48 h, strong erythematous reactions appeared by Streptococcus (S.) sanguis, S. salivarius, S. faecalis, S. pyogenes and cell wall of S. sanguis and S. salivarius antigens, except by Staphylococcus (S.) aureus antigen and saline (control). b. The cutaneous reaction by 0.01 mL injection of S. viridans and Staphylococcus aureus antigens after 48 hours. c. A biopsy specimen from the reactive site by S. viridans showed a vascular phenomenon similar to that of the EN-like eruption of a BD patient.
INTRODUCTION
Behcet's disease (BD) [1] is a chronic systemic inflammatory disorder characterized by the recurrent involvement of muco-cutaneous [recurrent aphthous stomatitis (RAS), genital ulceration, erythema nodosum (EN)-like eruption, acne-like eruption, etc.], ocular, vascular, digestive and/or nervous system organs. RAS showing oral aphthous ulceration generally starts as an initial sign since childhood and/or youth, before the systemic symptoms of BD patients [2][3][4]. Although the actual etiology of BD is still unclear, the pathogenesis has become generally clearer through etiological studies based on the genetic intrinsic factors and extrinsic triggering factors [5][6][7][8][9][10][11][12][13][14][15][16]. As one of the triggering factors, an unhygienic oral condition may be suspected, because periodontitis, decayed teeth, chronic tonsillitis, etc. are frequently noted in BD patients [12,13]. The proportion of Streptococcus sanguinis (S. sanguinis), which was previously recognized as a species of the genus Streptococcus named "S. sanguis", was significantly high in the oral bacterial flora of BD patients in comparison with those of healthy controls. S. sanguinis from BD patients was identified as the uncommon serotype KTH-1 (so-called BD113-20) by its bacterial and enzymatic properties. Most BD patients tend to acquire hypersensitivity against streptococci in their oral flora, as previously demonstrated by the fact that the cutaneous reactions to injection and/or prick with bacterial antigens of streptococci and enterococci were much stronger than the reaction by "Pathergy test". The histology of the cutaneous streptococcal response of a BD patient is similar to the vascular reaction seen in EN-like eruption (Figure 1a-c). The cutaneous reactions to streptococcal antigens induced the clinical symptoms in some BD patients. Non-BD patients with RAS (non-BD RAS) were also considered to react with streptococcal antigen, although several environmental factors can also be a trigger of aphthous ulceration. In an in vitro system, inflammatory cytokines, interleukin (IL)-6 and interferon (IFN)-γ, were produced from peripheral blood mononuclear cells (PBMCs) of BD patients by stimulation with streptococcal antigen. The titers of serum antibody against streptococci were also elevated in BD patients. The 65 kDa heat shock protein (HSP-65) derived from S. sanguinis can be detected along with its counterpart human HSP-60, which reactively appears in the sera and lesions of BD patients. The peptides of HSP-65 show considerable homology with those of the human HSP-60. Epidemiology surveys suggest that the prevalence of BD is highly distributed from the Mediterranean countries to Japan via China and South Korea, along the so-called "old Silk Route". The prevalence rates of the 1990s were 8 to 37 per 100,000 in the adult population of Turkey and 11-13 per 100,000 population in Ningxiahui and Heilongjiang of China [26,27]. In Japan, the prevalence was suggested to be 13 per 100,000, as well as in Korea, in the 1970s [2,4], but its rate has decreased lately, because the environmental conditions, such as oral health behaviors, etc., have changed [28].
Then, we have attempted to review the new diagnostic ways for BD in comparison to the related diseases showing similar symptoms due to immunological reactions.
MUCOCUTANEOUS INVOLVEMENTS
RAS:
The oral aphthous ulceration, punch-out shaped, painfully occurs on the tongue, buccal mucosa, gingiva and lip, continues around a week, though self-limited, and nearly 100% of BD patients will be associated with it as the initial sign [1][2][3][4] (Figures 4 and 5). On the other hand, non-BD RAS is a very common disorder due to trauma, some viral and/or bacterial infections and other autoimmune diseases, because about 20% of the general population is thought to be affected in the world [20,29,30]. The biopsy specimen from the RAS lesion of a BD patient revealed the epithelial cells surrounded by neutrophils and T cells, like the antibody-dependent cell-mediated cytotoxicity. The epithelial cells of the ulcer margin were stained with anti-human IgA, IgM, complement, streptococcal antibodies and HLA-DR monoclonal antibody [17,31]. However, it is difficult to differentiate oral ulceration lesions in patients with BD from non-BD RAS by the clinical and/or histological aspects.
Figure 3. Streptococci from saliva of a BD patient in SM agar. When saliva from a BD patient was incubated in Salivarius and Mitis (SM) agar, oral streptococci selectively grew within a few days. In area a, streptococcal colonies from crude saliva were grown, and in area b, no bacterial colonies were recognized from the saliva sterilized through the micro-filter with 0.2 μm pores.
Figure 4. A clinically typical and active case of a 35-year-old female BD patient classified as "Incomplete type" by the Japanese Classification (a). Although a 2 mm erythema reaction appeared by "Pathergy test" (b), an erythematous reaction more than 20 mm in diameter was recognized 24-48 hours after self-salivary prick (Salivary prick test). Also, by the sterilized saliva, a more than 10 mm erythema reaction was observed in this case (c).
Genital ulcer: The clinical features of genital ulceration are generally similar in shape to the oral aphthous ulceration of BD patients (Figure 5). A few cases of young females are suddenly attacked by genital ulceration as the initial BD symptom, clinically resembling Lipschutz genital ulceration [32], which is supposed to be due to Epstein-Barr viral (EBV) infection [33,34]. However, EBV was not detected from the lesion in our case listed in Table 1. More than 50% of BD patients are found to be associated with genital ulceration (female: 55.5%, male: 58.7%); that is, ulcers occur on the vulva (66.1%), vaginal mucosa (35.7%), anus (9.6%), cervix (4.1%) and groin area (0.8%) in female patients, and on the penis (46.5%), scrotum (38.5%), anus (9.2%) and groin area (5.0%) in male patients [2,4].
EN-like eruption of BD patients and non-BD EN: More than 50% of BD patients are reported to be associated with EN-like eruption on the lower legs [2][3][4], which looks smaller than the EN of non-BD patients (Figure 2a). Generally, the histology is a "vascular reaction" infiltrated mainly by lymphoid mononuclear cells, so-called "lymphocytic vasculitis", and septal panniculitis in the subcutaneous fatty tissue. In the acute and active phase of BD patients, however, vasculitis surrounded by neutrophils can be recognized within a few days after the occurrence. Although it is difficult to differentiate the BD EN-like eruption from non-BD EN, features of venous thrombosis are sometimes found in active BD-EN [35]. Immunofluorescence revealed deposits of IgA, IgM, C3 and streptococcal related materials by anti-streptococcal antibody in the vascular walls (Figure 2c,d) [12,17,31]. On the other hand, the streptococcal related materials could not be detected in our cases with non-BD EN tissues. The findings suggest that streptococcal antigens might be playing an important role in the BD symptoms as the triggering extrinsic factor [11,12,17,31]. It is of interest that GroEL of S. sanguinis and human heterogeneous nuclear ribonucleoprotein (hnRNP) A2/B1 were expressed on the vascular walls [36,37]. However, the causation of non-BD EN is also unknown, and the majority of EN patients have evidence of recent streptococcal infection or have no identifiable causes [38,39].
PATHERGY TEST AND ORAL STREPTOCOCCI
It is not difficult to make a diagnosis for BD except for the atypical cases without the main muco-cutaneous symptoms including RAS. The Pathergy reaction, which is a non-specific cutaneous hypersensitive response showing an around 2 mm pustule 24-48 h after pricking with a 20 G syringe needle, has been thought to be helpful to diagnose BD, because the phenomenon has been believed to be a unique feature of BD described by the International Study Group of BD [40]. The histology and immunohistology of the "Pathergy reaction" are similar to those of the EN-like eruption of BD patients [41,42]. However, recently the reactivity by Pathergy test has become chronologically lower, to less than 40% of BD patients seen in 2007, though the response was seen in more than 70% of the patients in the 1970s. The patients associated with HLA-B51 were thought to show a stronger skin reaction by Pathergy test [43], but its diagnostic value differs with the prevalence in the countries [44][45][46]. In our cases, only one of 22 cases showed a 2 mm pustule 48 h after pricking with a 20 G syringe needle, i.e., the Pathergy reactivity was less than 5% (Figure 3b). It is of interest that the surgical cleaning of the forearm before needle prick reduced its reactivity [47], suggesting that the "Pathergy reaction" might be a response to some bacteria living on the surface of the skin. Then, instead of "Pathergy test", we tried to prick with self-saliva including oral streptococci (Salivary prick test) (Figure 3) on the forearm of BD patients using a Lancetter with tiny stick (OY ALGO AB, Espoo/Esbo, Sweden), because the patients have hypersensitivity to oral streptococci (Tables 1 and 2). The histology of the positive site was basically similar to that of BD EN-like lesions. Non-BD RAS and Lipschutz genital ulcer patients showed weaker reactions than those of BD patients, suggesting some possibilities to differentiate them from each other and also a correlation with streptococci in the pathogenesis. No reaction and/or a tiny spot were seen by the prick with microfilter-sterilized saliva and saline in BD patients and the disease controls, including patients with viral aphtha and non-BD EN, and healthy persons (Table 1). Regarding non-BD RAS, the results also suggest that oral streptococci are playing an important role in the pathogenesis of RAS of BD patients, although there are many studies, still no clear causation is present [50,51]. The Salivary prick test is considerable to make a differentiation of BD from non-BD disorders with similar symptoms. The case with Lipschutz genital ulceration showed a weak skin reaction to self-saliva (Figure 6a, b, Table 1) [49].
HYPERSENSITIVITY AGAINST S. SANGUINIS:
Generally, the oral health is impaired in BD patients with the disease severity [11][12][13][15,16]. The antibodies against S. sanguinis showed cross reactivity with the synthetic peptides of HSP-65 derived from the bacteria [61,62], and delayed type cutaneous hypersensitivity reactions against streptococcal antigens were also seen in BD patients. Actually, BD symptoms were provoked by the antigens, and aphthous ulceration can also be induced by a prick with streptococcal antigen on the oral mucous membrane of a BD patient [11,12,17,18], which is the so-called "oral bacterial allergic reaction". Isogai et al [62] demonstrated that symptoms mimicking BD appeared in germ-free mice when S. sanguinis from BD patients was inoculated into their oral tissue damaged by heat shock and/or mechanical stress. This report suggests that the immunization with S. sanguinis through the oral membrane route elicits BD-like symptoms in the animal model. We tried to find the presence of the Bes-1 gene by polymerase chain reaction (PCR) in BD lesions using 2 distinct primer sets (peptides, 229-243 and 373-385) encoding S. sanguinis (serotype KTH-1) prepared by Yoshikawa et al [63]. Bes-1 DNA was present in various muco-cutaneous lesions including oral and genital ulcerations and EN-like lesions. PCR-in situ hybridization revealed Bes-1 DNA expression in the cytoplasm of inflammatory infiltrated monocytes adhering to the vascular walls in muco-cutaneous lesions (Figure 7) [64].
HLA GENOTYPING AND STREPTOCOCCAL INFECTION
HLA-B51 is supposed to be highly associated with BD patients as the genetic marker, even in many different ethnic groups including European, Mediterranean and Asian people. BD has several unique epidemiologic features which seem to go from Southern Europe to Japan along "the old Silk Road", as mentioned previously [5,7,8,52,53]. The appearance of BD lesions is not directly correlated with HLA-B51 in the immunological background of the patients, but it was found that HLA-B51-restricted cytotoxic T lymphocytes (CTLs) played some roles in correlation with the stressed target tissues expressing major histocompatibility complex class I chain-related gene A (MICA) in BD pathogenesis. When the transmembrane-MICA is preferentially expressed on epithelial and endothelial cells by stress, they seem to be the candidates for the HLA-B51-restricted CTL response. The endothelium is also considered to be the ligand for activating natural killer (NK) cells with the NKG2D molecule and CD8+ T cells as CTLs [54,55]. Regarding NK cell activation, inhibitory CD34/NKG2A and activating CD94/NKG2C molecules are alternatively expressed on NK and CD4+ CD8+ T cells, indicating an imbalance in cytotoxic activity in BD patients [56]. However, the function of NK cells is supposed to be down-regulated in the active stage and to be up-regulated in the remission of BD patients [57]. On the other hand, the expressive CD4+ T cells activated by inflammatory cytokines including interferon (IFN)-γ, IL-12, IL-23, etc. might be altered to Th17 cells, which release IL-17 in the BD lesions, as seen in autoimmune disorders [58]. HSP-65/60 derived from microorganisms including S. sanguinis and damaged human tissues, which is actually detectable in the oral mucosal and skin lesions of BD patients, also becomes a stress-inducible factor in connection with MICA*009 expression [23,24]. Generally, it has been reported that antigen presenting cells (APCs) expressing IL-12 are thought to be activated in BD patients with HLA-B51 in the active stage, as seen in transgenic mice [52]. However, we have obtained interesting results that PBMCs from BD patients without the HLA-B51 gene can be significantly stimulated by S. sanguinis antigen in the expression of IL-12p40 mRNA, and that its protein level was also increased in connection with IL-12p70 (p35 and p40 subunits), rather than those of the patients with HLA-B51 [59]. The antibacterial host response by T cell type immunity mediated by IL-12 is suggested to be much stronger in HLA-B51-negative BD patients in the in vitro experiment. In our cases, about 33% of the patients were associated with HLA-B51 (Table 2), and the severity of the Salivary prick test might be correlated with the disease activity in BD patients, though the Pathergy test was reported to be stronger in the patients with HLA-B51 [45,49,60].
In contrast, we failed to detect DNAs of herpes simplex virus (HSV)-1, HSV-2, cytomegalovirus, human herpes virus (HHV)-6 and HHV-7 in the lesions by PCR [65], although it is reported that animal models infected by HSV were demonstrated to mimic BD-like symptoms [66].
Aphthous ulceration and systemic symptoms in BD patients
Interestingly, the amino acid sequence of the peptides of Bes-1 (229-243 and 373-385) shows more than 60% similarity to the human intraocular ganglion peptide Brn-3b, which belongs to a subfamily of POU (Pit-Oct-Unc) domain factors containing Brn-3a and Brn-3c [67]. The peptide of Bes-1 (229-243) was also found to be correlated with the peptide of HSP-60 (336-351) [61]. These results suggest that Bes-1 derived from oral S. sanguinis might be an inducer of the possible retinal and neural involvement in BD patients.
HSP-65 DERIVED FROM MICROORGANISM AND HUMAN HSP-60
HSPs, which scavenge denatured intracellular proteins, are supposed to be induced by microorganisms and mammalian tissues under a variety of stressful conditions [68], and they may be involved in the pathogenesis of some autoimmune diseases [69]. The serum levels of IgA to mycobacterial HSP-65, which cross-reacts with selected strains of S. sanguinis, are significantly increased, and HSP-60 was also detected in various lesions in BD patients [70,71]. On the other hand, 4 peptides of HSP-65 (111-125, 154-172, 219-233 and 311-326) derived from S. sanguinis, which show 50-80% homology to the counterpart human HSP-60, were recognized as immuno-dominant agents for T and B cell responses [24,[72][73][74]. The 4 peptides of HSP-65 were also shown to significantly stimulate CD4+ and CD8+ T cells and to induce their apoptosis in PBMCs from BD patients, and HSP-60 also seemed to stimulate them [71]. On the contrary, the other two peptides of HSP-65 (21-35 and 401-415), corresponding to human HSP-60 (425-441), are reported not to stimulate PBMCs from BD patients and healthy individuals [68]. The peptide of HSP-60 (336-351) was also identified to be highly homologous to a T cell epitope [68,[70][71][72][73][74][75][76]. Whole HSP-60 is, however, suspected to induce vascular endothelial growth factor (VEGF), which activates, impairs and propagates the vascular endothelial cells [76]. It may also lead to thrombophlebitis and vasculitis by damaging endothelial cells in BD patients, although non-BD EN does not seem to be associated with thrombophlebitis [78,79].
It is of interest that the peptide of HSP-60 (336-351) linked to recombinant cholera toxin B subunit (rCTB) reduced the uveitis induced by whole HSP-60, although the peptide without the adjuvant is reported to induce uveitis in Lewis strain rats [80][81][82]. A therapeutic trial with the peptide conjugated with rCTB was performed in BD patients with recurrent uveitis. Successful results were obtained, showing that 5 of 8 patients had no relapse of uveitis, and that 2 of the remaining 3 patients had improved recurrent oral ulceration, folliculitis, EN-like eruptions and genital ulcers without any side-effects. In those patients with uveitis and extra-articular manifestations, a lack of the peptide-specific CD4+ T cell population, a decrease in expression of Th1 type cells (CCR5, CXCR3) and a reduction of IFN-γ, TNF-α and CCR7+ T cells were observed in comparison to BD patients with relapse of disease [82]. The HSPs presented by APCs can directly stimulate αβ+ T and γδ+ T cells, which play important roles in the oral mucosal immunity as the first defense against microorganisms. It is thought that Vγ9δ2+ T cells, a major subset of γδ+ T cells, which recognize antigens in the innate and adaptive immune responses, were influenced by secreting IFN-γ. The γδ+ T cells expressing CD29 and CD69 produce IFN-γ and TNF-α upon stimulation by HSP-65/60 in the lesions of BD patients with active disease [83]. In the active stage of BD patients, IL-12, as a sign of Th1 type reaction, is also produced and advances the symptoms. It is of interest that the gene polymorphism in the promoter region regarding a 4 bp insertion within IL-12p40 (IL-12B) was significantly higher in the HLA-B51 negative BD patients. The expression of IL-12B mRNA and protein levels were also significantly increased in PBMCs from BD patients without HLA-B51 by stimulation with S. sanguinis antigen [58]. The expression of IL-23, which is composed of a shared p40 subunit of IL-12 and the p19 subunit of IL-23, was also increased in EN-like lesions of BD patients [84,85]. The therapeutic approaches using the peptide of HSP (336-351) linked to rCTB were applied for BD patients with advanced uveitis, as the "oral toleration" demonstrated by Stanford et al [82]. In order to understand the suppressive mechanisms of the cytokine production in PBMCs from active BD patients, we tried to find the binding sites of the peptides on monocytes by cDNA chips (Gene Chip; Human Genome) using NOMO-1 cells (a human macrophage cell line) activated by S. sanguinis antigen. Although the expression of IL-8, IL-16, IL-13R and IL-17R was decreased after incubation with LO1 and UK, respectively, LO2 did not decrease IL-8 production. The CD58 (lymphocyte function-associated antigen-3) molecule and/or FK506 binding protein were highly expressed on the cell membrane after application of LO1 and UK [86,87].
TOLL-LIKE RECEPTOR (TLR) EXPRESSION
Regarding the recognition system for microorganism antigens in humans, 10 members of the TLR family are supposed to act as innate immune receptors by binding particular structures present on bacteria, viruses, fungi, etc. [88]. TLR-3 [dsRNA] and TLR-6 [mycoplasma, staphylococci, etc.] are also reported to be enhanced in expression on neutrophils and monocytes of BD patients when stimulated by HSP-60 and S. sanguinis antigen [89]. In the RAS lesion of BD patients, expression of TLR-9 [unmethylated CpG DNA, bacteria and virus] has also been found [90]. These findings suggest that the innate immune system contributes to the acquisition of hypersensitivity against oral streptococci in the pathogenesis of BD as the extrinsic factor.
COMPLEMENT SYSTEM
Deposits of complement C3 with immunoglobulins are frequently detectable at the vascular involvement by immunofluorescent techniques in BD patients [17,31,79], and the titer of serum complement is generally high in the inactive stage. However, the levels of the mannose-binding lectin (MBL) pathway are generally decreased in the patients [91]. The MBL pathway is considered to play an important role in the innate immunity. Ficolin (FCN) is a soluble protein that binds to carbohydrate on the microbial cells, and 3 different types of FCN are detected. The FCN1 and FCN2 genes are located on chromosome 9q34, and the FCN3 gene is assigned to chromosome 1. FCN2 binds to lipoteichoic acid, a cell wall constituent of all Gram-positive bacteria, and activates immune cells to produce proinflammatory cytokines [92]. We have found that novel FCN2 gene single nucleotide polymorphisms (SNPs) are identified in the promoter regions as well as in the exon regions. The MBL genetic polymorphisms might be involved in immune responses to streptococcus infections in BD patients, because a relationship between MBL gene mutations and microbiological factors was suspected in the lesional immune reaction of BD patients [93]. Although no significant difference was present in the genotype allele frequencies of MBL gene SNPs between BD patients and healthy controls, the allele frequencies of FCN2 gene SNPs were significantly different in the promoter regions (−557 and −64 sites) among HLA-B51 positive BD patients [94]. The findings suggest the possibility that the FCN gene of the MBL pathway in the complement system contributes to the innate immunity in BD patients.
RAS AND SYSTEMIC SYMPTOMS
BD symptoms are characterized by vascular involvements histologically showing swollen endothelial cells of the micro-veins infiltrated by inflammatory monocytes with a few neutrophils, the so-called "vascular reaction" seen in EN-like eruption and other lesions [17,42,78,79]. The strong hypersensitivity reaction against S. sanguinis agents carried by APCs can be suspected in the pathogenesis of BD, which may be one of the extrinsic triggering factors [11,12,17,18,20]. Regarding the treatment, low dose administration of minocycline is clinically effective for BD patients, because minocycline not only experimentally decreases the growth of oral S. sanguinis but also works to suppress IL-1β and IL-6 production from inflamed T cells. Actually, we recognized it as clinically effective for RAS, acne-like eruption and EN-like lesions in BD patients [12]. Other studies also showed that combination therapy with colchicine and benzathine penicillin was effective to suppress BD symptoms compared to colchicine monotherapy [95,96]. The oral infectious agents suggest the hypothesis that, after the Bes-1 gene derived from streptococci is taken into the cytoplasm of APCs through the TLRs in the RAS lesion of BD patients, the APCs carrying the streptococcal antigen produce HSP-65 in the peripheral vascular lesions. The APCs impair the MICA-expressing endothelium of the vessels in correlation with HSP-65/60, VEGF, adhesion molecules, etc. BD lesions will be induced by the "vascular reaction" and/or "lymphocytic vasculitis" as the immunological reactions due to the APCs expressing S. sanguinis antigen [97,98] (Figure 8).
THERAPY
In order to treat BD patients, we should know about the clinical manifestations and pathogenesis, as described above. It is important to analyse clinical metabolic biomarkers of inflammation in the advanced systemic symptoms of BD, including involvements of the ocular, vascular, nervous system and gastrointestinal organs. Although the therapy for the muco-cutaneous symptoms such as RAS, genital ulceration and acne-like eruption is centered on topical measures, long-term treatment with low-dose minocycline capsules (50-100 mg/day) is effective not only for the clinical symptoms, but also against inflammatory cytokine production from activated lymphoid cells, as described previously [12]. Administration of colchicine (0.5-1.0 mg) can also manage the inflammation of EN-like symptoms and the joint involvements [99]. As to immunosuppression, azathioprine, cyclosporine and corticosteroids are used in cases with severe resistant muco-cutaneous and articular manifestations of BD. To date, from the point of view of the immunological mediators correlated with the systemic involvements, some biological antibodies, such as infliximab, adalimumab, etc., are applied for BD patients [100].
"year": 2016,
"sha1": "12f9ee25c39ec34041e80a8f465cadee9b9866b6",
"oa_license": null,
"oa_url": "http://www.ghrnet.org/index.php/jdr/article/download/1679/2135",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "53d6be60ca7894f1a94951e3e9857e429e5f9cfe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
261985544 | pes2o/s2orc | v3-fos-license | Technological modeling of physicochemical removal of iron from deep groundwater
In view of the significant difficulties arising in controlling the operation of rapid iron removal filters on the basis of full exhaustion of their clarifying resource at every calculation stage (filter run), it is suggested to implement a simple control algorithm that assumes an equal duration of filter runs during the entire service life of one filtering-material change. Since the efficiency of physicochemical iron removal depends significantly on this duration, a special technical and economic analysis, with detailed consideration of the composition and degree of contamination of the natural water, is needed in every given case to establish its optimal value. Under the working conditions considered here, which are typical for physicochemical iron removal from deep groundwater in Ukraine, this value was 600 conventional units, which corresponds to 48 h. The cost of the treatment increases significantly even with a small deviation of the filter run duration from the optimal value.
Introduction
Water supply to settlements and industrial plants is provided from surface or ground sources [1]. Surface waters are often contaminated with toxic chemicals, petroleum products, salts of heavy metals, phenols, biogenic substances, etc. [2]. Groundwaters are characterized by fairly constant physical and bacteriological indicators, and their quantity is sufficient for economic and drinking water supply; their chemical indicators are quite diverse but in most cases do not depend on weather conditions [3], especially for deep horizons.
Iron is often present in high concentrations in groundwater [4][5][6]. A number of consequences are associated with this fact. Continuous consumption of water with a high iron content can lead to the development of various health problems [7,8]. Additionally, a high iron content in water is associated with an unpleasant taste, red coloration of the water and stains on laundry and plumbing fixtures [9]. Different methods have been developed and used for iron removal from water [10][11][12][13]. The use of sorbents (nanomaterials) is promising for removing heavy metal ions, including iron, from wastewater [14][15][16].
In general, reagent-free, reagent, cation-exchange, membrane and biochemical methods can be used for iron removal from water for drinking purposes [17]. The first two belong to the physicochemical methods and involve the introduction of iron oxidizers. When the first method is applied, this oxidizing agent is air oxygen [17]. The task of these methods is to convert soluble forms of iron into insoluble compounds retained in the filter medium.
Fig. 1. Scheme of iron removal from water by aeration and filtration on polystyrene foam filters with upflow filtration.
Contaminated washing water is collected by the distribution system 5 and discharged into the sewage system through the pipeline 11. The water level in the above-filter space gradually decreases, and when it reaches 10 cm above the grid, backwashing is stopped by closing the gate valve on pipeline 11. The filter is then switched back to the filtering mode by opening the gate valve on pipeline 1.
Various technological schemes and simple calculation methods have been developed to solve the problem of iron removal from deep groundwater [17][18][19]. However, a number of important features of the process have not been considered, including nonlinear effects of mass transfer, the autocatalytic nature of the sorption, the specific character of deposit consolidation, etc. Mathematical models adequately describing this technological process have not been proposed until now. This is due both to the complexity of the set of physicochemical processes ensuring virtually complete iron removal and to the need for extensive dedicated experimental studies to provide information support for mathematical modelling.
Based on the results of our own experimental data and the conceptual model [18], we have developed a mathematical model of iron removal consisting of interconnected clarification and hydraulic compartments [19]. The scheme for the calculation of physicochemical iron removal from groundwater is shown in Fig. 2.
Fig. 2. Scheme of immobilization and transformation of iron forms at physicochemical iron removal from deep groundwater.
The mathematical model assumes the presence of two forms of iron retained in the pore space of the packed bed. The clarification compartment is represented by equations describing, first, the transport, adsorption and oxidation of ferrous iron and, secondly, the transport and sorption of ferric iron, together with the boundary and initial conditions [17]. The key component of our proposed mass-transfer model for the two forms of iron in filters of various designs is a system of kinetic equations that takes into account the following (a toy numerical sketch of these features is given after the list below):
- intensification of iron removal from water under the impact of the formed deposit;
- oxidation of ferrous iron predominantly in the adsorbed form;
- limitation of the sorption capacity, which with time leads to a noticeable decrease in the intensity of iron removal.
The method of aeration and filtration on granular filters has become widespread for groundwater treatment [17]. The relatively high cost of excess iron removal from groundwater in Ukraine is largely due to the inefficient use of expensive filtering materials in specialized rapid filters [18]. Such filters are often operated impractically. Thus, the capital expenditure for the physicochemical removal of iron from artesian waters, which often meet the drinking water standards according to other environmental and sanitary indicators, can be decisive in assessing the price of water treatment [20], although the operating costs also play a significant role.
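To make the three kinetic features above concrete, here is a toy numerical sketch in Python. It is not the kinetic system of [19]: the rate constants, the deposit-boost factor and the saturation term are all illustrative assumptions.

```python
from scipy.integrate import solve_ivp

def kinetics(t, y, beta_a, beta_h, k_ox, S_max, a, C_a, C_h):
    """Toy kinetics for immobilized ferrous (S_a) and ferric (S_h) iron."""
    S_a, S_h = y
    boost = 1.0 + a * (S_a + S_h)                 # removal intensified by the formed deposit
    free = max(0.0, 1.0 - (S_a + S_h) / S_max)    # limited sorption capacity
    dS_a = beta_a * boost * C_a * free - k_ox * S_a   # adsorption minus oxidation
    dS_h = beta_h * boost * C_h * free + k_ox * S_a   # deposition plus oxidized Fe(II)
    return [dS_a, dS_h]

# 48 h run with illustrative constants; a fresh bed starts at S_a = S_h = 0.
sol = solve_ivp(kinetics, (0.0, 48.0), [0.0, 0.0],
                args=(0.4, 0.6, 0.05, 1.0, 2.0, 1.0, 0.5), max_step=0.5)
```

The oxidation term k_ox·S_a acts on the adsorbed ferrous iron only, mirroring the second feature in the list.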
It is well known that optimization of the operation of any filtering plant consists of minimizing the total material and labor expended on its construction, preparation for operation, and maintenance [21]. At the same time, it should be considered that iron removal filters operate under more difficult technological conditions than conventional filter-clarifiers treating aqueous suspensions, due to the presence and transformations of the two forms of iron and, most importantly, the progressive accumulation of the non-washable deposit [22,23]. Therefore, a special approach is required for calculating the filtration of water with a high concentration of iron compounds. It allows for a quantitative analysis of the set of processes in the filter medium during both a separate operating period and any sequence of such periods. When simulating physicochemical removal of iron, it is necessary first to set the initial concentration of the immobilized ferric iron, which is corrected at every calculation stage to account for the consolidation of the newly formed deposit.
In general, the applied optimization of the operation of any water treatment filter should consist of choosing the least expensive process control algorithm from the variety of possible ones, ultimately reducing the water treatment cost. For rapid filter-clarifiers it is customary to operate with the concept of a representative filter run (identified with the standard operating period) [24]. In a first approximation, the filter control algorithm during prolonged operation can be interpreted as a sequence of unified filter runs. All runs then have the same, maximum duration allowed by the technological limitations. Abrasion of the packed-bed elements (grains) is not taken into account. Implementing such an algorithm in practice often makes it possible to significantly reduce the cost of filter-clarifier operation and sometimes to prevent excessive deterioration of the filtrate quality.
The objectives of this work were:
- elaboration of a procedure for numerous consecutive technical and economic calculations of the operation of rapid filters under conditions of deposit consolidation and the consequent formation and accumulation of its non-washable component;
- comparative analysis of the technological process control algorithms used in the practice of physicochemical iron removal from deep groundwater and, on this basis, development of recommendations on selecting the most economical algorithm.
Technological optimization of the operation of a filter-clarifier
The technological optimization procedure commonly used for rapid filters in the separation of aqueous suspensions is based on two efficiency criteria [25]. These criteria express in symbolic form the restrictions that must be imposed on the action of filters as clarifiers, and their observance should be constantly monitored. Above all, the concentration of residual disperse contamination in the filtrate C e should not exceed the standard value C * . Therefore, the following condition must be fulfilled:
C e ≤ C * (1)
The second requirement is determined by the rate mode. If the filtration rate is maintained constant by special devices (regulators), then it is necessary to monitor the head losses in the packed bed Δh throughout the entire filter run. It is advisable to continue the filter operation only up to the point where these losses reach the maximum allowable value Δh * . The second criterion is then formalized as
Δh ≤ Δh * (2)
If the filtration rate V is unregulated, the filter productivity gradually decreases. In this case, it is necessary to set a limit V * for such a decrease based on economic considerations:
V ≥ V * (3)
From the sanitary, technical and economic standpoints, it is obvious that the operation of the filter can be considered productive only if conditions (1) and (2), or (1) and (3), are simultaneously met, depending on the rate mode (constant or variable rate).
Based on (1)-(3), it is advisable to introduce a number of technological times:
- the time of the protective action of the filter bed t p , from the equality C e (t p ) = C * as the limiting case of condition (1) (Equation (4));
- the time necessary to reach the maximum allowable head losses t h , from the equality Δh(t h ) = Δh * according to (2) (Equation (5));
- the time of the allowable decrease in filter productivity t V , from the equality V(t V ) = V * according to (3) (Equation (6)).
The start time of the next backwashing of the filter (i.e. the duration of the filter run t f ) should then be identified as the shortest of the indicated technological times; formally,
t f = min(t p , t h , t V ) (7)
However, this approach alone is not sufficient for the technological optimization of the physicochemical removal of iron, because it addresses only the operating costs. Capital costs, which play a special role in the economic analysis of the iron removal filter, are not taken into account. As a result, there is an urgent need for full-scale mathematical and technological modeling of the action of such filters. It is therefore necessary to analyze in detail the changes in the filtration characteristics (outlet concentration of total iron C e , head losses in the packed bed, filtration rate) during at least the entire service life of the packed bed (the total time t s of productive operation of the filter with one change of the filtering material).
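As a minimal illustration (not from the paper) of how criteria (1)-(3) translate into the technological times (4)-(6) and the run duration (7), consider the following Python sketch; the monitored histories C_e(t), Δh(t), V(t) and all thresholds are hypothetical placeholders.

```python
import numpy as np

def filter_run_duration(t_grid, C_e, dh, V, C_star, dh_star, V_star,
                        constant_rate=True):
    """Filter run duration t_f as the shortest technological time, cf. Eq. (7)."""
    t_grid = np.asarray(t_grid)

    def first_violation(values, limit, exceed=True):
        # Earliest time at which the limit is crossed, or the end of the
        # run if it never is.
        values = np.asarray(values)
        mask = values > limit if exceed else values < limit
        return t_grid[np.argmax(mask)] if mask.any() else t_grid[-1]

    t_p = first_violation(C_e, C_star)               # protective action, Eq. (4)
    t_h = first_violation(dh, dh_star)               # allowable head losses, Eq. (5)
    if constant_rate:                                # V = const: criteria (1)-(2)
        return min(t_p, t_h)
    t_V = first_violation(V, V_star, exceed=False)   # productivity drop, Eq. (6)
    return min(t_p, t_V)                             # variable rate: (1) and (3)
```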
The optimization of filter operation during iron removal from deep groundwaters
The gel-like deposit that forms and is distributed unevenly in the packed bed of a filter-clarifier does not consolidate. Therefore, almost all of the new deposit can be regularly removed by a short-term intensive supply of treated water in the opposite direction. The technological process inside iron removal filters proceeds fundamentally differently. Here, a progressive accumulation of the residual deposit from one filter run to the next is clearly observed [23]. Owing to ongoing deep structurization with the formation of new stable chemical bonds, its strength increases so much that the washout intensity and the volume of washout water must be sharply increased in order to remove it. However, the operating costs then grow in an accelerated manner, which is economically disadvantageous. Therefore, the traditional approach to optimizing the operation of water treatment filters and substantiating the design and technological parameters has to be significantly developed, both in the engineering practice of iron removal and in its technological modeling. In fact, it is necessary to focus on the long-term control algorithm, which is implemented up to the complete exhaustion of the treating resource of the gradually contaminated filtering material. In principle, the goal of such modeling can be the design of an algorithm that ensures the maximum service life of the filter medium. It is logical that capital costs will thereby be minimized and, as a result, the cost of the treated water significantly reduced. But certainly, the maximum efficiency of the physicochemical removal of iron by filtration can be achieved only on the basis of a special technical and economic analysis involving generalized (reduced costs) and specialized (prices) economic indicators. As a rule, such information is difficult to access and insufficiently complete for mathematical modeling; moreover, its reliability often raises doubts. The only real possibility for developing a rational filter control algorithm for the entire service life of the next filtering-material change consists in employing adequate mathematical models of the physicochemical removal of iron from groundwater and using effective methods of their solution. Only the application of correct solutions of the corresponding mathematical problems to the entire sequence of filter runs, including the last run before the filtering-material replacement, allows one to objectively estimate the actual workability of the filter when removing excess iron and creates a basis for subsequent technical and economic estimations.
The calculation method presented in Ref. [19] accommodates significant variety both in the physicochemical state of the deep groundwater that directly enters the medium and in the state of the filtering material before the next filter run. Fresh water in an aquifer that lies deep and is not exposed to the atmosphere contains iron exclusively in the ferrous form. Naturally, the ratio between the contents of the two forms of iron in the pumped groundwater depends significantly on the manner and duration of contact with air, and at the preparatory stage the proportion of the ferric form can only increase while that of the ferrous form decreases. From a technological point of view, the access of air to water can be simultaneously undesirable and desirable. Early interaction of groundwater with air, which accompanies pumping, contributes to the deterioration of the technical condition of the water intake equipment, possibly even causing its failure after long-term operation. On the contrary, water must be aerated immediately before entering the filter to create favorable conditions for the oxidation of ferrous iron in the filter medium. Here a practical question arises about the efficient depth of aeration, since the cost of the treated water depends significantly on the answer. The practice of physicochemical iron removal indicates that simplified aeration is preferable at a comparatively low initial content of ferrous iron. It is simple to implement and, unlike forced aeration (using aerators), does not require additional and often very substantial costs.
In fact, the deep groundwater supplied to the filter necessarily contains both forms of iron [26]. The ratio between them can, in principle, vary widely. As a basis for comparison when estimating the degree of preliminary oxidation of the ferrous iron initially present in the fresh water of an isolated aquifer, it is advisable to take the sum of the initial volume fractions of ferrous iron C a0 and iron hydroxide C h0 (of the total amount C 0 = C a0 + C h0 , the previously oxidized part has concentration C h0 due to uncontrolled access of air to water and its special aeration). Then the entire range of possible situations with iron contamination of the water immediately before filtration can be conveniently characterized by the normalized parameter Ψ 0 = C a0 /(C a0 + C h0 ). In principle, Ψ 0 can vary within the maximum limits, i.e. from 0 to 1; however, this parameter falls in a narrower range when simplified aeration is used. It is important to note that the ratio Ψ 0 is not a calculated value, although it can be controlled. In fact, when modeling the iron removal process, it should be set based on the actual composition of the contamination at the filter inlet. It is especially important to regularly calculate the initial content of the immobilized iron with the greatest precision, since computational errors accumulate. At this stage of the calculations, it is justified to take the amount of ferrous iron adsorbed by the end of the previous stage, after the corresponding recalculation, as the initial amount. The amount of the deposit based on iron hydroxide is sharply reduced as a result of the medium washout. The residual amount of the deposit is simply set using the hydrodynamic resistance factor R h . This empirical coefficient characterizes the non-washable part of the newly formed deposit and, by definition, depends essentially on its age [23]. Since the proportion of iron hydroxide in the deposit is stable, this coefficient can also be used to calculate separately the accumulation of the bound water and of the deposited ferric iron.
Calculation of initial concentrations of immobilized ferrous and ferric iron
The filter run number j is selected and considered separately from the long sequence of filter runs, and the time count starts with the start of this run. The calculation period is then [0, t fj ]. The volume fractions of the immobilized ferrous and ferric iron are assumed to equal S 0 aj , S 0 hj and to be the same at any height, due to the intensive mixing and multiple collisions of the elements of the pseudo-fluidized packed bed during its washout. The volume of the adsorbed ferrous iron contained in the bed of height L at the moment t (t fj ≥ t > 0) is determined by Equation (8).
Iron hydroxide is also additionally immobilized in the bed. The next portion of hydroxide is added after each interval Δt.
The age of a given portion of the deposited ferric iron immediately before washout j is determined by Equation (9), and the residual of this portion remaining in the medium after the washout by Equation (10). Expression (10) must be integrated over time from 0 to t fj to determine the total amount of the newly deposited hydroxide remaining after the washout. The residual concentration of hydroxide (the initial one for filter run j + 1) is then determined by Equation (11), and its dimensionless form by Equation (12). As a result of the supply of both forms of iron into the filter being even over time and of the smooth, almost linear nature of the changes in W j , it is possible, for simplification, to use the average age 0.5t fj for the deposit accumulated during filter run j.
Substituting this approximate expression for the exact one introduces minimal errors and finally yields Equation (13). At last, owing to the redistribution in the pseudo-fluidized filtering material of the ferrous iron adsorbed by the end of filter run j (desorption is neglected), the relative concentration of ferrous iron at the beginning of filter run j + 1 follows from (9) as Equation (14).
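Since Equations (8)-(14) did not survive extraction, the following Python sketch only mirrors the prose above: the non-washable share of the freshly deposited hydroxide is set by the age-dependent resistance factor R h evaluated at the average age 0.5t fj , and the adsorbed ferrous iron is spread evenly over the bed. The function signatures are assumptions.

```python
def next_initial_concentrations(S_a_profile, S_h_new_profile, S0_h_prev,
                                t_f, R_h):
    """Initial immobilized-iron concentrations for filter run j+1 (sketch).

    S_a_profile     : adsorbed ferrous iron over bed height at the end of run j
    S_h_new_profile : ferric hydroxide deposited during run j, over bed height
    S0_h_prev       : non-washable hydroxide accumulated before run j
    R_h(age)        : hydrodynamic resistance factor, i.e. the non-washable
                      share of fresh deposit as a function of its age
    """
    n = len(S_a_profile)
    # Pseudo-fluidization during backwash mixes the bed, so the retained
    # ferrous iron is spread evenly over the height (cf. Eq. (14)).
    S0_a_next = sum(S_a_profile) / n
    # The average deposit age 0.5*t_f replaces integration over the whole
    # run (the simplification leading to Eq. (13)).
    S0_h_next = S0_h_prev + R_h(0.5 * t_f) * (sum(S_h_new_profile) / n)
    return S0_a_next, S0_h_next
```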
Calculation scheme
The calculation scheme involves a step-by-step implementation of a large amount of computation. The first filter run is the single calculation stage for which we accept S 0 a = S 0 h = 0, since a new packed bed begins operation. At the end of this stage the distribution functions S a (z, t f1 ), S h (z, t f1 ), S d (z, t f1 ) are specified (S d is the volumetric deposit concentration, i.e. its specific volume). To continue the calculations at the next stage (the second filter run), the initial distribution functions S 0 a2 (z), S 0 h2 (z) must first be calculated from the data of the previous stage. The age and strength of the deposit, the washout efficiency, and the redistribution of the adsorbed ferrous iron and the deposited ferric iron along the medium height should be taken into account when determining them. Thus, it is reasonable to take the relative constant values in accordance with (12), (14) or the simplified (13), as given by Equation (15). As is known [16], the proportion of iron hydroxide in the deposit can be characterized by the coefficient of the physicochemical state γ, which depends on the concentration of the deposited hydroxide S h . Note that in applied calculations it is reasonable to use its average value γ av , owing to the smooth and almost linear nature of its decrease with increasing S h . Since the basic task is formulated in terms of the volume fractions of ferrous iron in the adsorbed (S a ) and dissolved (C a ) states and, similarly, the concentrations of deposited (S h ) and dissolved (C h ) hydroxide, the calculation expressions arising from its solution are primarily intended to predict their spatiotemporal changes. The deposit dynamics is easy to follow because the aforementioned coefficient is used. Therefore, the first portion of the non-washable deposit is characterized by a volume fraction (specific volume) that is relatively constant along the height, as given by Equation (16). It should be noted that the same value γ (or γ av ) must be taken for both the fresh and the non-washable deposits.
In fact, the entirety of the major calculations of the filtration characteristics, which has to be performed during a detailed technical and economic analysis of the operation of the iron removal filter, is conditionally divided into N stages (by the number of filter runs). Two approaches to choosing the duration of these stages are implemented in practice; these are essentially the two main algorithms of iron removal control. The principle for establishing the characteristic duration t f becomes of key importance in the theoretical substantiation of the control algorithm. If we strive to minimize the cost of the filter operation, the technological optimization procedure that is standard for filter-clarifiers must be applied at every calculation stage. With this approach, which is the basis of the irregular algorithm, the treatment potential is fully utilized at the end of every filter run. Thus, calculations at every stage of this control algorithm, with a calculation step unevenly distributed over time (t fj > t f,j+1 ), are carried out as long as both criterion conditions are satisfied, namely (1) and (2) in the case of the constant rate mode (V = const). In fact, the duration t fj of filter run number j (N ≥ j ≥ 1) is calculated for the given corresponding initial concentrations S 0 aj , S 0 hj . At the same time, the final distribution functions S aj (z, t fj ), S hj (z, t fj ) are concretized using the already known value t fj . These concentration profiles then serve as the basis for determining the initial values S 0 a,j+1 , S 0 h,j+1 for the subsequent filter run j + 1; they are calculated, assuming full mixing of the structural elements of the medium, according to (12) or (13) and (14). If the duration t fj falls below the experimentally established minimum allowable value t * (according to the formally accepted assumption that N filter runs can be performed in time t s , this happens at j = N, so that t fN > t * > t f,N+1 ), this indicates the functional unsuitability of the contaminated medium and the need for its immediate replacement.
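A schematic Python loop for the irregular algorithm just described; `simulate_run` (returning the optimized run duration and final deposit profiles for given initial concentrations) and `next_initials` are hypothetical stand-ins for the model solution of [19] and the recalculation of Equations (12)-(14).

```python
def irregular_algorithm(simulate_run, next_initials, t_min):
    """Stage loop of the irregular algorithm: run until t_fj < t_min."""
    S0_a, S0_h = 0.0, 0.0          # fresh packed bed, filter run 1
    run_durations = []
    while True:
        # Optimized run j for the current initial immobilized-iron levels;
        # returns its duration and the final deposit profiles.
        t_f, S_a_end, S_h_end = simulate_run(S0_a, S0_h)
        if t_f < t_min:            # t_fN > t* > t_f,N+1: replace the medium
            break
        run_durations.append(t_f)
        # Initial concentrations for run j+1 (cf. Eqs. (12)-(14)).
        S0_a, S0_h = next_initials(S_a_end, S_h_end, S0_h, t_f)
    return run_durations, sum(run_durations)
```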
After a sufficiently intensive washout number j, the additional amount of deposit that remains in the medium, distributed evenly in it, is set by the value S 0 hj . The corresponding relative concentration ΔS dj is calculated using a formula generalizing (16). However, with the irregular algorithm, which maximally prolongs the filter operation, it is extremely difficult to continuously control the technological process and to automate it.
It is technically much easier to organize continuous operation of the filter with a fixed duration of the filter run. Here, instead of N different and previously unknown values t fj , it is enough to substantiate and then operate with a single value t f . Preliminarily, it can only be stated that this value should be much smaller than the t f1 of the irregular algorithm (with its decreasing t fj ), that the final calculated values C e and Δh should be approximately equal to C * , Δh * for the final filter run (j = N), and that at the same time t fN ≈ t f , where t fN is determined according to the above optimization procedure. The problem here is to establish the appropriate initial values S 0 aN , S 0 hN , which requires a full complex of calculations for the filter runs from the first up to run N − 1. The number of calculations can be significantly reduced if the quality of the filtrate and the head losses in the packed bed begin to be controlled only after a sufficiently long operation of the filter. In addition to the main algorithms, a combined algorithm was considered, in which two sequences of filter runs with different fixed durations fit into the service life t s .
The basis for our method of technical and economic analysis of physicochemical iron removal from deep groundwater was a nonstationary nonlinear mathematical model consisting of three interconnected compartments, together with its exact solution [19]. In order to provide the model with the initial information in full, comprehensive experimental studies were carried out under laboratory and production conditions [23,26]. A procedure of continuous step-by-step calculations was organized and implemented using Mathcad to establish the technological parameters and economic indicators, with a focus on the reduced costs during the service life of one filtering-material change. This procedure allows one to analyze, operatively and interactively, the impact of the physicochemical conditions at the boundaries and inside the bed on the quality of water treatment and on the mechanical energy resource of the filter, to substantiate the economic expediency of applying modern control algorithms, and to optimize them.
Technical and economic analysis
For a generalized economic analysis of the costs of filtering deep groundwater, the control time Τ is set. It includes both the productive time (the actual filter operation) and the unproductive time (spent on all washouts).
The basic equation for the technical and economic analysis in general form is Equation (17), where CC, OC are the capital and operating costs, respectively, Ω F is the medium surface area, and P W is the cost of the treated water per unit volume. The part of the capital costs spent on the expensive filter material is separated out in Equation (18), where ΔCC covers all other items of the capital costs (automation equipment, pumps, aerators, etc.) and P F is the cost of a unit volume of the filtering material together with the cost of its replacement. Also, in the second term of Equation (17), the costs of the regular washouts are considered separately in Equation (19), where N Τ is the number of filter runs per time Τ and P BW is the cost of one washout. Taking (18) and (19) into account, Equation (17) can be rewritten as Equation (20), and after dividing both parts of Equation (20) by VΤΩ F we obtain Equation (21). To generalize the results of the subsequent calculations, the relative times t s = Vt s /(n 0 L) and T = VΤ/(n 0 L) are introduced; Equation (21) then becomes Equation (22). In the case of the second algorithm (t fj = t f ), Equation (22) transforms into Equation (23). Thus, the reduced costs RC of the treated water depend on the time parameters t s , t f in accordance with (23), and consequently on the control algorithm. It is logical that the choice of the algorithm is here the final goal of the technical and economic analysis, which results in RC = min. In fact, such values of t s and t f,av = Τ/N Τ (or t f ) should be specified as minimize the right-hand sides of Equations (22) and (23).
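Because Equations (17)-(23) were lost in extraction, the sketch below follows only the prose: the capital costs split into the filtering-material item and the remainder ΔCC, the operating costs are dominated by the N Τ washouts, and the reduced cost is the total divided by the water volume VΤΩ F treated over the control time. All symbols and the exact cost structure are assumptions.

```python
def reduced_costs(P_F, L, dCC, P_BW, N_T, dOC, V, T, Omega_F, n_changes=1):
    """Reduced cost RC per unit volume of treated water over control time T.

    P_F     : cost of a unit volume of filtering material (incl. replacement)
    L       : bed height;  Omega_F : medium surface area
    dCC     : remaining capital-cost items (pumps, aerators, automation, ...)
    P_BW    : cost of one backwash;  N_T : number of filter runs within T
    dOC     : remaining operating-cost items
    """
    CC = n_changes * P_F * Omega_F * L + dCC   # cf. the split in Eq. (18)
    OC = N_T * P_BW + dOC                      # washouts dominate, cf. Eq. (19)
    volume = V * T * Omega_F                   # water treated over time T
    return (CC + OC) / volume                  # cf. Eqs. (20)-(21)
```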
The goal of the detailed technical and economic analysis was to study the influence on RC of three factors with economic and technological meaning. First, the value RC was calculated depending on the initial ratio Ψ 0 of the concentrations of the two iron forms. This ratio is directly related to the preliminary aeration (forced or simplified), which significantly affects the cost of water treatment. Secondly, three control algorithms were associated with RC: the regular one (with an even calculation step, so that t fj = t f,j+1 = t f ), the irregular one (with an uneven calculation step) and the combined one (with different calculation steps, t fI > t fII ). Thirdly, the efficiency of iron removal was estimated at different degrees of groundwater contamination characterized by the sum C 0 = C a0 + C h0 .
The input data array necessary for the technological modeling was prepared in full, primarily through a set of experimental studies at laboratory and industrial plants. These studies were planned on the basis of the fundamental nonlinear nonstationary mathematical model of the physicochemical removal of iron from groundwater. The mathematical processing of the regime information obtained during the experiments at the above-mentioned plants made it possible to establish the characteristic values of all model coefficients; they are presented in the articles [16][17][18]. The successful experience of long-term operation of iron removal plants of various designs and filtration rates was also drawn upon. The data array was conditionally divided into three groups of parameters depending on their intended application. The empirical coefficients that fully provide an adequate description of the phenomenology of non-biological iron removal belong to the most numerous group. The tasks we set for the technological modeling made it possible to fix the values of these coefficients initially and then calculate their dimensionless analogs, which were used without changes in all calculations. The second group included the stable technological-criteria constants C * , Δh * = 6 and t * = 100. The dimensionless value C * = C * /C 0 depended on the total amount of iron at the inlet to the medium; thus, if C 0 = 1 mg/dm 3 , then C * = 0.2. Varying the parameters of the third group within a wide range, corresponding to both normal and extreme technological conditions, made it possible to objectively and comprehensively assess the practical possibilities of physicochemical iron removal. The values varied continuously or discretely were Ψ 0 (the characteristic of the contamination composition), C 0 (the contamination level) and t f (the technological time interval).
The quantitative analysis was conducted primarily to illustrate the effectiveness of the proposed method of technical and economic optimization of the technology of iron removal from groundwater by filtration. The results of the analysis are significant and can contribute to the rational choice of the technological parameters under similar conditions, since real initial data obtained by experimental methods on pilot plants and operating filters with widely used filtering material were utilized. It is obvious that the effectiveness of the operating procedure significantly depends on the filter operation algorithm. The three algorithms described above were compared to determine the most cost-effective option. At the same time, great importance was attached to the possibility of implementing the algorithm with minimal labor costs, i.e. filter automation. The corresponding expense item was not taken into account directly when determining the reduced costs, due to the complexity of its cost estimation, but it should be considered when making a final decision on the method and means of control.
Therefore, the main efforts were directed at studying the effectiveness of the regular algorithm (I), which, unlike the other two algorithms, is easily automated. To substantiate a constant operating time interval t f (an analogue of the filter run in deep-bed filtration of aqueous suspensions) as the most important characteristic of this algorithm, it was varied discretely or continuously within such a framework that the filter operation was ensured over a certain sequence of marked intervals.
The specific feature of the irregular algorithm (II), the traditional algorithm for filter-clarifiers, is that under the conditions of physicochemical iron removal the amount of non-washable deposit increases and the adsorption capacity of the packed bed decreases with every filter run. Therefore, the calculations of water treatment by filtration cannot be unified in this case, which necessitates determining the corresponding duration t fj at every stage j (a separate operating period). As a result, a long sequence of decreasing relative values t fj (j = 1, 2, ...) is calculated.
The combined algorithm (III), which can be considered a combination of the first two, was analyzed only fragmentarily because of the significant difficulties in automating algorithm II. Each next value t fq was maintained for a number of operating intervals, until either the head losses in the bed or the contamination concentration at its outlet exceeded the limit values Δh * and C * , respectively. Then the accepted value t fq was immediately decreased by 100 units, and the procedure for calculating the desired characteristics at the new value t f,q+1 was repeated, starting from the last calculation interval and continuing up to the next violation of conditions (1) and (2). All predictive technical and economic calculations were terminated when the next value, t fj in the case of algorithm II or the last value t fq in the case of algorithm III (obviously j ≫ q), turned out to be less than the minimal value allowed by regulatory documents (corresponding to the dimensional value t f = 8 h).
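A minimal Python sketch of the combined algorithm III as described above; `run_ok`, which checks criteria (1)-(2) for one interval under the current deposit state, is a hypothetical stand-in for the full model calculation.

```python
def combined_algorithm(run_ok, t_f_start, t_f_min=100, step=100, max_runs=1000):
    """Algorithm III: fixed run durations, each reduced by `step` on violation."""
    t_f, intervals, state = t_f_start, [], None
    while t_f >= t_f_min and len(intervals) < max_runs:
        # Regulatory minimum is 100 relative units, i.e. the dimensional 8 h.
        ok, new_state = run_ok(t_f, state)
        if ok:                     # criteria (1)-(2) held through the interval
            intervals.append(t_f)
            state = new_state      # commit the accumulated deposit state
        else:
            t_f -= step            # shorten the run and redo this interval
    return intervals
```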
The total time of effective operation of the filter material used for iron removal before its replacement is of special interest, primarily due to its high cost. In general, the service life T f of the medium depends on the properties of its material, the filtration conditions and the way the filter is operated. In the first series of examples, the subject of the calculations is precisely the relative service life T f of a bed of a given height made of well-studied filter materials (polystyrene foam, zeolite, crushed granite) under varying conditions (Ψ 0 , C 0 ) and filter operation in accordance with the three algorithms above. In this case, attention should be focused on the regular algorithm, to which the desired value T f is particularly sensitive: discrete or continuous changes (several-fold) in the interval t f caused disproportionately larger (tens-fold) changes in T f . The longest service life in the examples was obtained at t f = 100, Ψ 0 = C 0 = 1 and served as a scale for all other values of T f . Fig. 3 is especially illustrative, since it shows the importance of choosing the right operation algorithm for the service life of one change of the filtering material. Here, a typical (moderate) initial level of water contamination is assumed (C 0 = 1 mg/dm 3 ) and any composition of the contamination is allowed (1 ≥ Ψ 0 ≥ 0). Fig. 3 illustrates, first of all, the impact of the ratio between the two forms of iron in the initial water on the efficiency of the packed bed. To achieve a longer service life, one should, first, minimize the initial amount of ferric iron as much as possible and, secondly, ensure equal lengths of the operating periods. At the same time, it should be remembered that changes in t f lead to approximately inversely proportional changes in T f .
The initial content of iron significantly affects the period T f , as evidenced by Fig. 4. The graphs of the dependence of T f on C 0 correspond to the limiting case Ψ 0 = 1, when the water entering the granular medium contains only ferrous iron, and to different fixed values of t f (algorithm I). Almost the same reduction in T f is observed with an increase in C 0 , at least in the range 2 ≥ C 0 ≥ 1. With severe contamination of the water supplied to the filter and operation according to the regular algorithm, it makes sense to discuss the service life only for short operating intervals. In particular, at C 0 = 3 mg/dm 3 and t f ≥ 300 the filter is generally unable to reduce the inlet concentration to the maximum allowable C * ; therefore, we can formally consider T f = 0. Unlike the regular algorithm, the irregular algorithm II contributes to the long-term operation of one filter material change, owing to the maximal exhaustion of the treating resource during every filter run, also at C 0 = 3 mg/dm 3 . Moreover, an increase in T f should be noted as the groundwater contamination by ferrous iron increases starting from a concentration of 2 mg/dm 3 .
However, a long service life of the medium does not mean low total costs for the physicochemical iron removal. Therefore, a complete joint technical and economic analysis of its functioning is necessary for the development of final recommendations on optimal filter operation. It is essential to consider the capital costs together with the operating costs, and the goal of the technical and economic calculations should be the reduced costs for treating a unit volume of groundwater. Figs. 5 and 6 present the calculated capital (CC), operating (OC) and, as a result, reduced costs (RC) at C 0 = 1.5 mg/dm 3 , allowing a visual comparison of the contributions of both components (CC and OC) to the total costs. Specifically, in Fig. 5 the contamination composition parameter Ψ 0 serves as the argument and makes it possible to visually assess its impact on the cost components when different filter operation methods are used. Three sets of curves are shown: the first corresponds to algorithm I at t f = 600 (curves 1-3), the second to algorithm II (curves 4-6), and the third to algorithm III (curves 7-9).
Fig. 3. Dependence T f (Ψ 0 ) at C 0 = 1 mg/dm 3 .
Fig. 6 refers exclusively to the regular algorithm and demonstrates the consequences of continuously varying the operating interval t f for the different types of costs. It is appropriate to note here that the operating costs of rapid filters for water treatment are in general mainly associated with their regeneration. With standard backwashing of the filter medium by a fixed volume of pure water to remove the iron deposit, the amount of backwashing water is proportional to the number of operating intervals in the service life. As a consequence, the operating costs clearly prevail at small values of t f and the capital costs at large ones. At the same time, they are so significant that RC turns out to be considerably greater than the optimal value, at least in the examples calculated here. All subsequent results refer exclusively to RC as a function of Ψ 0 , C 0 , t f , because in the joint technical and economic analysis the values CC and OC play a subordinate role and serve precisely to establish the reduced costs.
The oxygen in groundwater with a high content of ferrous iron can significantly change the ratio between it and ferric iron in favor of the latter [17,18]. Thus, the formation of a ferruginous deposit in the bed is accelerated and, as a result, the consumption of mechanical energy increases and the service life is ultimately reduced. As a first approximation, the effect of oxygen on the technological process and the efficiency of the filter can be taken into account by selecting a suitable value of Ψ 0 . In the third series of examples, this factor is reflected to the maximum possible extent through its continuous variation from 0 to 1. The results of calculating the reduced costs RC depending on Ψ 0 are summarized in Figs. 7 and 8 and correspond to all three operation algorithms; five characteristic values of t f are chosen in the case of the regular algorithm. The costs RC increase significantly as Ψ 0 decreases when the water contamination at the inlet to the filter is moderate (Fig. 7). If we take the minimal values of RC (at Ψ 0 = 1) as reference values, then the maximal increase is estimated at about 23.5% (algorithm II), 10.9% (algorithm III), and 22…40% or more (algorithm I). The sensitivity of the costs RC to Ψ 0 increases several-fold when the level of the initial contamination is doubled (Fig. 8). Specifically, pre-oxidation of all the ferrous iron will cause an increase in RC by a factor of 4.4 when algorithm II is implemented, by 4.3 with algorithm III, and by 1.54 with algorithm I (t f = 200). It is important to note that the calculated curves in Figs. 7 and 8, corresponding to algorithms II and III and to the different values of t f (algorithm I), are very different. This fact highlights the significance of the value C 0 for the filter operation. The following series of examples allows a direct assessment of the significance of the initial contamination (C 0 ) for the economics of the physicochemical removal of iron.
The concentration of iron, mainly in the ferrous form, varies in the deep groundwaters of Ukraine over a very wide range, from 1 mg/dm 3 to 3 mg/dm 3 , although there are regions with iron concentrations over 20 mg/dm 3 . The value C 0 is characterized by significant variability not only between geological and geographical regions but also within individual geological massifs. The choice of the method, technical means and technological scheme of iron removal depends on the level of contamination of the groundwater with iron compounds. Therefore, when considering the theoretical methods of physicochemical water treatment at rapid filters, attention was focused on the economic basis for the practicability (or impracticability) of using the three selected algorithms for slightly and highly contaminated waters with admixtures of different composition. The reduced costs served as the indicator of the economic efficiency of the measures needed to prepare the filter for functioning and to operate it. A series of figures was calculated, representing in graphical form the functional relationship between RC and C 0 ; the argument C 0 was changed continuously from 1 mg/dm 3 to 3 mg/dm 3 . The detailed calculations of RC covered all the basic values of Ψ 0 , the three algorithms and four values of t f for algorithm I, considering the practical importance of E w (C 0 ). The results of the detailed numerical and analytical calculations of RC, shown in Figs. 9-11, allow us to evaluate the way C 0 impacts the economics of the removal of iron compounds from groundwater under various real physical and technological conditions of filtration. The array of data obtained in this way is also summarized in Table 1, where the form of limitation of the treatment procedure, namely by the head losses (t h ) or by the filtrate contamination (t p ), is specified along with the actual values of RC. In fact, the costs RC increase smoothly and almost linearly with C 0 when the total level of iron in the water is moderate (2 ≥ C 0 ≥ 1 mg/dm 3 ), its initial content is predominantly in the ferrous form (Ψ 0 ≥ 0.5) and the time intervals t f are relatively short. The behavior of the graphs of the dependence RC(C 0 ) changes drastically with severe contamination (3 ≥ C 0 > 2 mg/dm 3 ), a higher initial content of the oxide form (Ψ 0 < 0.5), and also with an extended regular time of continuous operation of the filter (algorithm I).
Fig. 7. Dependence RC(Ψ 0 ) at C 0 = 1 mg/dm 3 .
The significant non-linearity shown by the calculated curves is explained by the filtration conditions being close to critical, which from a physical point of view means either accelerated accumulation of the deposit in the packed bed or an accelerated decrease in its protective ability. The value of RC will grow uncontrollably if C 0 and t f are hypothetically increased further and Ψ 0 decreased. In practice this can be interpreted as a crisis of iron removal by filtration; in other words, the filter is completely unable to function productively and efficiently (see Table 2).
In order nevertheless to achieve the desired result, namely treatment of the required volume of water of standard quality in a given time, it is necessary to reduce the duration of the filter runs, to subject the low-quality filtrate to further treatment, or to improve the design of the filter.
At initial iron concentrations of 2.5-3 mg/dm 3 and Ψ 0 = 1, for operation algorithm II the filter runs are limited first by the time of the protective action of the bed t p (i.e. by the quality of the filtrate) and then by t h (by the head losses) (Table 2). Similar results were obtained for algorithms II and III at an initial iron concentration of 2.5 mg/dm 3 and Ψ 0 = 0.5. A similar change in the nature of the limitation of the technological process during the operation of one medium change also takes place when algorithm III is implemented and Ψ 0 = 0.5. The duration of the filter runs is initially determined predominantly by the outlet concentration of iron in the oxide form, which increases rapidly with time due to the relatively low clogging and adsorption ability of the filter material with respect to ferric iron and its comparatively high initial content. After a significant amount of time, the progressive accumulation of the ferruginous deposit causes a fundamental change in the character of the limitation. Now the consumption of mechanical energy for filtration, as a result of the increased contamination of the porous medium, grows so much that the time t h becomes shorter than t p , despite the significant reduction in the absorption resource of the medium and, accordingly, in the time t p .
An unusual situation develops when algorithm I is applied to strongly contaminated groundwater. In the cases considered in detail (C 0 = 2.5 mg/dm 3 with t f = 600, and C 0 = 3 mg/dm 3 with t f = 300), with only one form of contamination initially present, the maximum allowable head losses in the bed and the iron concentration in the filtrate are exceeded already during the first operating interval. Thus, the filter is initially unable to provide the required result under the given technological conditions. At the same time, with an equal initial content of both forms it can operate normally for a limited period of time: in the first case for 70 intervals (42000 units), in the second case for 107 intervals (32100 units). In the case of the ferrous form of contamination, the operability of the filter can be ensured simply by reducing the interval t f or by using algorithm II, which is more economically efficient. With the oxide form of contamination, more serious measures are necessary, in particular the use of two-stage filters, etc.
If the dependence RC(t f ) is calculated, it is easy to establish the optimal interval t f at which the reduced costs are minimal when the filter is operated in accordance with algorithm I (and possibly also algorithms II and III). Fig. 12 shows three graphs of this dependence, which correspond to the basic values of Ψ 0 and moderate contamination of the water supplied to the filter (C 0 = 1 mg/dm 3 ). It is obvious that the optimal relative value of t f depends on Ψ 0 . Thus, to reduce the cost of the treated water to the minimum using algorithm I for a given initial ratio between the forms of iron, it is enough to specify the function RC(t f ); the approximate optimal values of t f and RC correspond to the lowest point on this curve. Three values of t f corresponding to the condition RC = min are obtained in this case: t f = 800 at Ψ 0 = 1, t f = 700 at Ψ 0 = 0.5, and t f = 630 at Ψ 0 = 0. It is logical that the range of practical values of t f narrows as C 0 increases and Ψ 0 decreases. Finally, this procedure of applied optimization of the technological process becomes impractical if C 0 ≥ 2.5 mg/dm 3 and Ψ 0 = 0, since the filter cannot cope with such high levels of contamination.
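Finding the optimum then reduces to a one-dimensional search over RC(t f ); a grid-scan sketch follows, with the hypothetical `reduced_cost_for` standing in for the full step-by-step simulation.

```python
def optimal_interval(reduced_cost_for, t_f_grid):
    """Pick the fixed run duration minimizing the reduced costs RC."""
    costs = {t_f: reduced_cost_for(t_f) for t_f in t_f_grid}
    t_opt = min(costs, key=costs.get)
    return t_opt, costs[t_opt]

# Example: scan relative durations from 100 to 1200 in steps of 10.
# t_opt, rc_min = optimal_interval(rc_model, range(100, 1210, 10))
```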
The comprehensive picture of the technological and economic indicators obtained as a result of purposeful, long-term, numerous calculations using the computer analysis package Mathcad deserves detailed discussion and comment. The complications that arose when calculating a large number of examples under normal and extreme filtration conditions were mainly due to the choice of an overestimated duration of the operating interval t f in the regular (first) filter operation algorithm. In the worst case, one of the relations t f > t h or t f > t p then held and the filter immediately turned out to be inefficient. In a slightly better, but also practically undesirable, case an abrupt failure of the filter and, as a result, the need for early replacement of the filter material were noted. It was established that algorithm I with an ultra-long interval t f = 1200 can only be used for the ferrous form of the initial water contamination. However, the significant reduction in the operating costs at such t f does not significantly reduce the total costs; moreover, there was even a noticeable increase in RC in comparison with the variant t f = 900. It should be emphasized that, based on previous experience, at high levels of groundwater contamination there is always a high probability that the treating resource of the filter will be insufficient in principle to solve the problem (it being implied that the obtained filtrate must be of high quality). Therefore, for a packed bed with the properties specified according to Ref. [19], the filter is initially unable to function in accordance with the established standards when algorithm I is implemented, Ψ 0 = 0 (oxide form of contamination) and C 0 ≥ 2.5 mg/dm 3 . The situation can be resolved in different ways. In particular, when the protective ability of the filter is low, especially if Ψ 0 = 0...0.5, it is advisable to increase the height of the bed accordingly or to reduce the (equivalent) size of its elements (grains). The next article in our series on the physical and mathematical modeling of physicochemical removal of iron from deep groundwater is planned to be dedicated to the rational choice of the main design factors.
Moreover, a significant initial content of ferric iron can also make algorithms II and III practically unsuitable. Indeed, at C 0 ≥ 2.5 mg/dm 3 , in the absence of the adsorption process that prevents the transfer of iron (C 0a = 0), the initial breakthrough of a great number of oxide particles means that their concentration immediately exceeds the maximum allowable value C * . In general, the expenses for iron removal under the considered filter operation algorithms are comparable. However, a reasonable choice of the operation algorithm, and especially of the operating interval in the case of algorithm I, based on the specific filtration conditions, is important. According to the results of numerous technical and economic calculations with typical initial data, it can be concluded that with a predominant content of ferrous iron in the groundwater and relatively low contamination (C 0 = 1 mg/dm 3 ) the cost of clean water is 12% lower in the case of algorithm I, but only with a reasonable choice of t f (in the case under study, from 48 to 98 h, with the optimum at 72 h). An increase or decrease of t f by one day results in a 73% increase in RC.
Naturally, at the same iron removal standards, the cost of water treatment becomes higher as the level of contamination increases, and noticeably faster so when algorithm I is implemented. Thus, with algorithm II pure water will be 16% cheaper even at C 0 ≥ 2.5 mg/dm 3 . The cost of water treatment increases significantly with an increase in the proportion of oxide iron; here algorithm II allows savings of 9…15%. Certainly, in the final choice of the operation algorithm one should also keep in mind the additional difficulties with the control of the filtrate quality and the head losses in the filter, which is necessary when operating according to algorithm II and partially according to algorithm III. Mathematical modeling methods can be of great help in such a situation.
Based on the data derived from numerous calculations, it is justified to conclude that the role of an excessive oxygen concentration in iron removal from groundwater is negative from an economic point of view. The decrease in Ψ 0 , first to 0.5 and then to 0, in all examples inevitably led to a noticeable increase in RC (from several percent to several tens of percent). In certain situations, for example in the case of algorithm I and C 0 = 2 mg/dm 3 , a decrease in Ψ 0 to 0 led to a several-fold increase in the cost of water treatment. Particular attention should be paid to the cases of partial or complete incapacity of the filters when the initial content of ferric iron is high. If Ψ 0 = 0, then at C 0 ≥ 2.5 mg/dm 3 all three algorithms are unsuitable: in principle, the required quality of the filtrate cannot be achieved under the considered technological conditions, owing to the rapid depletion of the adsorption capacity of the filter material. However, the situation changes drastically already at Ψ 0 = 0.5, where algorithm II makes it possible to remove iron from water at a moderate cost. Algorithm I provides the required quality and volume of pure water at an unnecessarily higher cost, but only when t f is selected correctly; if the values of t f are too high, the iron content in the filtrate can immediately exceed the values allowed by the standards.
Conclusions
1. It is advisable to conduct the optimization calculations for filtration at a constant rate taking into account the specifics of the physicochemical removal of iron from groundwater and using three criteria. They impose restrictions on the iron content in the filtrate (quality criterion), the head losses in the deep bed (hydraulic criterion) and the deposit strength (consolidation criterion). The joint use of these criteria makes it possible to conduct technical and economic calculations taking into account not only the operating costs but also the capital costs. The developed calculation method allows comprehensive determination of the reduced costs for treating groundwater with an excess iron content, considering the service life of the bed and, accordingly, the cost of its replacement and the costs of the regular backwashings of one medium change. As a result of systematic long-term calculations of examples with typical initial data, the patterns of change of the capital, operating and reduced costs were established depending on the concentration and ratio of the iron forms (ferrous/ferric) in the initial water and on the three algorithms for controlling the operation of the filter.
2. The service life of the filter medium is an important indicator of the working ability of filter-clarifiers. This period depends on the properties of the filter material, the filtering conditions, the filter control method, etc. The ratio of the iron forms in the raw water significantly affects the efficiency of iron removal by filtration. It is possible to significantly increase the serviceability time of one change of the bed material by reducing the content of ferric iron in the initial water. The service life of the medium decreases as the initial iron concentration increases. With the regular filter control algorithm, a longer service life of the bed can be achieved with shorter filter runs, as a result of the weaker deposit consolidation during every filter run. With the irregular filter control algorithm, the treating resource is maximally depleted throughout the entire sequence of filter runs, which contributes to the long-term operation of one filter material change.
3. In the case of low concentrations of iron in the raw water, the service life of the medium is limited by the rapid decrease in the capacity of the bed (hydraulic criterion). When the initial iron concentration increases, this period becomes limited by the time of the protective action of the bed (first for ferric iron), regardless of the filter control algorithm.
4. Using the regular filter control algorithm, the operating costs decrease with increasing duration of the filter run, owing to a decrease in the total number of backwashings per filtering-material change. At the same time, the capital costs of replacing one bed change increase, because of its shorter service life. A clearly defined extremum is characteristic of the behavior of these indicators. For example, when Ψ0 = 1.0 and C0 = 1.5 mg/dm³, the lowest cost of treated water is obtained when the regular duration of the filter runs is 700 units (the optimum).

5. A detailed analysis of the dependence of the reduced costs on the content and composition of contamination and on the method of controlling the technological process leads to the conclusion that the reduced costs increase significantly with a decrease in the percentage of ferrous iron in the case of the regular algorithm and a moderate iron content at the filter inlet. In general, the initial iron concentration has a significant impact on the value of the reduced costs. The sensitivity of the reduced costs to the contamination composition parameter Ψ0 increases many-fold as the latter grows. The reduced costs rise non-linearly as the level of contamination and the proportion of ferric iron in it increase. In such cases, the efficiency of the filter drops sharply, down to a critical level (the filter is initially unable to function; the regular duration of the filter run is less than the minimum allowable). In such situations it is necessary to improve the design of the filter, apply multi-stage filtration, etc.
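The extremum described in conclusion 4 can be illustrated with a toy calculation. The sketch below scans candidate filter-run durations for the minimum of the total reduced costs; the two cost functions are illustrative assumptions chosen only to reproduce the qualitative trade-off (falling operating costs, rising capital costs), not the paper's actual cost model.

```python
import numpy as np

# Hypothetical sketch of the trade-off in conclusion 4: operating costs
# fall with longer filter runs (fewer backwashings per bed change),
# while capital costs rise (shorter bed service life). Both cost curves
# below are invented placeholders, not the paper's model.

t_f = np.linspace(100, 1500, 141)        # candidate run durations, arbitrary units
operating = 4000.0 / t_f                  # assumed: backwashing cost ~ 1/t_f
capital = 0.008 * t_f                     # assumed: bed-replacement cost grows with t_f
reduced_costs = operating + capital       # total "reduced costs" RC

optimum = t_f[np.argmin(reduced_costs)]
print(f"toy optimum filter-run duration: {optimum:.0f} units")
```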
Table 1
Values of RC and nature of the technological process limitation (t_h or t_p) for the operation algorithm I.
Table 2
Values of RC and nature of the technological process limitation (t_h or t_p) for the operation algorithms II and III | 2023-09-17T15:13:45.183Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "b1cd6e29928c49371dc7aed2b7daf34220870c29",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844023074108/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "490f7f7c183d85e82633333d47c7ddbc7fb16f5e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
254181490 | pes2o/s2orc | v3-fos-license | Bright and sensitive red voltage indicators for imaging action potentials in brain slices and pancreatic islets
Genetically encoded voltage indicators (GEVIs) allow the direct visualization of cellular membrane potential at the millisecond time scale. Among these, red-emitting GEVIs have been reported to support multichannel recordings and manipulation of cellular activities with reduced autofluorescence background. However, the limited sensitivity and dimness of existing red GEVIs have restricted their applications in neuroscience. Here, we report a pair of red-shifted opsin-based GEVIs, Cepheid1b and Cepheid1s, with improved dynamic range, brightness, and photostability. The improved dynamic range is achieved by a rational design that raises the electrochromic Förster resonance energy transfer efficiency, and the higher brightness and photostability are achieved with separately engineered red fluorescent proteins. With Cepheid1 indicators, we recorded complex firing and subthreshold activities of neurons on acute brain slices and observed heterogeneity in the voltage-calcium coupling in pancreatic islets. Overall, Cepheid1 indicators provide a powerful tool to investigate excitable cells in various sophisticated biological systems.
INTRODUCTION
Fluorescent indicators are powerful tools for spatiotemporally resolved mapping of cellular activities. Genetically encoded voltage indicators (GEVIs) allow noninvasive readout of membrane potential changes of large neuronal ensembles at the single-neuron level, thus enabling millisecond-time scale recording of neuronal activities, including subthreshold potentials (1). A common challenge associated with voltage imaging is its high noise level due to limited photon counts, which arises from a combination of the high acquisition frame rate (typically at 0.5 to 2 kHz) and the low copy number of the membrane-embedded sensor protein (1). This high level of imaging noise has a substantial impact on the signal-to-noise ratio (SNR), particularly for voltage imaging in vivo, which is further complicated by the presence of tissue autofluorescence and light scattering (2). For this reason, GEVIs with red-shifted emission spectra are highly sought after, since they avoid much of the autofluorescence window and suffer less from scattering. Moreover, a brighter fluorophore could increase the baseline fluorescence signal, which, in turn, promotes the SNR.
Here, we report a pair of red eFRET GEVIs, Cepheid1b and Cepheid1s, with improved voltage response, brightness, and photostability, which enable voltage imaging in mouse brain slices with laser power lower than 2 W/cm². Both Cepheid indicators support multiplexed imaging with green fluorescent indicators for calcium [GCaMP6s (17)] or glutamate [iGluSnFR (18)]. They are also capable of pairing with CheRiff (7), a spectrally orthogonal blue-shifted optogenetic actuator, to achieve all-optical electrophysiology measurements of neuronal excitability. We further demonstrate that Cepheid indicators report APs and subthreshold potentials ex vivo and in vivo. On acute brain slices, Cepheid1b simultaneously recorded burst firing and subthreshold depolarization activities in dozens of neurons. On pancreatic islet tissue, we noninvasively observed glucose-stimulated correlations between electrical spiking and calcium oscillations in multiple cells simultaneously by dual-color imaging using Cepheid1b and GCaMP6f.
AlphaFold2-aided design of GEVI scaffolds
We have shown previously that the voltage responsiveness of eFRET GEVIs depends critically on the baseline FRET efficiency (E_FRET) between the fluorescent donor and the retinal acceptor (16, 19). For engineering red eFRET GEVIs, we sought to maximize the E_FRET through a combination of approaches. First, we chose the red-shifting Asp81Cys mutation of Ace rhodopsin (AceD81C) as the voltage-sensing module to maximize its spectral overlap with RFP emission (16). Second, we applied AlphaFold2 (20) computational structural modeling to guide our optimization of the donor-acceptor distance R and the orientation factor κ², which are quantitatively linked to E_FRET as described in Eq. 1: E_FRET = 1/[1 + (R/R₀)⁶], where the Förster radius R₀ scales with κ². Whereas previous eFRET GEVI designs have focused exclusively on the C-terminal fusion of fluorescent protein donors, AlphaFold2 (20) predicts a 4- to 12-Å shorter donor-acceptor distance and substantially higher κ² when the RFP donor is inserted into the first extracellular loop (ECL1) of the Ace rhodopsin (Fig. 1A, fig. S1, and table S1). Alternatively, insertion into the third intracellular loop (ICL3) is also predicted to improve the overall E_FRET. However, with limitations of AlphaFold2, including the inability to predict non-amino acid fluorophores and the lack of dynamic simulation, the above computational predictions require subsequent experimental evaluations.
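To make the dependence in Eq. 1 concrete, the short sketch below evaluates FRET efficiency for a few distance/orientation combinations. The baseline Förster radius (55 Å for the isotropic average κ² = 2/3) and the specific R and κ² values are illustrative assumptions, not measured parameters of the Cepheid constructs.

```python
# Sketch of Eq. 1: E_FRET = 1 / (1 + (R / R0)**6), with R0**6 scaling
# linearly in the orientation factor kappa^2. r0_iso is an assumed
# Förster radius for the isotropic average kappa^2 = 2/3.

def e_fret(r_angstrom: float, kappa_sq: float, r0_iso: float = 55.0) -> float:
    r0 = r0_iso * (kappa_sq / (2.0 / 3.0)) ** (1.0 / 6.0)  # rescale R0 for kappa^2
    return 1.0 / (1.0 + (r_angstrom / r0) ** 6)

# A 4-12 Å shorter distance and a higher kappa^2 both raise E_FRET:
for r, k2 in [(50.0, 0.4), (42.0, 0.4), (42.0, 1.2)]:
    print(f"R = {r:4.1f} Å, kappa^2 = {k2:.1f} -> E_FRET = {e_fret(r, k2):.2f}")
```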
Multiplexed recording and manipulation of neuronal activity
The red-shifted spectra of Cepheid1 indicators allow combination with the photosensitive cation channel CheRiff (7) for all-optical electrophysiology. Patch clamp tests revealed minimal photocurrent of Cepheid1b and Cepheid1s under various illumination conditions (fig. S10), with one exception: when co-illuminated with 405 and 561 nm, both GEVIs produced a depolarizing current of ~20 pA. It has been reported that CheRiff can be partially excited by 561-nm illumination at an intensity of 1.5 W/cm² (6), eliciting a photocurrent of approximately 30 pA. Therefore, in neurons coexpressing Cepheid1b and CheRiff via a self-cleaving 2A peptide, we tested the photocurrent and depolarization caused by 561-nm cross-talk (fig. S11). Under 561-nm illumination at imaging level, we observed a substantially lower photocurrent of ~10 pA and a depolarization of ~2.5 mV. The lower photocurrent may be attributed to the lower expression of CheRiff when coexpressed with Cepheid1b via 2A peptide linkage (27). A leakage at this low level is insufficient to trigger unwanted firing or opening of ion channels and is therefore compatible with all-optical recording of neuronal activities.
In cultured rat hippocampal neurons coexpressing CheRiff and Cepheid1b/s-ST, we applied a 405-nm laser to stimulate AP firing while simultaneously imaging membrane voltage (Fig. 2A). Under various stimulation patterns, Cepheid1b-ST faithfully recorded tonic APs and burst firing (Fig. 2B). We tested the refractory period of neuronal firing with 10 s of ramped, increasing optogenetic stimulation strength. The firing rate during the trial can be overlaid on the optogenetic stimulation dosage, and the maximum firing rate appears as a plateau (fig. S12). We then conducted long-term imaging with the photostable Cepheid1s-ST. We found that a few minutes of recovery after continuous recording can notably restore the fluorescence level and reduce phototoxicity. Using this method, Cepheid1s-ST can faithfully report optogenetically induced APs for more than 15 min cumulatively (fig. S13).
In some cases, 405-nm light directly causes an observable artifact in the Cepheid fluorescence level (fig. S11), which explains the baseline fluctuations in the ramp experiments (figs. S12 and S13).
We further paired Cepheid1b/s-ST with green-emitting indicators to simultaneously record membrane potential together with other neuronal signals such as cytoplasmic calcium and extracellular glutamate. We coexpressed Cepheid1b/s-ST with GCaMP6s (17) in primary rat hippocampal neurons (Fig. 2C and fig. S14). The high photostability of Cepheid1s-ST allowed continuous imaging of time-correlated voltage and calcium spikes for 5 min (Fig. 2D). We also applied Cepheid1b-ST and SF-iGluSnFR (28) to record voltage-induced glutamate release (Fig. 2, E and F, and fig. S15). In addition, the high brightness of Cepheid1b and its spectral orthogonality with GCaMP6f have enabled us to sensitively record membrane potential and cytoplasmic calcium in cultured mouse pancreatic islet cells, which unveils time-correlated spontaneous calcium waves and AP spikes that are reminiscent of neuronal activities (Fig. 2, G and H).
Simultaneous recording of multiple neurons on acute brain slices
We performed voltage imaging in acute mouse brain slices to assess the ex vivo performance of Cepheid1. We introduced Cepheid1b-ST into mouse brain via adeno-associated virus (AAV) infection, which showed good expression and membrane trafficking across multiple brain regions, including cortex, hippocampus, and cerebellum (Fig. 3A). In acute brain slices, the high sensitivity of Cepheid1b-ST (ΔF/F₀ = −9.9 ± 0.3% per AP; Fig. 3B) allowed faithful reporting of APs in single-trial measurements at a frame rate of 498 Hz. Cepheid1b-ST also resolved current-induced subthreshold activity and burst APs at 14 Hz with a sensitivity of ~−10% ΔF/F₀ (Fig. 3C).
To simultaneously resolve the electrophysiology across a large neuronal population, we expressed Cepheid1b-ST in the whole brain through delivery with the AAV-PHP.eB vector, which can cross the blood-brain barrier. In the thalamus, Cepheid1b-ST enabled the simultaneous recording of spontaneous neuronal AP dynamics in 23 cells across a large field of view measuring 504 μm by 504 μm (Fig. 3D). The expression of Cepheid1b-ST throughout the brain facilitates the collection of signals from multiple brain areas, such as the thalamus (Fig. 3D), habenula (fig. S16), and hippocampus (fig. S17), in a single trial. Both the resting state and the excitability of neurons were captured, and neuronal spiking, including tonic and burst APs, was recorded with an SNR of ~10.
Imaging of electric-calcium coupling in pancreatic islets
In mammalian pancreatic islets, elevated glucose concentration initiates electrical activity in β cells via the concerted actions of the Glut2 transporter, K_ATP channels, and voltage-gated Ca²⁺ channels (29). The opening of voltage-gated Ca²⁺ channels forms the basis of AP firing, increasing intracellular calcium levels to trigger the exocytosis of insulin granules (30). Notably, glucose-induced oscillations of intracellular Ca²⁺ concentration have been observed in mammalian pancreatic islets (31-34), and patch clamp has been used to study the relationship between membrane potential and [Ca²⁺]_i (35). However, simultaneous noninvasive recording of membrane potential and [Ca²⁺]_i in multiple cells has remained challenging. Furthermore, while electrical coupling between β cells in mammalian islets (36) and oscillatory membrane potentials in individual pancreatic β cells (37) have been measured using patch clamp, the spatial patterns of this coupling and these oscillations have not been recorded because of the lack of high-throughput detection methods. Moreover, since patch clamp can only access cells on the islet surface, which are mainly β cells (38), cells deeper in the islet have eluded previous analysis.
The high brightness of Cepheid1b and its spectral orthogonality with GCaMP6f enable simultaneous recording of membrane potential and cytoplasmic calcium with higher throughput and better access to cells deeper in the islet. We thus infected islets from GCaMP6f⁺/⁺ mice with adenovirus encoding Cepheid1b and performed dual-color imaging in isolated pancreatic islets. Upon switching from low (3 mM) to high (10 mM) levels of extracellular glucose, we observed highly heterogeneous patterns of electrical activity in individual islet cells, whereas their calcium signal increased gradually and synchronously over a time course of minutes (Fig. 4A). The electrical activities in individual cells appeared uncorrelated with the calcium signal. Following 30 to 60 min of incubation in high-glucose medium, three types of calcium oscillations emerged: fast (~20-s cycle), slow (>100-s cycle), and mixed (20- to 300-s cycle), consistent with previous observations (31-34). The electrical activities of islet cells spontaneously become highly synchronized and time-correlated with all three types of calcium oscillations (Fig. 4, B to D). In addition, in a few cases of very fast calcium oscillations (1- to 2-s cycle), electrical activity appeared highly heterogeneous, indicating relatively weak electrical coupling in this type of calcium oscillation (Fig. 4E). Together, the above data demonstrate the power of simultaneously recording electrical and calcium activities in multiple cells in pancreatic islets.
DISCUSSION
To summarize, we have reported a pair of red GEVIs with an improved voltage dynamic range of 33% −ΔF/F₀ per 110 mV, the highest among red GEVIs reported to date. The high dynamic range of Cepheid1 allows sensitive detection of APs at a 500-Hz frame rate in both cultured neurons (ΔF/F₀ = −12%) and brain slices (ΔF/F₀ = −10%) under mild illumination with a 561-nm laser at less than 2 W/cm². While Cepheid1b is useful for detecting both subthreshold potentials and APs as the brighter indicator, Cepheid1s enables long-term imaging owing to its high photostability. Both GEVIs support multiplexed imaging with green fluorescent indicators and all-optical electrophysiology. Currently, applications of Cepheid indicators are likely restricted to the one-photon imaging regime, a limitation shared by all opsin-based GEVIs developed to date. However, we demonstrated that Cepheid1b faithfully reports spike firing in acute mouse brain slices and pancreatic islets, showing its capability for tissue imaging and potential in vivo applications.
Unlike previous engineering efforts that focused exclusively on C-terminal RFP fusions, we achieved higher voltage responsiveness through insertion into ECL1. This design leveraged the predictive power of computational modeling of chimeric protein structures. In recent years, artificial intelligence models such as AlphaFold or Rosetta have begun to show their usefulness in understanding protein function and assisting protein design (20, 39, 40). In this work, we also harnessed the power of AlphaFold2 in predicting unknown protein structures and thus estimating the performance of our voltage indicators. However, despite the intuition and inspiration we acquired with AlphaFold2, some intrinsic properties of the model limited the reliability of the predictions. First, AlphaFold2 can only take an amino acid sequence as input, and non-amino acid molecules such as fluorophores can only be aligned onto the predicted tertiary structure using crystallography data, which undoubtedly lowers the accuracy (20). Second, the stationary output also biases the assessment of FRET efficiency, which, in the real case, is determined by a dynamic ensemble. Therefore, future improvement would likely involve combining structural prediction models with molecular dynamics simulations.
We envision that the sensitivity of Cepheid1 indicators could be further optimized through directed evolution. On the one hand, the mutation we introduced into the Ace rhodopsin domain is directly inherited from a previous work in which a short peptide was inserted into the ECL1 region (16). This mutation might not be optimal for our scaffold, and we expect further improved sensitivity from future library screening; meanwhile, with several published works providing abundant mutation library data for Ace rhodopsin, engineering a positive-responding GEVI is also a promising direction (13, 41). On the other hand, next-step engineering of mScarlet-I1.4 is also vital for better performance. We inserted the original mScarlet-I1.4 without circular permutation, which might have compromised fluorescence intensity compared with C-terminal fusion. To improve this, we propose optimization of the RFP insertion site, the flanking linker sequences, and the circular permutation of mScarlet-I1.4.
Cepheid indicators have enabled the visualization of electrical coupling in multiple pancreatic islet cells, revealing synchronized oscillatory membrane potentials that are time-correlated with slow and fast calcium oscillations. The synchronization in membrane potential is substantially weaker either in the absence of calcium oscillation or at very fast calcium oscillation. Future investigation into the underlying mechanism of calcium-sensitive electrical coupling could benefit from voltage imaging-guided single-cell sequencing to analyze the heterogeneous responses among islet cells. Furthermore, the genetically encoded nature of Cepheid indicators allows labeling of specific types of pancreatic islet cells, providing a possible way to resolve the debate about whether glucose depolarizes or hyperpolarizes islet α cells (42).
Materials and reagents
The reagents used in this study are summarized in table S3. All animal procedures were approved by the Animal Center of Peking University, and the experiments were carried out in accordance with the guidelines of the Institutional Animal Care and Use Committee of Peking University.
Molecular cloning
Plasmids used in this study were generated by Gibson Assembly, ligating PCR-amplified insert DNA fragments and linearized vectors with 25-base pair overlaps. The primers for polymerase chain reaction (PCR) are summarized in table S4. DNA fragments and linearized vectors were mixed with Gibson Assembly enzyme (Lightening Cloning Kit). The sequences of the plasmids were verified by Sanger sequencing.
Computational modeling of FRET efficiency
The structures of Cepheid1b and Cepheid1s were predicted with AlphaFold2. Non-amino acid chromophores were manually added via structural alignment of the predicted structures against the crystallography data of the separate protein domains: Acetabularia rhodopsin II [Protein Data Bank (PDB) ID: 3AM6], mScarlet (PDB ID: 5LK4), and mRuby (PDB ID: 3U0L) as proxies of mScarlet-I1.4 and mRuby4, respectively. The coordinates of the fluorophores were used for calculating the distance R (defined as the distance between the centers of mass of donor and acceptor) and the orientation factor κ², defined as κ² = [D · A − 3(D · R̂)(A · R̂)]², where R is the vector connecting the centers of mass of donor and acceptor (with unit vector R̂ = R/|R|), and D and A are the normalized longitudinal axes of the donor and acceptor, respectively.
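As a concrete illustration of this geometry, the sketch below computes R and κ² from three-dimensional coordinates. The coordinates and axis vectors are arbitrary placeholders, not values extracted from the actual PDB structures.

```python
import numpy as np

# Sketch: compute donor-acceptor distance R and orientation factor
# kappa^2 = (D.A - 3(D.R_hat)(A.R_hat))^2 from chromophore geometry.
# All vectors below are arbitrary placeholders, not PDB-derived values.

def unit(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

donor_com = np.array([0.0, 0.0, 0.0])       # donor center of mass (Å)
acceptor_com = np.array([30.0, 10.0, 5.0])  # acceptor center of mass (Å)
d_axis = unit(np.array([1.0, 0.5, 0.0]))    # donor longitudinal axis
a_axis = unit(np.array([0.2, 1.0, 0.3]))    # acceptor longitudinal axis

r_vec = acceptor_com - donor_com
r = np.linalg.norm(r_vec)
r_hat = r_vec / r
kappa_sq = (d_axis @ a_axis - 3.0 * (d_axis @ r_hat) * (a_axis @ r_hat)) ** 2

print(f"R = {r:.1f} Å, kappa^2 = {kappa_sq:.3f}")
```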
Expression of Cepheid1 in cultured cells
HEK293T cells were seeded in a 24-well plate and incubated in Dulbecco's modified Eagle medium (Gibco) containing 10% (v/v) fetal bovine serum (Gibco) at 37°C with 5% CO₂. Cells were transfected with Lipofectamine 3000 reagent following the manufacturer's instructions. Primary rat hippocampal neurons were dissociated from dissected rat brains at postnatal day 0 and plated onto sterile 14-mm glass coverslips precoated with poly-D-lysine (20 μg/ml) and laminin (10 μg/ml). Neurons were incubated at 37°C with 5% CO₂ and were transfected on DIV7 (7 days in vitro) to DIV9 with Lipofectamine 3000 following the manufacturer's instructions. Transfected neurons were imaged after 3 to 7 days.
Imaging apparatus
Fluorescence microscopy in cultured cells and pancreatic islets was performed on an inverted microscope (Nikon-TiE) equipped with a 40×, numerical aperture (NA) 1.3 oil immersion objective lens, three laser lines (Coherent OBIS; 405, 488, and 561 nm), a spinning disk confocal unit (Yokogawa CSU-X1), and two scientific complementary metal-oxide semiconductor (sCMOS) cameras (Hamamatsu ORCA-Flash 4.0 v2). A dual-view device (Photometrics DV2) was used to split the emission into green/red fluorescence channels. Fluorescence imaging experiments in acute slices were performed on an upright microscope (Olympus BX51WI) equipped with a 40×, NA 0.8 water immersion objective lens, a 561-nm laser line (Coherent OBIS), and an sCMOS camera (Hamamatsu ORCA-Flash 4.0 v2). The spectral properties of the filters and dichroic mirrors for the various fluorescent indicators used in this study are summarized in table S5.
Fluorescence voltage imaging in HEK293T cells and cultured neurons
To measure the dynamic range and kinetics of the voltage indicators in HEK293T cells, the membrane potential was controlled via whole-cell patch clamp (Axopatch 200B, Axon Instruments). Fluorescence imaging was performed at a frame rate of 1058 Hz. To measure the response of the indicators to APs, neurons were current-clamped and injected with 200 to 500 pA of current for 5 to 10 ms to stimulate AP firing. Fluorescence imaging was performed at a frame rate of 484 Hz. For simultaneous imaging of membrane voltage and calcium, Cepheid1 and GCaMP6s-nuclear export signal (NES) were cotransfected into neurons at DIV8. On DIV14 to DIV18, neurons were illuminated with 488-nm (2.4 W/cm²) and 561-nm (1.6 W/cm²) lasers and imaged with a dual-view device at a frame rate of 500 Hz. For simultaneous imaging of membrane voltage and glutamate, Cepheid1 and iGluSnFR were cotransfected into neurons on DIV8 and imaged on DIV14 to DIV19 in dual-view mode (2.4 W/cm² of 488-nm laser and 1.6 W/cm² of 561-nm laser) at a frame rate of 500 Hz.
Intracerebroventricular injection and acute slice measurements

C57BL/6N mouse lines were purchased from Charles River. The AAV vector expressing Cepheid1b-ST (AAV2/PHP.eB-hsyn-Cepheid1b-ST) was custom-produced by the Chinese Institute for Brain Research. For each 6- to 7-week-old mouse (without regard to sex), 3 μl of AAV2/PHP.eB-hsyn-Cepheid1b-ST (4.8 × 10¹² genome copies/ml) was injected into the lateral ventricle. The coordinates for intracerebroventricular injection (in millimeters from Bregma) were anteroposterior −0.58 to −0.59, mediolateral 1.35 to 1.4, and dorsoventral 1.8 to 1.9. Acute slices were prepared from 9- to 11-week-old mice (at least 3 weeks after AAV injection). Each mouse was deeply anesthetized via isoflurane inhalation and rapidly decapitated. The brain was dissected from the skull and placed in ice-cold artificial cerebrospinal fluid (ACSF) containing 26 mM NaHCO₃, 1.25 mM NaH₂PO₄, 125 mM NaCl, 2.5 mM KCl, 2 mM CaCl₂, 1 mM MgCl₂, and 20 mM glucose (295 mosmol/kg; pH 7.3 to 7.4), saturated with carbogen (95% O₂ and 5% CO₂). The brain was sliced into 250-μm sections with a Leica VT1200s vibratome. Slices were incubated for 45 min at 34.5°C in ACSF and maintained at room temperature (22°C). ACSF was continuously bubbled with carbogen for the duration of the preparation and the subsequent experiment.
For fluorescence imaging and electrophysiology recording, slices were placed in a custom-built chamber and held down by a platinum harp net stretched across them. Carbogen-bubbled ACSF was perfused at a rate of 4 ml/min with a Longer peristaltic pump. Fluorescence images were captured at a frame rate of 400 to 500 Hz under 561-nm illumination (2 W/cm²). The intracellular solution for whole-cell patch clamp contained 105 mM potassium gluconate, 30 mM KCl, 4 mM Mg-adenosine 5′-triphosphate, 0.3 mM Na₂-guanosine 5′-triphosphate, 0.3 mM EGTA, 10 mM Hepes, and 10 mM sodium phosphocreatine (295 mosmol/kg; pH 7.3). Electrophysiology measurements were acquired with a MultiClamp 700B amplifier (Molecular Devices).
Isolation of mouse pancreatic islets and infection of Cepheid1b
Islets of Langerhans were isolated from GCaMP6f⁺/⁺ mice. After isolation, the islets were cultured overnight at 37°C in a 5% CO₂-humidified air atmosphere in RPMI 1640 medium containing 10% fetal bovine serum, 8 mM D-glucose, penicillin (100 U/ml), and streptomycin (100 mg/ml). The islets were infected with the adenovirus pAdeno-CMV-Cepheid1b by 1.5-hour exposure in 100 μl of culture medium, at approximately 4 × 10⁹ plaque-forming units per islet, followed by addition of regular medium and further culture for 16 to 20 hours before use.
Dissociation into islet single cells
Freshly isolated islets were washed with Hanks' balanced salt solution and subsequently digested with 0.025% trypsin-EDTA for 3 min at 37°C, followed by brief shaking. Digestion was stopped by addition of culture medium, and the solution was centrifuged at 94g for 5 min. The cells were resuspended in RPMI 1640 culture medium. The cell suspension was plated on coverslips in poly-L-lysine-coated glass-bottom dishes (D35-14-1-N, Cellvis) or a microfluidic chip. The dishes or chips were then kept for 60 min in the culture incubator at 37°C and 5% CO₂ to allow cells to adhere. Additional culture medium was then added, and the cells were cultured for 24 hours before the imaging experiments.
Voltage imaging in mouse pancreatic islets and islet single cells
For simultaneous imaging with Cepheid1b and GCaMP6f, mouse pancreatic islets and mouse islet single cells were illuminated with 488- and 561-nm lasers at 0.2 to 0.5 W/cm² and 0.9 to 1.8 W/cm², respectively, and continuously imaged for 60 to 300 s at a camera frame rate of 200 Hz. The samples were kept at 37°C on the microscope stage during imaging. For mouse islets, a polydimethylsiloxane microfluidic chip was used to provide a stable and controllable environment for long-term imaging. The reagents were automatically pumped into the microfluidic chip at a flow rate of 800 μl/hour by a TS-1B syringe pump (LongerPump). Before imaging, the chip and all solutions were degassed with a vacuum pump for 5 min to achieve stable hour-long imaging. The microfluidic chip was pre-filled with KRBB solution [125 mM NaCl, 5.9 mM KCl, 2.56 mM CaCl₂, 1.2 mM MgCl₂, 1 mM L-glutamine, 25 mM Hepes, and 0.1% bovine serum albumin (pH 7.4)] containing 3 mM D-glucose before use.
Data analysis
Most fluorescence images and electrophysiology recordings were analyzed with home-built software written in MATLAB (MathWorks, version R2022a). The fluorescence images obtained in acute slices were preprocessed with a Python-based voltage imaging data analysis package (VolPy). Fluorescence intensities were extracted as the mean values over a manually drawn region of interest around the soma of each labeled cell. Statistical analysis was performed with Origin (version 2019b) and R (version 4.2.0).
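A minimal sketch of this ROI-based trace extraction is shown below: the mean fluorescence per frame is converted to a ΔF/F₀ trace with a simple baseline estimate, and a rough SNR is reported. The array shapes, the percentile baseline, and the noise estimator are illustrative assumptions, not details of the authors' MATLAB/VolPy pipeline.

```python
import numpy as np

# Sketch of ROI-based trace extraction: movie is a (frames, y, x) stack and
# roi_mask is a boolean (y, x) mask drawn around one soma. Baseline F0 is a
# low percentile of the trace (an assumption, not the paper's exact choice).

def delta_f_over_f(movie: np.ndarray, roi_mask: np.ndarray, baseline_pct: float = 20.0):
    trace = movie[:, roi_mask].mean(axis=1)          # mean ROI intensity per frame
    f0 = np.percentile(trace, baseline_pct)          # crude baseline estimate
    dff = (trace - f0) / f0
    # Robust noise floor via median absolute deviation:
    noise = 1.4826 * np.median(np.abs(dff - np.median(dff)))
    snr = np.max(np.abs(dff)) / noise if noise > 0 else np.inf
    return dff, snr

# Toy usage on synthetic data:
rng = np.random.default_rng(0)
movie = rng.normal(100.0, 1.0, size=(500, 32, 32))   # 1 s at 500 Hz
movie[200:205] -= 10.0                                # a negative-going "spike"
mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 10:20] = True
dff, snr = delta_f_over_f(movie, mask)
print(f"peak |dF/F0| = {np.max(np.abs(dff)):.3f}, SNR ~ {snr:.1f}")
```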
Supplementary Materials
This PDF file includes: Figs. S1 to S17 and Tables S1 to S6
Fig. 1. Design and characterization of Cepheid1 indicators in cultured cells. (A) Diagram showing the red fluorescent protein (RFP) insertion site (top) and the predicted structure of Cepheid1s. (B) Normalized fluorescence-voltage response curves of red genetically encoded voltage indicators (GEVIs). GEVI names are listed in descending order of measured voltage sensitivities. (C) Voltage sensitivities of red GEVIs for recording action potentials (APs) in cultured neurons. *P < 0.05, two-sample t test. (D) Optical waveforms of Cepheids and VARNAM2 in response to APs. Regions of interest that yielded the traces are marked by yellow circles. Scale bars, 20 μm. VARNAM, voltage-activated red neuronal activity monitors.
Fig. 2. Multiplexed imaging of Cepheid1b/s with optogenetic tools and blue-shifted biosensors. (A and B) Epifluorescence image (A) and whole-cell fluorescence response (B) of a cultured neuron expressing Cepheid1b-ST-P2A-CheRiff, optogenetically triggered with a 405-nm laser. (C and D) Epifluorescence images (C) and whole-cell fluorescence traces (D) of cultured rat hippocampal neurons expressing GCaMP6s-NES (top) and Cepheid1s-ST (bottom), with a zoom-in view of the boxed region shown on the right. (E and F) Epifluorescence images (E) and whole-cell fluorescence traces (F) of cultured rat hippocampal neurons expressing SF-iGluSnFR (top) and Cepheid1b-ST (bottom), with a zoom-in view of the boxed region shown at the bottom. (G and H) Epifluorescence images (G) and whole-cell fluorescence traces (H) of mouse pancreatic islet cells expressing GCaMP6f (cyan) and Cepheid1b (red). Scale bars, 20 μm.
Fig. 3. Voltage imaging with Cepheid1b indicators in acute brain slices. (A) Confocal images of fixed slices showing expression and localization of Cepheid1b-ST. Scale bars, 50 μm. (B) Fluorescence image (top) and mean electrical (black) and optical (red) waveforms of stimulated action potentials (APs) recorded from a hippocampal neuron expressing Cepheid1b-ST in an acute brain slice. Gray and red lines indicate individual and mean optical traces, respectively (n = 48 spikes). Scale bar, 10 μm. (C) Fluorescence response (red) of Cepheid1b-ST to single-trial burst firing (black) in an acute brain slice, with a zoom-in view of the boxed region shown at the bottom. (D) Representative epifluorescence image of a mouse brain slice expressing Cepheid1b-ST in the thalamus (scale bar, 20 μm) (left) and signal-to-noise ratio (SNR) traces indicating spontaneous activity from the labeled cells (right).
Fig. 4. Voltage imaging with Cepheid1b indicators in pancreatic islets ex vivo. (A to E) Representative dual-color imaging of calcium (mean, cyan; individual, gray) and voltage (red) in isolated islets at a 200-Hz frame rate. At the onset of imaging, extracellular glucose is switched from 3 to 10 mM (A). Following 30 to 60 min of incubation at 10 mM glucose, mixed (B), slow (C), and fast (D) calcium oscillations are observed together with time-correlated electrical spikes, while weaker electrical coupling is observed with very fast (E) calcium oscillations (zoom-in view of the framed region is shown at the top). Scale bars, 20 μm. | 2022-12-04T14:10:46.795Z | 2022-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "90ad0e519293803158f3eba8f8a65cafa800773c",
"oa_license": "CCBYNC",
"oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.adi4208?download=true",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7ea183b004249cd70dd532a859fb192c013e8e03",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
119314969 | pes2o/s2orc | v3-fos-license | Boundary blow up under Sobolev mappings
We prove that for mappings $W^{1,n}(B^n, \mathbb{R}^n)$, continuous up to the boundary, with modulus of continuity satisfying a certain divergence condition, the image of the boundary of the unit ball has zero $n$-Hausdorff measure. For Hölder continuous mappings we also prove an essentially sharp generalized Hausdorff dimension estimate.
Introduction
Throughout the paper B^n denotes the unit ball in ℝ^n and W^{1,n}(B^n, ℝ^m) is the Sobolev space of L^n(B^n, ℝ^m)-functions f : B^n → ℝ^m with weak first-order derivatives in L^n(B^n).
If f : B² → Ω ⊂ ℝ² is a conformal mapping, then the boundary of Ω can have positive Lebesgue measure even if f extends continuously up to the boundary of the disk. If one requires more, for example uniform Hölder continuity, then ∂Ω is necessarily of Lebesgue measure zero. In fact, Jones and Makarov proved in [6] that ∂Ω has measure zero whenever the modulus of continuity ψ of f satisfies the divergence condition (1). This condition is very sharp: if the integral in (1) converges, then [6] provides us with a simply connected domain Ω and a conformal mapping f : B² → Ω so that the boundary of Ω has positive Lebesgue measure and f has the modulus of continuity ψ.
Our first result gives a surprisingly general extension of the conformal setting; notice that each uniformly continuous conformal mapping f : B² → Ω belongs to W^{1,2}(B², ℝ²). Theorem 1.1. Let f ∈ W^{1,n}(B^n, ℝ^m) be a continuous mapping so that |f(z) − f(w)| ≤ ψ(|z − w|) (2) for all z, w ∈ B̄^n, where ψ : (0, ∞) → (0, ∞) is an allowable modulus of continuity satisfying the divergence condition (3). Then H^n(f(∂B^n)) = 0.
Above, H^n(A) denotes the n-dimensional Hausdorff measure of a set A.
For the definition of an allowable modulus of continuity see Section 2 below. For example, ψ(t) = Ct^γ, 0 < γ < 1, and the iterated-logarithmic moduli ψ_{l,s}(t), built from exp and the k-times iterated logarithm log^{(k)} t, are allowable for all integers l ≥ 2 and all s > 0. Notice that ψ_{l,s} satisfies (3) if and only if s ≤ 1. Here C > 0 and C_l is any constant with log^{(l)} C_l² ≥ 1. Let us look at the special case n = m = 2 of Theorem 1.1 in the Hölder continuous setting: ψ(t) = Ct^γ, where 0 < γ ≤ 1. Consider a space-filling (Peano) curve, i.e. a continuous mapping g from the unit circle onto a square. In the standard construction, g is Hölder continuous with exponent γ = 1/2. If one takes, say, the Poisson extension f of g to the unit disk, then f is also Hölder continuous. It is easy to check by hand that the partial derivatives of f do not belong to L²(B²). By Theorem 1.1 no Hölder continuous (or even continuous with control function satisfying (3)) extension f of a space-filling curve can satisfy |Df| ∈ L²(B²).
In the Hölder continuous case, Jones and Makarov actually proved that the Hausdorff dimension of f(∂B²) is strictly less than two for conformal f. Contrary to the area zero results, this dimension estimate is truly conformal in the following sense. Example 1. Let p > 1. There exists a locally Hölder continuous homeomorphism f ∈ W^{1,2}(B², ℝ²) with H_g(f(∂B²)) > 0 for the gauge function g(t) = t²(log 1/t)^p. Here H_g denotes the generalized Hausdorff measure with the function g(t) as the dimension gauge. The precise definitions are given in Section 2. Our second result gives a rather optimal positive result. Theorem 1.2. Let f ∈ W^{1,n}(B^n, ℝ^m) and fix 0 < γ ≤ 1 and C₀ > 0. If f satisfies |f(z) − f(w)| ≤ C₀|z − w|^γ for all z, w ∈ B^n, then H_g(f(∂B^n)) = 0 for the gauge function g(t) = t^n log(1/t). Jones and Makarov proved their result via harmonic measure and hence this technique does not work in the setting of Theorem 1.1. An alternate approach, relying on the conformal invariance of the (quasi)hyperbolic metric, was given in Koskela-Rohde [7], see [11]. Furthermore, Malý and Martio [10] established Theorem 1.1 in the Hölder continuous case via a technique that we have not been able to push further.
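To see numerically how such a logarithmic gauge differs from the plain area gauge near t → 0, the toy Python loop below evaluates both; the exponent p = 1.5 is an arbitrary choice made only for illustration.

```python
import math

# Toy comparison of dimension gauges near t -> 0: the gauge
# g(t) = t^2 * (log 1/t)^p refines the plain area gauge t^2,
# so covers that are efficient for t^2 can fail for g.
p = 1.5  # arbitrary example exponent with p > 1
for t in [1e-2, 1e-4, 1e-8]:
    t2 = t ** 2
    g = t ** 2 * math.log(1.0 / t) ** p
    print(f"t = {t:.0e}:  t^2 = {t2:.3e},  g(t) = {g:.3e},  ratio = {g / t2:.1f}")
```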
Let us briefly describe the idea of the proof of Theorem 1.1. We consider a Whitney decomposition of B^n and assign to each Q ∈ W a vector f_Q ∈ ℝ^m and a radius r_Q. The vector f_Q will simply be the "average" of f over Q and r_Q the maximum of |f_Q − f_Q̃| over all neighbors Q̃ of Q. Then the n-integrability of the weak derivatives of f guarantees, via the Poincaré inequality, that the sequence {r_Q}_{Q∈W} belongs to ℓ^n. We realize f(∂B^n) as (a part of) the closure of {f_Q}_{Q∈W} in ℝ^m. Those f(ω), ω ∈ ∂B^n, for which one can find a sequence of Q ∈ W with |f_Q − f(ω)| ≲ r_Q are easily handled. For the remaining ω ∈ ∂B^n, we modify our centers f_Q and radii r_Q, still retaining the ℓ^n-condition, so that suitably blown up balls cover these points sufficiently many times. This is where the non-integrability condition (3) kicks in. One cannot fully follow the above idea, and our proof below is more complicated.
Our approach is flexible and applies to many related problems. In order to avoid extra technicalities, we do not record such applications here. Let us simply mention that the dimension gap phenomenon from [3] can be shown to extend from conformal mappings to general Sobolev mappings [8].
Preliminaries
Let us first agree on some basic notation. Given a number a > 0, we write ⌊a⌋ for the largest integer less than or equal to a. Similarly, ⌈a⌉ is the smallest integer greater than or equal to a. If A is a finite set, ♯A is the number of elements of A. If A ⊂ ℝ^n has finite and strictly positive Lebesgue measure and f : ℝ^n → ℝ is a Lebesgue integrable function, we denote the average of f over A by f_A = (1/|A|) ∫_A f. If B = B(x, r) is a ball and a is a positive number, the notation aB stands for the ball B(x, ar). We denote the radius of a ball B by r(B). If we write L = L(·), we mean that the number L > 0 depends on the parameters listed in the parentheses. Finally, C denotes a positive constant, which may depend only on n and m, the dimensions of the domain space and the image space, and may differ from occurrence to occurrence. We write H^h(A) for the generalized Hausdorff measure of a set A ⊂ ℝ^n, given by H^h(A) = lim_{δ→0+} inf{Σ_i h(diam U_i) : A ⊂ ∪_i U_i, diam U_i ≤ δ}, where h is a dimension gauge (a non-decreasing function with lim_{t→0+} h(t) = h(0) = 0 and with h(t) > 0 for all t > 0). If h(t) = t^a for some a ≥ 0, we simply write H^a for H^h and call it the a-dimensional Hausdorff measure.
We also need a generalized weighted Hausdorff content of a set A ⊂ ℝ^n, defined in terms of weighted covers (see Lemma 2.1).

Proof. The lemma follows from Corollary 8.2 and the proof of Theorem 9.7 of [5] (see also [1, 2.10.24]).
Recall that for each open subset Ω ⊊ ℝ^n there is a Whitney decomposition Ω = ∪_i Q_i, where the Q_i are cubes with mutually parallel sides, pairwise disjoint interiors and each of edge length 2^k for some integer k, such that the relation diam(Q_i) ≤ dist(Q_i, ∂Ω) ≤ 4 diam(Q_i) holds for all i = 1, 2, . . .. We write Q₁ ∽ Q₂ if the Whitney cubes Q₁ ≠ Q₂ share at least one point (the so-called neighbor cubes). The diameters of Q and Q̃ are comparable once Q ∽ Q̃. Therefore, the total number ♯{Q̃ : Q̃ ∽ Q} of all neighbors of a fixed cube Q does not exceed C. See [12] for details.
Let ω ∈ ∂B^n. By (Q_j(ω))_{j=1}^∞ we mean the sequence of all Whitney cubes in a fixed Whitney decomposition of B^n intersecting the radius [0, ω]. This sequence starts with a central cube and tends to ω. For a point x ∈ [0, ω], we denote the number of Whitney cubes intersecting the segment [0, x] by ♯q(0, x). It is easy to see that c₁ log(1/(1 − |x|)) ≤ ♯q(0, x) ≤ c₂ log(1/(1 − |x|)) + c₃, (5) where c₁, c₂, c₃ are constants that may depend on n.
Finally we define the allowable moduli of continuity.
Definition 2.2. A continuously differentiable increasing bijection ψ : (0, ∞) → (0, ∞) is an allowable modulus of continuity if there exist t₀ < 1 and β > 0 such that for every t ≤ t₀ the following conditions hold: (6) ψ(t)/t is a decreasing function; (7) a logarithmic-type growth condition governed by β; and (8) an associated auxiliary function is monotone.
Remark 1.
i) One could replace the monotonicity conditions in (6) and (8) with a pseudomonotonicity condition (e.g. there exists a constant C > 0 such that u(t) ≤ Cu(s) if t ≤ s). This would only affect the constants in the proofs.
ii) The conditions (6) and (7) mean that the function log(1/ψ⁻¹(t)) is a function of logarithmic type in the sense of [11, Definition 4.2].
Let W be a fixed Whitney decomposition of B^n. For each cube Q ∈ W, we define a corresponding centre f_Q and a corresponding radius r_Q = max{|f_Q − f_Q̃| : Q ∽ Q̃}, which determine a family of balls on the image side: B = {B(f_Q, r_Q) : Q ∈ W, r_Q > 0}. Note that some balls in B may coincide; the simplest way to act in such a situation is to treat them as different balls for certainty (we may identify each ball in B with the pair (Q, B(f_Q, r_Q)); then different Whitney cubes on the pre-image side generate different pairs). However, identifying such balls would cause no problem either.
We assign two new weighted collections of balls to each ball in B. Given B = B(x, r) ∈ B, we define concentric subballs S_i(B) = B(x, r/2^i) for all i ∈ ℕ and assign the weight w_{S_i(B)} = 2^i. The second collection is defined in a similar way. If B = B(x, r) is a ball in B, we choose the smallest number k₀(B) ∈ ℕ such that 2^{−k₀(B)} ≤ r. Next, for each k = k₀(B), k₀(B)+1, . . ., we choose R_k(B) = B(x, α(2^{−k})) and set R_B = {R_k(B) : k = k₀(B), k₀(B)+1, . . .}. The weights we assign this time are w_{R_k(B)} = λ(k) for all k = k₀(B), k₀(B)+1, . . .. Similarly to above, we collect S_B = {S_i(B) : i ∈ ℕ}.
Finally, we define our weighted collection of balls by setting F = ∪_{B∈B} (S_B ∪ R_B).
Again, some of the balls in the combined families may coincide; however, we treat them as "different" balls. Distinguishing them is, again, not difficult.
Let us now estimate the weighted sum of the nth powers of the radii of the balls in F. Let N(Q) be the union of all neighbors of a cube Q ∈ W. For neighboring cubes Q and Q′, we obtain, via the Hölder and Poincaré inequalities, that |f_Q − f_{Q′}|^n ≤ C ∫_{N(Q)} |Df|^n. Hence, we have the estimate r_Q^n ≤ C ∫_{N(Q)} |Df|^n for each Q ∈ W and some constant C > 0. Next, using the fact that the inequality Σ_{Q∈W} χ_{N(Q)}(y) ≤ C holds for every y ∈ ℝ^n, we estimate Σ_{B∈F} w_B r(B)^n ≤ C₁ ∫_{B^n} |Df|^n, where C₁ > 0 is some constant depending on n, m and λ(0) only.
We may assume that there is at least one Q ∈ W with r_Q > 0; otherwise f(∂B^n) is a singleton. Let ω ∈ ∂B^n. We consider the radius [0, ω] and the sequence (Q_j(ω))_{j=1}^∞. We fix a large integer l₀ = l₀(ω, f) ∈ ℕ so that there are elements of the sequence (f_{Q_j(ω)})_{j=1}^∞ outside B(f(ω), 2^{−l₀+1}), provided (f_{Q_j(ω)})_{j=1}^∞ contains at least one element different from f(ω). If such an integer does not exist, there necessarily is some Q = Q_ω ∈ W with f_Q = f(ω) and r_Q > 0. In this case, we choose l₀ = l₀(ω, f) ∈ ℕ so that 2^{−l₀} < r_{Q_ω}. In both cases we also require that 2^{−l₀+1} < t₀. This allows us to use the properties (6) and (7).
For the purposes of our "porosity argument", we would like to make the number l₀ independent of the point ω. This is done by considering the decomposition f(∂B^n) = ∪_{l₀∈ℕ} F_{l₀}, where F_{l₀} = f(E_{l₀}) and E_{l₀} ⊂ ∂B^n is the set of points ω for which the above procedure yields the value l₀ = l₀(ω, f). Let us fix l₀ ∈ ℕ. Our aim is to prove that H^n_∞(F_{l₀}) = 0. Fix x ∈ F_{l₀}. Take any ω ∈ E_{l₀} such that x = f(ω), and define the sequence of concentric annuli A_l(x) = B(x, 2^{−l+1}) \ B(x, 2^{−l}) with l = l₀, l₀+1, . . .. Next, we assign a suitable set P_l(x) of cubes from W to each annulus A_l(x), l = l₀, l₀+1, . . .. If f_{Q_j(ω)} = x for all j ∈ ℕ, we put P_l(x) = {Q_ω} for each l ≥ l₀, where Q_ω is the cube defined earlier. Otherwise, all the sets P_l(x) with l ≥ l₀ consist of elements from (Q_j(ω))_{j=1}^∞: if an annulus A_l(x) with some l ≥ l₀ contains no centres from (f_{Q_j(ω)})_{j=1}^∞, then P_l(x) consists of a single cube Q of the sequence with f_Q ∈ B(x, 2^{−l}); otherwise P_l(x) collects the cubes of the sequence whose centres lie in A_l(x). Moreover, it is possible to choose the sets P_l(x) above so that the inequality k₁ ≤ k₂ is valid whenever Q_{k₁}(ω) ∈ P_{l₁}(x), Q_{k₂}(ω) ∈ P_{l₂}(x) and l₁ < l₂. Denoting, for l ≥ l₀ and a constant c̃₀ > λ^{−1}(0) which we will specify later, θ_l(x) = 1 if ♯P_l(x) ≤ c̃₀λ(l) and θ_l(x) = 0 otherwise, we would like to prove that there exists an integer l₁ ≥ 2l₀ such that ♯{k ∈ {l₀, . . ., l} : θ_k(x) = 1} ≥ ⌈l/2⌉ (11) for each l ≥ l₁. In other words, at least half of the annuli do not contain too many centres from (f_{Q_j(ω)})_{j=1}^∞. There is nothing to prove if f_{Q_j(ω)} = x for all j ∈ ℕ; otherwise, the proof is by contradiction.
Let us assume that (11) does not hold for some l ≥ 2l₀. Take the smallest number J ∈ ℕ for which the centre f_{Q_J(ω)} is the closest to x. Now, the assumption on the continuity of f and the properties of our Whitney decomposition yield an upper bound for the number of Whitney cubes that precede Q_J in (Q_j(ω))_{j=1}^∞. Using (5), we observe that ♯q(0, ω′) is bounded from above by a constant multiple of log(1/ψ^{−1}(2^{−l})) for a suitable point ω′ ∈ [0, ω]; in this calculation, we may have to adjust the choice of l₀ to ensure ♯q(0, ω′) > c₃ (see (5)). Finally, we obtain a lower bound for ♯q(0, ω′), using the assumption that we have at least ⌊l/2⌋ − l₀ + 2 annuli A_k(x) with θ_k(x) = 0. We notice that the sets P_k(x) with θ_k(x) = 0 contain different cubes for different k's, and, if k ≤ l, then the cubes in P_k(x) precede Q_J(ω) in (Q_j(ω))_{j=1}^∞; hence ♯q(0, ω′) exceeds the total number of cubes in these sets, which is at least c̃₀ times the corresponding sum of the λ(k). Choosing c̃₀ > c₂β, the two bounds cannot both hold when l is large enough. Thus, there is a number l₁ = l₁(c̃₀, l₀, u) such that (11) holds for all l ≥ l₁. Our next step is to prove that if θ_k(x) = 1 for some k and P_k(x) = {Q₁, . . ., Q_m}, then it is possible to find a collection of balls {B₁, . . ., B_{m′}} ⊂ F having radii at least const · α(2^{−k}) and satisfying Σ_{i=1}^{m′} w_{B_i} ≥ const · λ(k). Moreover, we choose different balls (in the sense mentioned above) for different k's.
Let us fix k ≥ l₀ such that θ_k(x) = 1. Suppose first that the annulus A_k(x) contains no centres from (f_{Q_j(ω)})_{j=1}^∞. Then the set P_k(x) consists of a single cube Q ∈ W with f_Q ∈ B(x, 2^{−k}). The definitions of r_Q and l₀ imply r_Q > 2^{−k}, and hence k ≥ k₀(B(f_Q, r_Q)). Thus, we may choose the ball R_k(B(f_Q, r_Q)), which, by definition, has radius α(2^{−k}) and weight λ(k). In addition, the centre of this ball lies in B(x, 2^{−k}).
Assume now that the annulus A_k(x) contains at least one of the centres from (f_{Q_j(ω)})_{j=1}^∞. Then, by the definitions of P_k(x) and r_Q, the radii r_Q, Q ∈ P_k(x), add up to at least a fixed fraction of 2^{−k}. Since ♯P_k(x) ≤ c̃₀λ(k), the cubes Q ∈ P_k(x) with 2r_Q ≥ α(2^{−k})/(2c̃₀) carry a substantial part of this total. For each Q ∈ P_k(x) with 2r_Q ≥ α(2^{−k})/(2c̃₀), we choose a number n_Q ∈ ℕ so that r_Q/2^{n_Q} is comparable to α(2^{−k})/c̃₀, in particular at least α(2^{−k})/(8c̃₀), and pick a ball B̃ = S_{n_Q}(B(f_Q, r_Q)) = B(f_Q, r_Q/2^{n_Q}) ∈ S_{B(f_Q, r_Q)}. By the definition of S_i(B), we have w_{B̃} = 2^{n_Q}. For the sum of the weights Σ_Q 2^{n_Q} of all the balls obtained in this manner, we observe that it exceeds c̃₀λ(k). Hence we have a collection of balls {B₁, . . ., B_{m′}} ⊂ F with weight sum Σ_{i=1}^{m′} w_{B_i} > c̃₀λ(k) and with radii at least α(2^{−k})/(8c̃₀). Moreover, all these balls have their centres in the annulus A_k(x), and hence in the ball B(x, 2^{−k+1}).
We have proved that there exists a number l₁ = l₁(l₀, c̃₀) such that for each ω ∈ E_{l₀} and l ≥ l₁, among the numbers l₀, . . ., l there are at least ⌈l/2⌉ integers k ∈ {l₀, . . ., l} for which we are able to find a finite collection of balls {B_i}_{i∈I} ⊂ F with weight sum Σ_{i∈I} w_{B_i} at least λ(k) and with radii at least α(2^{−k})/(8c̃₀), such that the centres of the balls B_i, i ∈ I, lie in the ball B(x, 2^{−k+1}). Here, c̃₀ is a positive constant depending only on β, n and λ(0), and the balls are different for a fixed ω and different k's.
If ω ∈ E_{l₀}, x = f(ω) and k ∈ {l₀, . . ., l} is such that θ_k(x) = 1, then there is a collection {B_i}_{i∈I} ⊂ F with the properties mentioned above. If a ball B_i with some i ∈ I is replaced by a ball B̃_i = (λ(k_i)/λ(l)) B_i while creating F_l, we necessarily have k_i ≤ k. Therefore, the inequalities Σ_{i∈I} w_{B̃_i} = Σ_{i∈I} (λ(l)/λ(k_i))^n w_{B_i} ≥ (λ(l)/λ(k))^n Σ_{i∈I} w_{B_i} ≥ (λ(l)/λ(k))^n λ(k) = λ(l)^n/λ(k)^{n−1} and r(B̃_i) ≥ 2^{−k_i}/(8c̃₀λ(l)) ≥ 2^{−k}/(8c̃₀λ(l)) hold (by (6), λ is increasing). Since, for each i ∈ I, the centre of the ball B̃_i is contained in B(x, 2^{−k+1}), we have the inclusion x ∈ 16c̃₀λ(l)B̃_i. Hence we observe that the collection of the balls 16c̃₀λ(l)B̃_i so obtained is a weighted cover of the set F_{l₀}. We observe also that the diameters of all balls in this cover are at least 2^{−l}. This information will be used in the proof of Theorem 1.2 below. | 2013-04-15T23:34:12.000Z | 2013-04-15T00:00:00.000 | {
"year": 2013,
"sha1": "a76acfe66ce592e004e87a3fc9b77db1d3e94338",
"oa_license": null,
"oa_url": "https://projecteuclid.org/journals/analysis-and-pde/volume-7/issue-8/Boundary-blow-up-under-Sobolev-mappings/10.2140/apde.2014.7.1839.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a76acfe66ce592e004e87a3fc9b77db1d3e94338",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
11464947 | pes2o/s2orc | v3-fos-license | Zinc oxide nanoparticles as a substitute for zinc oxide or colistin sulfate: Effects on growth, serum enzymes, zinc deposition, intestinal morphology and epithelial barrier in weaned piglets
The objective of this study was to evaluate the effects of zinc oxide nanoparticles (nano-ZnOs) as a substitute for colistin sulfate (CS) and/or zinc oxide (ZnO) on growth performance, serum enzymes, zinc deposition, intestinal morphology and the epithelial barrier in weaned piglets. A total of 216 crossbred Duroc × (Landrace × Yorkshire) piglets weaned at 23 days were randomly assigned to 3 groups, which were fed basal diets supplemented with 20 mg/kg CS (CS group), 20 mg/kg CS + 3000 mg/kg ZnO (CS+ZnO group), or 1200 mg/kg nano-ZnOs (nano-ZnO group) for 14 days. Results indicated that, compared to the CS group, supplementation with 1200 mg/kg nano-ZnOs (about 30 nm) significantly increased final body weight and average daily gain, and 3000 mg/kg ZnO plus colistin sulfate significantly increased average daily gain and decreased the diarrhea rate in weaned piglets. There was no significant difference in growth performance or diarrhea rate between the nano-ZnO and CS+ZnO groups. Supplementation with nano-ZnOs did not affect serum enzymes (glutamic oxalacetic transaminase, glutamic-pyruvic transaminase, and lactate dehydrogenase), but significantly increased plasma and tissue zinc concentrations (liver, tibia), improved intestinal morphology (increased duodenal and ileal villus length, crypt depth, and villus surface), enhanced mRNA expression of ZO-1 in ileal mucosa, and significantly decreased diamine oxidase activity in plasma and the total aerobic bacterial population in MLN as compared to the CS group. The effects of nano-ZnOs on serum enzymes, intestinal morphology, and mRNA expression of tight junction proteins were similar to those of high dietary ZnO plus colistin sulfate, while nano-ZnOs significantly reduced zinc concentrations in liver, tibia, and feces, and decreased the total aerobic bacterial population in MLN as compared to the CS+ZnO group. These results suggested that nano-ZnOs (1200 mg/kg) might be used as a substitute for colistin sulfate and high dietary ZnO in weaned piglets.
Introduction
Weaning is commonly practiced at the 14th to 28th day of life in piglets for optimum herd performance. However, early weaning is closely associated with increased diarrhea occurrence, a damaged epithelial barrier and restricted growth performance. Antibiotic growth promoters like colistin sulfate (CS) have been widely used as feed additives to attenuate gastrointestinal infections and improve post-weaning growth performance [1-3]. But the use of sub-therapeutic antibiotics in feed contributes to antibiotic resistance in the microbiota and to residues in animal products. Due to these serious human health hazards, Europe has banned the use of antibiotics as feed additives since January 2006 [4, 5]. Other countries are trying to gradually reduce or forbid the use of feed antibiotics. For example, the use of CS as a feed additive in animal diets has been banned in China since May 2017. Therefore, it is urgent to explore novel alternatives to antibiotic feed additives.
Zinc is an essential trace element for animals; it serves as a component of many metalloenzymes, including DNA and RNA synthetases, and plays important roles in metabolism and intestinal nutrient absorption [6, 7]. The physiological zinc requirement of nursery piglets is about 80-100 mg/kg [7, 8]; however, higher dietary doses of zinc oxide (ZnO), e.g. 2000-4000 mg/kg, are generally used to promote growth performance, reduce intestinal permeability and/or decrease the incidence of diarrhea in weaned piglets [9-11]. Due to its low digestibility, most dietary ZnO is excreted into the manure, which consequently contains high amounts of zinc and may pose environmental pollution hazards [12, 13].
Rapid developments in nanotechnology provide new dimensions for research on substitutes for antibiotics and high dietary ZnO. Nanoparticles, with sizes between 1 and 100 nm (and thus a high surface area to volume ratio), exhibit quantum-mechanical effects and show great potential for applications in many fields [14-16]. In medicine, using nanoparticles as an alternative to traditional therapies offers various advantages, such as enhanced drug absorption, improved bioavailability and targeted activity towards particular organs [17, 18]. Nano-ZnOs are among the best studied and most widely used nanoparticles owing to their high surface area, enhanced bioactivities, and especially their high chemical stability and easy synthesis; nano-ZnOs have therefore been widely used in cosmetics, sunscreens, plastics and packaging [16, 19-22]. It has also been well documented that nano-ZnOs show great potential as anti-cancer drugs and novel immunoprotective agents [23, 24].
Despite a few concerns raised about the toxicity of nano-ZnOs [25, 26], it has been reported recently that nano-ZnOs still show great promise in agriculture, for example as feed additives [27-30]. Compared to their bulk counterpart, nano-ZnOs exhibit enhanced antibacterial activities (comparable to colistin), especially against gram-negative bacteria [3, 31, 32]. Reddy et al. [33] also verified that nano-ZnOs possess strong antibacterial activity and exhibit low toxicity to eukaryotic systems [34]. Our previous studies found that dietary supplementation with nano-ZnOs (500 mg/kg) for 32 weeks in mice showed minimal toxicity [35]. Oral zinc sulfate, a common zinc source in animal diets, appears to be much more toxic than nano-ZnOs [36].
Based on these previous studies, we hypothesized that nano-ZnOs might be a potential substitute for CS and/or high dietary ZnO to improve growth performance, decrease diarrhea incidence, protect against intestinal injury and decrease fecal zinc content in weaned piglets. Therefore, this study was conducted to evaluate the effects of nano-ZnOs on growth performance, enzymes, zinc deposition, intestinal morphology and the epithelial barrier in weaned piglets. The results of the present study might provide insights for the application of nano-ZnOs as a feed additive to replace feed antibiotics and/or high dietary ZnO.
Material and methods
Experiments were approved and conducted according to the guidelines of the Institutional Animal Care and Use Committee of Nanjing Agricultural University, China.
Characteristics of nano-ZnOs
The morphological characteristics of the nano-ZnOs were analyzed by transmission electron microscopy (TEM, JEM-200CX, Japan). The nano-ZnOs were provided by Zhangjiagang Bonded Area Hualu Nanometer Material Co., Ltd (Jiangsu, China). The nanoparticles were suspended in ethanol by ultrasonic vibration for 15 min. Subsequently, the mixture was placed on a carbon-coated copper grid and analyzed at 200 kV.
The piglets were obtained and raised on a local farm. During this 14-day trial, piglets were housed in a warm (20-28°C) house with concrete floors (4.2 × 5.0 m per pen). A nipple drinker and feeders with a wide trough were provided to allow pigs ad libitum access to water and feed. House temperature and the ventilation system, food and water supply, and swine behavior patterns (patterns of sleeping, walking, breathing, feeding and drinking) were monitored three times a day (7:00, 14:00 and 21:00). Body weight (BW), feed intake and incidence of diarrhea were recorded during the 14-day trial. The average daily gain (ADG), average daily feed intake (ADFI), feed/gain ratio (F/G) and diarrhea rate were calculated. After the feeding trial, two piglets (one male and one female, fasted for 6 h) from each replicate were randomly selected, euthanized by electrical stunning and exsanguinated. Plasma and serum were collected by centrifugation at 3000×g for 15 min in tubes with or without heparin sodium, and stored at −80°C. Samples of tibia, liver and feces were stored at −20°C for determination of zinc concentration.
Analysis of serum and plasma parameters
The serum activities of glutamic oxaloacetic transaminase (GOT), glutamic-pyruvic transaminase (GPT) and lactate dehydrogenase (LDH) were analyzed by corresponding commercial kits provided by Nanjing Jiancheng Bioengineering Institute (Nanjing, China). The D-lactic acid content, diamine oxidase (DAO) activity and endotoxin level in plasma were determined by ELISA kits (Nanjing Aoqing Co., Ltd, Jiangsu, China).
Analysis of zinc concentrations in plasma, tibia, liver and feces
Zinc concentrations in plasma, tibia, liver and feces were determined as previously reported, with minor modifications [37]. Briefly, plasma samples were diluted with demineralized water. Samples of tibia, liver and feces (1-2.5 g) were digested with an acid mixture (4:1 HNO₃ and HClO₄). The digest was brought to a volume of 50 ml with demineralized water and diluted to the optimal concentration with 5% HNO₃ solution. After external matrix-matched standard curves were prepared from a zinc standard (diluted with 5% HNO₃ solution), blanks and prepared samples were analyzed by inductively coupled plasma optical emission spectrometry (ICP-OES).
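As a sketch of this calibration step, the code below fits a linear standard curve to assumed emission intensities and back-calculates a tissue zinc concentration, including the dilution and digest-volume factors. All numbers are placeholders, not measured values from this study.

```python
import numpy as np

# Sketch of ICP-OES quantification via an external standard curve.
# Standard concentrations (mg/L) and intensities are placeholder values.
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0])                   # Zn standards, mg/L
std_intensity = np.array([12.0, 260.0, 505.0, 1010.0, 2490.0])   # assumed counts

slope, intercept = np.polyfit(std_conc, std_intensity, 1)  # linear fit

sample_intensity = 780.0      # assumed reading for a diluted digest
measured_mg_per_l = (sample_intensity - intercept) / slope

dilution_factor = 10.0        # digest diluted 10x into 5% HNO3 (assumed)
digest_volume_l = 0.050       # digest brought to 50 ml
sample_mass_kg = 0.002        # 2 g of tissue digested (assumed)

zn_mg_per_kg = measured_mg_per_l * dilution_factor * digest_volume_l / sample_mass_kg
print(f"tissue zinc ~ {zn_mg_per_kg:.1f} mg/kg")
```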
Analysis of the population of total aerobic bacteria
After the mesenteric lymph nodes (MLN) were collected from the ileum, they were placed in sterile plastic tubes and homogenized with phosphate buffer solution (PBS) on ice (1:9 w/v). The homogenate was used as the source for serial dilutions in PBS for viable counts of total aerobic bacteria. Subsequently, 100 μl of each serial dilution was plated on a nutrient agar plate, which was cultured at 37˚C for 24 h. The bacterial enumerations were expressed as log(cfu/g).
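A minimal sketch of the plate-count arithmetic behind the log(cfu/g) values is given below; it assumes the 1:9 (w/v) homogenate corresponds to roughly 10 ml of suspension per gram of tissue, and the colony count and dilution are invented for illustration.

import math

def log_cfu_per_gram(colonies, dilution_factor, plated_mL=0.1, homogenate_mL_per_g=10.0):
    """Viable count from one countable plate of a serial dilution of the MLN homogenate."""
    cfu_per_mL = colonies / plated_mL * dilution_factor   # back-calculated to the undiluted homogenate
    return math.log10(cfu_per_mL * homogenate_mL_per_g)   # per gram of tissue

print(round(log_cfu_per_gram(45, dilution_factor=1_000), 2))  # 45 colonies on the 10^-3 plate -> ~6.65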
Analysis of intestinal morphology
The intestinal morphology was analyzed as previously described by Dong et al. [38], with minor modifications. Briefly, 2 cm-long segments of duodenum (about 4 cm from the pyloric sphincter) and ileum (about 15 cm from the ileocecal junction) were harvested, fixed in paraformaldehyde, dehydrated in a graded series of ethanol and embedded in paraffin. Cross sections (5 μm thick) were cut, dehydrated and stained with hematoxylin and eosin (HE). For each section, villus length, crypt depth and villus width were determined with an optical binocular microscope (Olympus BX5, Olympus Optical Co. Ltd, Japan) and Image-Pro Plus software. The villi/crypt ratio and villus area were calculated.
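Since the paper does not state the formulas used for the derived indices, the sketch below shows one common convention: the villi/crypt ratio as a simple quotient and the villus surface area approximated as the lateral surface of a cylinder; the input values are hypothetical.

import math

def villus_indices(villus_length_um, crypt_depth_um, villus_width_um):
    """Villi/crypt ratio and a cylindrical approximation of villus surface area."""
    vc_ratio = villus_length_um / crypt_depth_um
    area_um2 = math.pi * villus_width_um * villus_length_um   # lateral area: pi * width * length
    return vc_ratio, area_um2

ratio, area = villus_indices(420.0, 210.0, 110.0)
print(f"villi/crypt = {ratio:.2f}, villus area = {area:.0f} um^2")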
Analysis of relative mRNA expression of occludin, claudin-2 and ZO-1
Samples of ileal mucosa were scraped on ice, frozen in liquid nitrogen immediately and stored at -80˚C. Total RNA was extracted with Trizol reagent according to the manufacturer's instructions (Invitrogen, USA). RNA was quantified with a NanoDrop 2000 (absorbance ratios at 260/280 nm and 260/230 nm between 1.90 and 2.05) and verified by agarose gel electrophoresis. cDNA was then prepared with a PrimeScript™ reagent kit provided by TaKaRa Biotechnology Co. Ltd (Dalian, China).
The gene-specific primers for occludin, claudin-2 and ZO-1 were synthesized by Invitrogen Biotech Co. Ltd (Shanghai, China) and are listed in Table 2. GAPDH was chosen as the housekeeping gene. Real-time polymerase chain reaction (RT-PCR) assays were conducted on an ABI 7600 RT-PCR system with SYBR Premix Ex Taq™ kits (TaKaRa Biotechnology Co. Ltd, Dalian, China). The relative mRNA expression was analyzed with ABI software and calculated with the 2^(-ΔΔCt) method as described previously [39].
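The 2^(-ΔΔCt) calculation cited above [39] reduces to a few lines; in this sketch the Ct values are invented, and taking the CS control group as the calibrator is an assumption consistent with, but not stated in, the text.

def fold_change_ddct(ct_target, ct_gapdh, ct_target_cal, ct_gapdh_cal):
    """Relative mRNA expression by the 2^-ddCt method (GAPDH as reference gene)."""
    dd_ct = (ct_target - ct_gapdh) - (ct_target_cal - ct_gapdh_cal)
    return 2.0 ** (-dd_ct)

# e.g. ZO-1 in a nano-ZnO piglet versus hypothetical calibrator means
print(round(fold_change_ddct(24.1, 17.8, 25.3, 17.9), 2))  # ~2.14-fold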
Statistical analysis
All data were analyzed with the SPSS statistical package (IBM SPSS, version 20.0, Chicago) and expressed as mean ± standard error (SE). The statistical analysis was performed using analysis of variance (ANOVA), with means compared by Duncan's Multiple Range Test. Pens were used as the experimental units for the analysis of growth performance, diarrhea rate and fecal zinc concentration. For the other parameters, the individual piglet was used as the experimental unit. A P value less than 0.05 was considered a significant difference, while a P value between 0.05 and 0.10 was considered a tendency toward statistical difference.
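A minimal sketch of the group comparison is shown below with invented pen means. Duncan's Multiple Range Test has no widely used implementation in the common Python statistics libraries, so Tukey's HSD from statsmodels is used here as a stand-in post hoc test; it is not the procedure the authors applied.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative per-pen ADG values (kg/day), six pens per treatment
cs = np.array([0.41, 0.44, 0.39, 0.42, 0.40, 0.43])
cs_zno = np.array([0.50, 0.53, 0.49, 0.52, 0.51, 0.48])
nano = np.array([0.51, 0.49, 0.52, 0.50, 0.53, 0.48])

f_stat, p_value = stats.f_oneway(cs, cs_zno, nano)  # one-way ANOVA across treatments
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")

values = np.concatenate([cs, cs_zno, nano])
groups = ["CS"] * 6 + ["CS+ZnO"] * 6 + ["nano-ZnO"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise mean separation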
Results
Characteristics of nano-ZnOs
The characteristics of the nano-ZnOs were analyzed by TEM, and images at low, middle and high magnification are shown in Fig 1. The results indicated that the primary particle sizes were about 30 nm (mainly ranging from 20 to 40 nm) and that the nano-ZnOs exhibited an almost spherical geometry.
Effects of nano-ZnOs on growth performance and diarrhea rate
Effects of nano-ZnOs on growth performance and diarrhea rate are presented in Table 3. Dietary supplementation with nano-ZnOs significantly increased final BW and ADG (P<0.05), but had no significant effect on ADFI or F/G (P>0.05). There was no significant difference in growth performance or diarrhea rate between the nano-ZnO group and the CS+ZnO group.
Effects of nano-ZnOs on serum and plasma enzymes
As shown in Table 4, dietary supplementation with nano-ZnOs or CS+ZnO did not affect the activities of serum GOT, GPT and LDH (P>0.05) but significantly decreased DAO activity in plasma as compared with the CS group (P<0.01). Piglets in the nano-ZnO and CS+ZnO groups showed a tendency toward lower levels of D-lactic acid (P = 0.06) and endotoxin (P = 0.05) in plasma as compared with the CS group.
Effects of nano-ZnOs on zinc concentrations in plasma, liver, tibia and feces
Dietary supplementation with nano-ZnOs or CS+ZnO significantly enhanced the zinc concentrations of plasma, liver, tibia and feces (P<0.001) in weaned piglets (Table 5). However, piglets from the nano-ZnO group showed significantly lower zinc concentrations in liver, tibia and feces than those from the CS+ZnO group (P<0.001).
Effects of nano-ZnOs on intestinal morphology
Effects of nano-ZnOs on duodenal (Fig 2) and ileal (Fig 3) morphology are shown in Table 6.
Effects of nano-ZnOs on total aerobic bacteria in MLN and mRNA expression of occludin, claudin-2, and ZO-1 in ileal mucosa
There was no significant difference in mRNA expression between the nano-ZnO and CS+ZnO groups (P>0.05).
Discussion
Dietary supplementation with antibiotic growth promoters such as CS, together with high dietary ZnO, is commonly practiced in piglets to address the post-weaning challenges of reduced growth and a high incidence of diarrhea [40][41][42]. The results of our study showed that supplementation with dietary ZnO (3000 mg/kg) plus colistin sulfate (20 mg/kg) improved growth performance and decreased the occurrence of diarrhea as compared with the CS group, which is in line with previous reports [9,11]. In this study, the effects of dietary 1200 mg/kg nano-ZnOs on growth performance and diarrhea rate were similar to the beneficial effects of 3000 mg/kg ZnO plus colistin sulfate. Hahn and Baker [43] proposed that the mechanism for enhanced growth performance in weaned piglets was linked with plasma zinc concentration. Our results showed that dietary supplementation with 1200 mg/kg nano-ZnOs or 3000 mg/kg ZnO plus colistin sulfate increased plasma zinc concentration, and there was no significant difference in plasma zinc content between these two groups. Similarly, it has been reported that nanoparticles exhibit higher bioavailability and enhance drug absorption [29,44]. Earlier reports also verified that dietary supplementation with 20 or 60 mg/kg nano-ZnOs produced greater weight gains and better feed conversion ratios than 60 mg/kg ZnO in broilers [28].
Nanoparticles are natural or artificial particles with sizes ranging from 1 to 100 nm; their large surface area provides more catalytic space, allows them to easily enter and target cells, enhances antibacterial activity, and underlies various potential applications [14,17,19].
(Table 6 note: Data are expressed as mean ± SE, n = 6. Treatments: CS group, basal diet + 20 mg/kg CS; CS+ZnO group, basal diet + 20 mg/kg CS + 3000 mg/kg ZnO; nano-ZnO group, basal diet + 1200 mg/kg nano-ZnOs.)
The particle size and shape are important characteristics of nanoparticles that are closely associated with their properties [45]. The antibacterial activity of nano-ZnOs increases with decreasing crystallite size [31,46]. Higher photocatalytic inactivation of E. coli has been observed for flower-shaped particles than for rod- and sphere-like particles [46]. The characteristics of the nano-ZnOs were determined by TEM in this study. The results revealed that these particles possessed the potential of nanoparticles, since the primary particle sizes were about 30 nm (mainly ranging from 20 to 40 nm). These nano-ZnOs were of nearly spherical geometry, similar to those used in our previous experiments on mice [35,36]. The toxicity of high doses of nano-ZnOs has been reported by several researchers [25,26,47,48]. Serum GOT, GPT and LDH activities are important biological parameters for evaluating the possible toxicity of nano-ZnOs in vivo. However, serum enzyme activities (GOT, GPT and LDH) were not affected by supplementation with 1200 mg/kg nano-ZnOs or 3000 mg/kg ZnO plus colistin sulfate as compared with the CS group in the present study, suggesting that dietary addition of nano-ZnOs or 3000 mg/kg ZnO plus colistin sulfate for 14 days produced minimal toxicity in weaned piglets. Similar to our findings, Wang et al. [35] also found that dietary supplementation with nano-ZnOs (500 mg/kg) for 32 weeks did not affect serum GOT and GPT activities in mice, whereas supplementation with 5000 mg/kg nano-ZnOs for 32 weeks enhanced serum GPT activity [35]. Wang et al. [48] reported that oral administration of high doses of 120-nm ZnO (1-5 g/kg body weight) led to liver damage and increased serum LDH activity in mice. Our previous study also indicated that oral administration of 250 mg/kg nano-ZnOs or zinc sulfate for 7 weeks can cause liver damage; however, nano-ZnOs seemed to be safer than zinc sulfate [36]. More studies are required to explore the long-term effects of nano-ZnOs on pigs and to further investigate the possible mechanisms in detail.
It has been reported that tissue damage induced by oral nano-ZnOs and zinc salts is closely related to zinc accumulation, which may further induce oxidative stress and DNA damage [25,26,36]. The analysis of zinc deposition in our study showed that dietary nano-ZnOs (1200 mg/kg) and ZnO (3000 mg/kg) plus colistin sulfate increased the zinc concentrations in liver and tibia as compared with the CS group. However, the zinc concentrations of liver and tibia in weaned piglets from the nano-ZnO group were much lower than those of piglets in the CS+ZnO group, suggesting that nano-ZnOs might be a safer option than ZnO plus colistin sulfate. In addition, the significantly lower fecal zinc content in the nano-ZnO group indicated that the use of nanoparticles may alleviate the environmental pollution caused by high dietary ZnO [12,13,49].
Villus length, crypt depth, villi/crypt ratio, villus width and villus surface area are important indicators of intestinal morphology and play critical roles in nutrient absorption. In the present study, dietary supplementation with 1200 mg/kg nano-ZnOs was as efficacious as 3000 mg/kg ZnO plus colistin sulfate in increasing villus width (in the ileum), villus length (in both duodenum and ileum) and villus surface area (in both duodenum and ileum), which would be beneficial for intestinal nutrient absorption. The mucosal surfaces of the small intestine are lined by epithelial cells, which are derived from self-renewing stem cells residing at the crypt base [50,51]. In our study, dietary addition of nano-ZnOs increased crypt depth in the duodenum and ileum, suggesting that the capacity of intestinal stem cells for self-regeneration and proliferation might have been enhanced, although this needs further investigation.
D-lactic acid, DAO and endotoxin in plasma are important indicators of intestinal epithelial integrity and permeability. Impaired intestinal mucosal integrity may enhance DAO activity and increase the contents of D-lactic acid and endotoxin [52,53]. The results of our study revealed that nano-ZnOs and ZnO plus colistin sulfate significantly decreased plasma DAO activity and also tended to decrease the contents of D-lactic acid and endotoxin as compared with the CS group. These findings reinforce the conclusion that nano-ZnOs and ZnO plus colistin sulfate exert protective effects on intestinal mucosal integrity.
To further investigate the beneficial effects of nano-ZnOs on the intestinal barrier, the population of total aerobic bacteria in the MLN was determined, as it has been reported that once the intestinal epithelial barrier is impaired, bacterial translocation from the gastrointestinal tract to the MLN increases [54]. Our study revealed that dietary supplementation with nano-ZnOs decreased the population of total aerobic bacteria in the MLN as compared with the CS group, in agreement with the decreased DAO activity in plasma. To further elucidate the related molecular mechanism, the mRNA expression of occludin, claudin-2 and ZO-1 in the ileal mucosa was determined. The intestinal epithelial barrier is mainly modulated by the tight junction, which comprises several proteins such as occludin, claudin and ZO-1 [53,55]. This junction combines with epithelial cells to maintain intestinal integrity and is an important indicator of intestinal barrier function [53,55]. In this study, dietary supplementation with nano-ZnOs increased the mRNA expression of ZO-1 in the ileal mucosa as compared with the CS group, suggesting that dietary nano-ZnOs enhanced the expression of ZO-1 at the mRNA level, which could decrease intestinal permeability and reduce intestinal infections [9,56]. There was no significant difference in the mRNA expression of occludin, claudin-2 or ZO-1 in the ileal mucosa between the nano-ZnO and ZnO plus colistin sulfate groups. These results suggest that the protective effect of nano-ZnOs on the epithelial barrier was similar to, or even better than, that of ZnO plus colistin sulfate, since the total aerobic bacterial population in the MLN was significantly decreased by nano-ZnOs as compared with the CS+ZnO group.
Conclusion
Dietary supplementation with nano-ZnOs (1200 mg/kg) was as efficacious as ZnO (3000 mg/kg) plus colistin sulfate (20 mg/kg) in promoting growth performance, alleviating diarrhea and increasing plasma zinc concentration. However, nano-ZnOs might be safer (decreased tissue zinc concentrations), more beneficial for the environment (decreased fecal zinc concentration), and even more effective on the epithelial barrier (decreased total aerobic bacterial population in MLN) than ZnO plus colistin sulfate. Therefore, nano-ZnOs could be used as a substitute for high dietary ZnO and colistin sulfate. Our results might be useful for future investigations on nano-ZnOs as feed additives and their application in agriculture. However, further investigations are required to explore the long-term effects of nano-ZnOs on piglets and on environmental organisms. | 2018-04-03T00:21:39.797Z | 2017-07-13T00:00:00.000 | {
"year": 2017,
"sha1": "c115320763604efadeafdd2351249e20bf1f737c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0181136&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c115320763604efadeafdd2351249e20bf1f737c",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
92290167 | pes2o/s2orc | v3-fos-license | Genetic Variability Studies of Some Quantitative Traits in Cowpea (Vigna unguiculata L. [Walp.]) under Water Stress
This research was conducted to study the genetic variability of some quantitative traits in varieties of cowpea (Vigna unguiculata L. [Walp.]) under water stress in the Zaria Sudan Savannah, Nigeria. Seven varieties of cowpea (Sampea 1, Sampea 2, IAR1074, Sampea 7, Sampea 8, Sampea 10 and Sampea 12), collected from the Institute for Agricultural Research, Samaru, Zaria, were screened for tolerance to water stress. The seeds were sown in poly bags containing sandy loam arranged in a Completely Randomized Design with three replications for evaluation of quantitative traits. The results revealed a highly significant difference (P≤0.01) in the effects of water stress on the number of wilted and dead plants at 40 days after sowing. Variety Sampea 10 had the highest mean performance in terms of number of wilted plants at 34 days after sowing, while Sampea 2 and IAR1074 had the lowest. Sampea 7 was found to have the highest mean performance for the number of wilted plants at 40 days and Sampea 2 the lowest. The quantitative traits study indicated highly significant differences (P≤0.01) in plant height, number of days to 50% flowering, number of days to maturity, number of pods per plant, pod length, number of seeds per plant and 100-seed weight, and significant differences (P≤0.05) in seedling height and number of branches per plant. IAR1074 was found to perform best for most of the quantitative traits under study, whereas Sampea 8 had the highest mean performance at the nutritional level. It was therefore concluded that all seven cowpea genotypes were water-stress tolerant and produced considerable yield containing significant nutrients. It was recommended that IAR1074 be grown for yield, while Sampea 8 be grown for protein supplements.
Key words: Quantitative traits, water stress, genetic variability, carbohydrate, protein, cowpea.
I. INTRODUCTION
One of the major global challenges of the millennium is food security and how to address the phenomenon of malnutrition among the teeming and ever-rising population of poor rural dwellers of third-world countries. There is a need to promote crops that could help meet global nutrient requirements. One such crop is cowpea. As a legume grain, cowpea is an important source of human dietary protein and calories.
In most West African countries, the development and release of improved varieties that adapt well and yield better have been slow in getting to farmers (FAO, 2000). Development of cultivars with early maturity, acceptable grain quality and resistance to stress conditions is necessary to overcome the ever-growing food shortage (Ehlers and Hall, 1997). Hence, there is a need to generate more information on the variability among the existing germplasm and cultivars and to broaden the gene pool of the crop for the selection and development of more improved varieties.
The study of variability and diversity in accessions of cultivated crops can provide vital information for the establishment of a breeding programme, especially when intraspecific hybridizations are necessary for the incorporation of new features or for mapping purposes. Assessment of genetic diversity and variability in cowpea would enhance the development of cultivars adapted to specific production constraints. Therefore, sufficient information on the genetic variability among the available germplasm is necessary to formulate and accelerate a breeding programme. In order to achieve a successful breeding programme, improving the yield potential of the crop should also be a pivotal concern. This enables the breeder to operate selection efficiently and subsequently develop appropriate breeding strategies to solve the problems of poor yield as well as improve the nutritive quality of the crop. An effort was made to examine the genetic differences among the studied cultivars in order to group them into relatively homogeneous groups of baseline parents for breeding purposes.
The aim of the study is to assess the genetic variability among screened water-stress-tolerant varieties of cowpea for improved quantitative traits in the northern Guinea savannah zone of Nigeria.
II. MATERIALS AND METHODS
A. Study Area
The experiment was conducted in a screen house in the Department of Biological Sciences, Ahmadu Bello University, Zaria (lat. 11°21′N, long. 7°37′E, alt. 550-700 m above sea level).
B. Sources of Materials
The experimental seeds were obtained from the cowpea unit of the Institute for Agricultural Research (IAR), Samaru, Ahmadu Bello University, Zaria.
Screening for water stress.
The screening for water stress was conducted using the box screening method in a completely randomized design (CRD) with three replications in a screen house. The box was half filled with soil in a 1:1 ratio of top soil and humus and watered sufficiently to moisten it for planting, and two seeds were sown per hole. Watering continued regularly up to three weeks after sowing, at which point water was completely withheld. The data collected at 28, 34 and 40 days after sowing were the number of wilted plants, the number of dead plants and the number of healthy plants.
Pot experiment for growth and yield
Polythene bags were used in place of pots and were filled with soil in a 1:1 ratio of humus and top soil; the soil was watered sufficiently to moisten it for planting. The bags were arranged in a completely randomized design with three replications; four seeds were planted in each polythene pot and watered regularly up to harvesting. The data collected included germination percentage, seedling height (cm), plant height at maturity (cm), number of branches per plant, number of leaves per plant, leaf area, number of days to fifty percent flowering, number of days to maturity, number of pods per plant, pod length, number of seeds per pod, 100-seed weight and dry weight of the plant.
C. Data Analysis
All the data collected were subjected to analysis of variance (ANOVA), with Duncan's Multiple Range Test (DMRT) used to separate the means. All tests of relationships were done using Pearson's Product Moment Correlation Coefficient.
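As a minimal sketch of the correlation step (the trait values below are invented for illustration, not the measured data), Pearson's coefficient between two traits can be computed as follows:

import numpy as np
from scipy import stats

# Hypothetical per-variety means for two traits across the seven varieties
days_to_50pct_flowering = np.array([42.0, 45.0, 48.0, 44.0, 46.0, 50.0, 43.0])
pods_per_plant = np.array([5.1, 6.0, 7.2, 5.8, 6.4, 7.9, 5.5])

r, p = stats.pearsonr(days_to_50pct_flowering, pods_per_plant)
print(f"r = {r:.2f}, P = {p:.3f}")  # a positive r mirrors the Table 7-type relationships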
III. RESULTS
The results of the analysis of variance for the exposure of the seven cowpea varieties to water stress are presented in Table 1. The results indicated a highly significant difference (P≤0.01) in the effects of water stress on the number of wilted and dead plants at 40 days after sowing (DAS). A similar result was obtained for the number of healthy plants, while a significant difference (P≤0.05) was found in the number of wilted plants at 34 DAS. No significant difference was found in the effects of water stress from 28 DAS to 34 DAS for the remaining parameters. Furthermore, no significant difference was found in the interaction of the varieties with the water level.
Table 2 shows the results of the mean performance of the seven cowpea varieties under water stress. Sampea 10 had the highest mean performance in terms of number of wilted plants at 34 DAS, while Sampea 2 and IAR1074 had the lowest. Sampea 7 was found to have the highest mean performance for the number of wilted plants at 40 DAS and Sampea 2 the lowest. For the number of dead plants at 40 DAS, Sampea 7 and Sampea 10 had the highest mean performance, while the lowest was found in Sampea 2, Sampea 12 and IAR1074. For the number of healthy plants, Sampea 1 showed the highest mean performance, while Sampea 7 had the lowest; at 34 DAS, Sampea 12 was found to be the highest.
The combined ANOVA of the mean performance of the seven cowpea genotypes under water stress is presented in Table 3. Among the water-stressed plants, a high mean performance was found for the number of wilted plants at 40 DAS, while the lowest mean performance was found for the number of wilted plants at 34 DAS; a high mean performance was also found for the number of dead plants at 40 DAS. Among the unstressed plants, the highest mean performance was found for the number of wilted plants at 40 DAS and the lowest for the number of dead plants at 34 DAS. Similarly, a high mean performance was found for the water-stressed healthy plants, while the lowest was found for the unstressed healthy plants.
The results for the relationships among traits of the seven cowpea varieties under water stress are presented in Table 4. A positive relationship (P≤0.05) existed between the number of wilted plants at 40 DAS and the number of wilted plants at 34 DAS. A positive relationship also existed between the number of dead plants at 40 DAS and the number of wilted plants at 40 DAS. Negative relationships were found between the number of healthy plants and the number of wilted plants at 34 DAS, as well as with the number of wilted plants at 40 DAS and the number of dead plants at 40 DAS. No significant relationships were found among the remaining traits.
Table 5 shows the results of the analysis of variance for genetic variability in growth and yield of the seven cowpea varieties. The results indicated highly significant differences (P≤0.01) in plant height, number of days to 50% flowering, number of days to maturity, number of pods per plant, pod length, number of seeds per plant and 100-seed weight, while significant differences (P≤0.05) were found in seedling height and number of branches per plant. No significant differences were found in the remaining traits.
The mean performance of the seven cowpea varieties is presented in Table 6. IAR1074 and Sampea 2 had the highest mean performance in terms of germination percentage, and the lowest was found in Sampea 8. Similarly, IAR1074 had the highest mean performance in seedling height and Sampea 10 the least, whereas Sampea 1 had the highest mean performance in terms of plant height and Sampea 10 again the lowest. For the number of branches per plant, the highest mean performance was obtained in Sampea 10 and the lowest in Sampea 7. In terms of number of leaves per plant, the highest mean was found in Sampea 1 and the lowest in Sampea 7. The highest mean performance in leaf area was found in Sampea 7 and the lowest in Sampea 1. IAR1074 had the highest means for the number of days to 50% flowering and the number of days to maturity, while Sampea 7 had the highest mean performance in terms of number of pods per plant; the highest means for pod length and for number of seeds per plant were found in Sampea 10. Sampea 12 was found to have the highest mean performance in 100-seed weight, and Sampea 1 was the lowest in all the yield parameters. Meanwhile, Sampea 1 was the highest in terms of dry weight and Sampea 10 had the lowest mean performance.
Table 7 shows the relationships between the different parameters of the seven cowpea varieties. The results indicated positive relationships between the number of days to 50% flowering and the number of days to maturity; between the number of days to maturity and the number of pods per plant; between the number of days to 50% flowering and the number of pods per plant; between pod length and the number of days to 50% flowering; between seeds per pod and the number of days to 50% flowering; between seeds per pod and the number of days to maturity; and between pods per plant and pod length. Similarly, the results indicated positive relationships between 100-seed weight and the number of days to flowering, days to maturity, pods per plant, pod length and seeds per pod. A positive relationship was also found between leaf area and seedling height, pods per plant and 100-seed weight. No relationship was found between germination percentage and the other parameters, nor between leaf area and days to 50% flowering or days to maturity. However, negative relationships were found between plant height and the other parameters, while no relationships were found among the remaining parameters.
IV. DISCUSSION
The screening for water stress tolerance in cowpea is a vital undertaking that increases the potential of cowpea production in Nigeria, especially in areas where drought is rampant. The significant and highly significant differences exhibited for the number of wilted plants at 34 and 40 days, the number of dead plants at 40 days, and the number of healthy stands are evidence that the values of all the growth parameters decreased with the period of growth as the water stress increased. This is in conformity with the findings of Okon (2013). The different (highest and lowest) mean performances obtained in the different varieties under water stress, based on the numbers of wilted, dead and healthy plants at different day intervals, could be a result of the variation that exists in the rate of decrease of growth parameters among the different varieties (in different fortnights) with respect to the corresponding variation in water stress. It was observed that, under intense water stress conditions, there were sharp changes in the values obtained. A similar result was reported by Del Rosario et al. (2003) in soya bean seedlings.
Large differences in mean performance were observed among the different cowpea varieties under water stress at different days, based on the wilted, dead and healthy plants, with the highest mean performance (8.97) obtained for the number of healthy plants and the lowest (0.02) for the number of dead plants at 34 days. This indicates the existence of a high degree of genetic variability among the different cowpea varieties. Certain factors such as the height of the culms, the size of the leaves, the distance between the veins and the stomatal openings are all affected when the varieties develop under water stress. This is in line with the findings of Ahmed and Khaliq (2007), who reported that water stress causes changes (significant differences) in the different varieties.
The positive relationships that exist between the number of wilted plants at 40 days and at 34 days, and between the numbers of dead and wilted plants at 40 days, show that these traits might not be independent in their action and are interlinked, likely bringing about simultaneous change in other characters. They can be effectively used as selection criteria for cowpea varieties under water stress conditions.
The negative relationships that exist between the number of healthy plants and the number of wilted plants at 34 days suggest that water stress conditions can influence genetic interactions among the traits as well as the genetic variance in the traits themselves. This is in line with the findings of Saro and Hoffman (2004), who suggested that exposure to water stress conditions may induce positive relationships among the traits, and that the expression of new genes will break negative correlations. The highly significant differences (P≤0.01) and significant differences (P≤0.05) exhibited for the characters studied indicate the existence of sufficient genetic variability among the selected traits for improving yield in cowpea. The non-significant differences (P>0.05) exhibited for a few characters are in line with the report of Manggoel et al. (2012), who suggested that traits with such non-significant differences may be under genetic control rather than environmental influence.
The differences (highest and lowest) in mean performance of the different cowpea varieties for the traits measured could be a result of the duration of the experiment, which affects the differential changes that might occur in the morphological features of the varieties at a given time. The growth habits of the different varieties studied also varied, resulting in differences in mean performance, with a given variety having the highest mean performance in one trait, such as maturity, and the lowest in another.
The results from this study are similar to those of Lobato and Costa (2011), where a reduction in leaf relative water content was reported. The study also recorded reduced vegetative growth due to water stress, which agrees with the findings of Aguyoh et al. (2013). Similar results on decreases in growth and yield were reported by Aguyoh et al. (2013), which can be attributed to the effects water has on the physiology of cowpea. The findings also agree with those of Samson and Helmut (2007) on cowpea, namely that the reduction in leaf area in cowpea varieties (Sampea 1 had the lowest, 34.47 cm²) is a mechanism adapted to avoid a higher rate of transpiration and to reduce surfaces exposed to radiation under water deficit. The reduction in the number of pods, with the lowest (4.33) found in Sampea 10, could be a result of the reduction in soil moisture, thereby reducing the number of seeds, which may contribute to low yield in water-stressed cowpea. This is in line with the findings of Abayomi and Abidoye (2009), who reported a reduction in the number of pods in different cowpea varieties.
The positive relationships that exist between yield traits could result from the fact that the height of the cowpea varieties studied contributed to yield, as it leads to increases in the number of days to maturity, days to flowering, number of pods and other yield traits. This result is in line with the findings of Taking (2002), who reported that plant height contributes positively to different yield parameters. Yield improvement could also possibly be achieved by selecting for the number of pods per plant, since the study revealed that the number of pods per plant increased significantly.
Correlation has been used in indirect selection for breeding characters (Lyman, 1993). The highly significant differences (P≤0.01) and significant differences (P≤0.05) obtained in the ash, protein and fibre contents showed that the ranges of values were within the recommended values; these ranges fall within the values reported for cowpea by Duke (1931) and Longe (1980). In this study, ash was 2-3%, protein 20-27% and fibre 2-4%.
The negative relationships that exist showed that moisture content had the highest relationship (0.63), and the lowest values were found for fibre and protein (0.58). The significance of this result is better interpreted to mean that cowpea varieties cultivated under wide-ranging cultural conditions, such as soil composition, climate and agronomic practices, vary widely in moisture and carbohydrate contents, followed by fibre and protein. These components are important in determining the nutritive and processing quality of cowpea seeds. The fat content was the lowest, with no relationship. The lack of a relationship between fat content and carbohydrate is an advantage during processing into flour since, unlike other legumes such as soya bean, there is no need for a defatting stage in flour production. A similar finding was obtained by Henshaw (2008), who studied varietal differences in the physical characteristics and proximate composition of cowpea (Vigna unguiculata).
TABLE V: MEAN SQUARES FOR THE GENETIC VARIABILITY STUDIES AMONG SEVEN COWPEA VARIETIES
Key: ns = no significant difference; * = significant difference (P≤0.05); ** = highly significant difference (P≤0.01) | 2019-04-03T13:09:51.985Z | 2019-02-28T00:00:00.000 | {
"year": 2019,
"sha1": "04803196d83a42cda0d1d274da644d0a1b7df2ef",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJPS/article-full-text-pdf/1730D8560136.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "04803196d83a42cda0d1d274da644d0a1b7df2ef",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
256432684 | pes2o/s2orc | v3-fos-license | Progress in tear microdesiccate analysis by combining various transmitted-light microscope techniques
Tear desiccation on a glass surface followed by transmitted-light microscopy has served as a diagnostic test for dry eye. Four distinctive morphological domains (zones I, II and III and the transition band) have recently been recognized in tear microdesiccates. Physicochemical dissimilarities among those domains hamper comprehensive microscopic examination of tear microdesiccates. Optimal observation conditions for entire tear microdesiccates are investigated here. One-μl aliquots of tear collected from individual healthy eyes were dried at ambient conditions on microscope slides. Tear microdesiccates were examined by combining low-magnification objective lenses with transmitted-light microscopy (brightfield, phase contrasts Ph1, Ph2 and Ph3, and darkfield). Fern-like structures (zones II and III) were visible with all illumination methods except brightfield. Zone I was the microdesiccate domain displaying the most noticeable illumination-dependent variations, namely a transparent band delimited by an outer rim (Ph1, Ph2), a homogeneous compactly built structure (brightfield) or an invisible domain (darkfield, Ph3). Intermediate positions of the condenser (BF/Ph1, Ph1/Ph2) showed a structured, roughly cylindrical zone I. The transition band also varied from invisibility (brightfield) to a well-defined domain comprising interwoven filamentous elements (phase contrasts, darkfield). Imaging of entire tear microdesiccates by transmitted-light microscopy depends upon illumination. A more comprehensive description of tear microdesiccates can be achieved by combining illumination methods.
Background
Desiccation of microvolumes of tear fluid on a flat glass surface at ambient conditions, followed by morphological assessment of the non-aqueous remains by light microscopy, has been widely used as a laboratory test in the assessment of patients suspected of dry eye disease [1][2][3][4][5][6]. To date, such characterization has almost exclusively consisted of observing the occurrence of fern-like crystalloids [6][7][8][9]. The method gained popularity after Rolando introduced a four-level scoring scale, from I to IV, whereby score I stands for abundant fern-like crystalloids (healthy tears) and, at the other end, score IV stands for the absence of those fern-like crystalloids (altered tears) [10,11]. The method is commonly named the tear ferning test [1]. Recent studies have shown that tear microdesiccates are much more complex structures, regularly formed by four main discrete, concentrically organized morphological domains or zones; fern-like crystalloids are just a part of such complexity [12]. A first domain (zone I), which forms earliest during desiccation, consists of a hyaline material that surrounds the whole area of the tear microdesiccate and exhibits a variable number of transverse, highly refringent structures resembling fractures or cracks. A second domain, or zone II, comprises a band of very homogeneous fern-like or leaf-like crystalloids emerging centripetally from regularly spaced points in proximity to zone I. A third domain (zone III) corresponds to the centermost area of the desiccate and is characterized by the presence of major crystalloid structures differing in robustness, length and branching. Finally, the transition band is a morphologically distinct domain with the appearance of a narrow strip located along the entire interface between zones I and II, whose relevance seems to be associated with the organization of the major morphological domains I and II [12,13]. All those morphological domains have been jointly described on the basis of single observations by transmitted-light microscopy, particularly the darkfield variant [12,13]. Although such a procedure is focused on the analysis of the whole tear microdesiccate, it probably misses objective and relevant data from particular domains of tear desiccates because it is based on a single setting of the light microscope. Such an omission would be particularly relevant considering that the whole set of features of tear microdesiccates can be a direct reflection of the complex tear composition. In this regard, the assessment of tear desiccates should consider the examination of all four morphological domains instead of the sole consideration of fern-like crystalloids, as usually happens [6][7][8][9]. In the present study, microscope settings were adjusted for the assessment of single tear microdesiccates so that each of the main microdesiccate domains could be observed under optimized experimental conditions. In addition to confirming the occurrence of the four main morphological and structural domains in normal tear microdesiccates, novel insights into the organization of some of these tear specimens have been gained.
Volume of tear fluid for viewing entire microdesiccates under the microscope
Conventional tear microdesiccates are produced when aliquots of 1-3 μL of tear fluid taken from a single eye are allowed to dry on a horizontal glass surface [12]. Volumes of tear fluid over 1.5 μL consistently generate microdesiccates covering a surface usually larger than the largest observation fields of standard light microscopes fitted with a 4-5× objective lens. Thus, for comparison purposes, the present study focused on microdesiccates produced from 1 μL of tear fluid, whose circular images (about 3 mm diameter) were captured using microscopes fitted with 10× eyepieces and a 2.5× objective lens (field of view 10.62 mm) (Fig. 1). Under the experimental conditions of this study, desiccation of those tear aliquots usually took place in about 7-8 min, and the tear microdesiccates were highly reproducible [13]. In effect, multiple tear microdesiccates produced from identical aliquots taken from a single sample of tear fluid showed marked similarities in terms of morphological features and the distribution of their main morphological domains (zones I through III and the transition band) (Fig. 2). By contrast, tear microdesiccates produced simultaneously from identical aliquots of tear fluid sampled from different healthy subjects usually presented marked differences from each other with respect to morphological and structural features, although they could exhibit a common design based on the occurrence of domains I through III and a transition band (see below).
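For readers checking the optics, the quoted field of view is consistent with the usual relation between the eyepiece field number (FN) and the objective magnification; the FN value below is inferred from the stated numbers rather than reported by the authors:

\[
\mathrm{FOV} = \frac{FN}{M_{\mathrm{obj}}} = \frac{26.55\ \mathrm{mm}}{2.5} \approx 10.62\ \mathrm{mm},
\]

which comfortably accommodates the roughly 3 mm diameter of a microdesiccate produced from 1 μL of tear.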
Microscopic observation of single tear microdesiccates using different transmitted-light techniques
Every single tear microdesiccate was observed through an orderly sequence of transmitted-light brightfield (BF), phase contrast (Ph1, Ph2, Ph3) and darkfield (DF) microscope techniques. As shown in Figs. 3, 4, 5, 6 and 7, each of those techniques provided different but complementary images. Transmitted-light brightfield microscopy of tear desiccates provided sufficiently contrasted images showing a regular, mostly homogeneous, bulky continuous structure (zone I) serving as an external boundary for the whole microdesiccate (Fig. 3). This feature was better defined by lowering the light intensity. Toward the inside of the desiccate and close to zone I, poorly defined bunches of relatively small fern-like structures could be seen; these would represent zone II. A few major fern-like crystalloid structures in zone III at the center of the microdesiccate were hardly seen, and the transition band was not seen at all. The transmitted-light phase contrast technique for observing tear microdesiccates was conducted by successively using the phase stop positions Ph1, Ph2 and Ph3 of a universal 5-position condenser system turret. Using the stop position Ph1, a translucent perimetral band (zone I) was a remarkable feature of microdesiccates (Fig. 4). With this observation method, the external border of zone I represented a well-defined limit of the whole microdesiccate. Individual or interconnected filamentous structures crossing the whole width of zone I were visible; this was a highly variable feature among microdesiccates from different subjects. The internal border of zone I was now shown to be in contact with a distinctive structured transition band.
Using Ph1, the transition band could be seen to be composed of highly interwoven filamentous elements (invisible with brightfield microscopy!) delimiting the whole mass of tear crystalloids. Ph1 microscopy also showed that fern-like and leaf-like centripetally oriented structures emerge from the transition band and seem to be anchored to it. Though individually those structures were not well defined, as a group they configured a better defined zone II of tear microdesiccates. Likewise, the centermost part of the desiccate (zone III) could be identified by default after identification of the inner limits of zone II, but the structures belonging to zone III could be seen without much structural detail. When the stop position of the condenser was changed to Ph2, zone I became barely visible, although some filamentous structures crossing it (whose number was dependent upon the particular tear sample) were easily seen (Fig. 5). By this observation method, a well-defined transition band consisting of highly convoluted filamentous components with sharp limits toward both zone I and zone II could be appreciated. Toward the center of the microdesiccate, the crystalloids of zones II and III became better resolved. Using the stop position Ph3 of the condenser, zone I fully disappeared and the filamentous elements previously seen as part of this zone were readily visible without a wrapping structure (Fig. 6). The transition band could be seen as an even more discrete zone of the microdesiccate displaying a clear distinction from zone II. Toward the center of the microdesiccate, Ph3 made it possible to highlight the minor and major crystalloids of zones II and III, respectively, and to set clear-cut differences between those zones. Fine structural details of those crystalloids, as well as their differential distribution in the microdesiccate, could be readily appreciated. Finally, using the transmitted-light darkfield technique and with proper control of the intensity of light coming out of the condenser, zone I was mostly invisible, but the presence of (a variable number of) transverse filamentous structures revealed its presence (Fig. 7).
Fig. 1 Image capture of entire tear microdesiccates using a low-power objective lens. The digital image of an entire microdesiccate (about 3 mm diameter) produced from a 1 µL aliquot of tear was captured using a microscope fitted with 10× eyepieces and a 2.5× objective lens (left). Image capture was only partial when the objective lens was replaced by a conventional 5× objective lens (right)
Fig. 2 Morphological zones in a normal tear microdesiccate imaged with phase contrast (Ph3) microscopy. A hardly seen hyaline zone I (Z1) surrounds the whole circular area of the body of the tear desiccate in close proximity to a clearly structured transition band (Tb). At the centermost area of the desiccate (demarcated by a black circumference), abundant major fern-like crystalloids feature zone III (Z3). A compact homogeneous band of short fern-shaped or leaf-shaped crystalloid structures located between the transition band and zone III represents zone II (Z2). Some bright filamentous structures (f) can also be seen as part of zone I
Fig. 3 Representative image of a normal tear microdesiccate as observed by transmitted-light microscopy with bright-field illumination. A tear microdesiccate was produced from 1 µL of a tear sample and then observed with a microscope fitted with a 10× eyepiece, a 2.5× objective lens and a universal 5-position condenser system turret. Zone I is seen as a homogeneous bulky continuous structure that surrounds a mass of highly diverse, poorly defined crystalloids
Fig. 4 Representative image of a normal tear microdesiccate as observed by transmitted-light microscopy with phase 1 illumination. Both tear microdesiccate production and the lens system of the microscope were those described in the legend to Fig. 3. The bright halo of light on the border of the microdesiccate defines a thick morphological zone I surrounding a complex mass of tear crystalloids
Fig. 5 Representative image of a normal tear microdesiccate as observed by transmitted-light microscopy with phase 2 illumination. Both tear microdesiccate production and the lens system of the microscope were those described in the legend to Fig. 3. Both the outer border of zone I and its close contact with a well-defined transition band are two main features of tear microdesiccates derived from Ph2 illumination. Fern-like crystalloids of zones II and III are also seen
Fig. 6 Representative image of a normal tear microdesiccate as observed by transmitted-light microscopy with phase 3 illumination. Both tear microdesiccate production and the lens system of the microscope were those described in the legend to Fig. 3. Zone I is not visible but the transition band is well defined. In addition, optimum contrast of fern-like crystalloids is achieved so that the border between zones II and III can readily be identified
Fig. 7 Representative image of a normal tear microdesiccate as observed by transmitted-light microscopy with dark-field illumination. Both tear microdesiccate production and the lens system of the microscope were those described in the legend to Fig. 3. Zone I is not visible, the transition band becomes somewhat diffuse and fern-like crystalloids become the main visible structures as a consequence of light scattering
Although less evident, the transition band was again the outermost structured visible component of the desiccate. In addition, crystalloids of zones II and III became properly resolved. With this observation technique, both main and secondary axes of major crystalloids occurring in zone II, and a fraction of those occurring in zone III, displayed strong light-scattering properties. A summary of the effects of illumination on the main features of tear microdesiccates is shown in Table 1.
Optimizing the observation of entire tear microdesiccates by transmitted-light microscopy
Apart from data collection from tear microdesiccates using the standard fixed positions of a universal 5-position condenser system turret, additional observations were conducted by setting the condenser at intermediate positions between some of the standard ones. This part of the study aimed at obtaining single images of tear desiccates showing their four main morphological domains optimally. Because post-Ph2 positions of the turret disk, including Ph3 and DF, had resulted in a marked invisibilization of zone I, the new observations were focused on positions between BF and Ph1 as well as between Ph1 and Ph2. Turret positions between BF and Ph1 (Fig. 8), as well as others between Ph1 and Ph2 (Figs. 9, 10), produced microdesiccate images in which both zone I and the structured body of the tear desiccate could be jointly appreciated. Using these settings, zone I could be seen as a continuous, seemingly compact cylindrical structure bordering the whole desiccate (Figs. 8, 9, 10). A highly variable number of filamentous substructures appeared as integral parts of zone I. Thus, condenser positions over the range Ph1/Ph2 but closer to Ph2 (Ph1/Ph2+) showed the filamentous substructures as carvings on zone I (Fig. 10). On the other hand, the transition band could be seen as a complex substructure displaying morphological differences when observed by setting the condenser at various positions over the intermediate ranges BF/Ph1 and Ph1/Ph2. In addition, the body of the desiccate displayed a variety of major and minor crystalloids, including fern-like structures. Altogether, positions over the range Ph1-Ph2, either closer to Ph1 (Ph1+/Ph2) (Fig. 9) or closer to Ph2 (Ph1/Ph2+) (Fig. 10), proved successful in producing "balanced" images of desiccates in which the main features of zone I, the transition band, zone II and zone III could be jointly observed. Such images were highly reproducible; that is, microdesiccates produced from several tear samples taken from single healthy subjects and analyzed using selected illumination conditions over the range Ph1/Ph2 showed, with no exception, highly similar morphological profiles (Fig. 11).
Fig. 8 Both tear microdesiccate production and the lens system of the microscope were those described in the legend to Fig. 3. Both a roughly cylindrical continuous zone I, a complex transition band and the structured body of the tear desiccate displaying a variety of major and minor crystalloids could be jointly appreciated. Some filaments can be seen as integral parts of zone I
Discussion
In this study we have identified experimental conditions that will support the assessment of tear microdesiccates by variants of light microscopy. To date, studies involving the characterization of single tear microdesiccates have frequently used either darkfield microscopy or phase contrast microscopy and have been exclusively focused on assessing either the presence or absence of fern-like crystalloids. Furthermore, consideration of any other structural element of microdesiccates formed during tear water evaporation has been consistently disregarded [1,[8][9][10][11]. Recent studies using light microscopy have shown that a normal tear microdesiccate comprises several annularly distributed morphological domains or zones, namely zones II and III (two highly distinctive zones whose organization is based on fern-like crystalloids), a transition band (a narrow compact band whose structure is based on rope-like elements) and zone I (a translucent and barely visible outer circle of desiccates) [12]. By making adjustments in the method of focusing light onto the dry tear specimen by means of an Abbé condenser with a 5-position turret (brightfield, phase contrasts 1, 2 and 3, and darkfield), we have now been successful in identifying optimal conditions for the differential (and simultaneous) observation of some of those structural domains of tear microdesiccates. Thus, by using illumination systems other than the usual darkfield or phase contrast microscopy, zone I became clearly noticeable as an architecturally distinct component of normal tear microdesiccates. To date, zone I of tear microdesiccates has gone mostly unnoticed among tear fern analysts, despite being originally described in 1955 by Solé as an amorphous and barely visible structure that can be penetrated by tiny rod-like elements [14][15][16]. Furthermore, by means of finer adjustments consisting in positioning the condenser at intermediate positions between brightfield and phase 1, or between phases 1 and 2, both fern-like crystalloids and zone I, that is, the two most distinctive elements of a normal tear microdesiccate, could be seen simultaneously and with properly balanced resolution. An analysis of the specialized literature shows that practically none of the reports concerning tear microdesiccates (with the exception of that of Horwath et al. [17]) present either whole tear microdesiccates or their outer zone I [17][18][19][20][21]. Moreover, the descriptions in those reports refer only to the presence or absence of tear fern-like crystalloids. Such an observational bias among researchers and clinicians using light microscopy of tear microdesiccates as a diagnostic test for the assessment of the ocular surface seems to derive from very different sources. Firstly, the production of tear microdesiccates on glass surfaces and their analysis by light microscopy is widely known as the tear ferning test. Certainly, such nomenclature draws attention to a single goal [10,17,18]. On the other hand, most reports on tear microdesiccates coincide in documenting just a very minor area of every single microdesiccate [2,7,8,20,21]. Considering both the variety and the abundance of structures and domains usually present in normal tear microdesiccates, such a selection is by far a confounding factor. Another important factor accounting for the only partial use of the information derived from any tear microdesiccate is the lack of standardization among the observation procedures used in different studies. Thus, observations reported to have been made at magnifications of either 10× [21][22][23], 40× [8], 25 and 125× [17], 40-100× [24], 100× [25], or even 400× [9] are hardly comparable. Moreover, reports rarely indicate the particular combination of ocular and objective lenses used in the observations, which can also preclude comparisons and be of great significance for appropriate data collection. Quite often, the areas of the fields of view corresponding to the above-mentioned range of magnifications oblige authors to select a fraction of the tear microdesiccate to be exhibited as representative of the whole specimen. A closely related aspect leading to the same biased result can derive from the use of relatively large tear volumes to produce microdesiccates, such that it becomes impractical or impossible to view the whole specimen. In our experience, microdesiccates produced with tear volumes equal to or higher than 2 µL can hardly be seen under a common light microscope whose lowest-power objective lens is usually around 4-5× [12]. Unfortunately, data on the volume of tear used to produce microdesiccates are rarely communicated in specialized reports. Also related to methodological aspects that may markedly restrict the information provided by a tear microdesiccate is the use of some particular types of light microscope techniques. Among studies dealing with the characterization of tear microdesiccates, a majority involved the transmitted-light darkfield microscopy variant [12,13,17], while others used either phase contrast microscopy [24,25], visible light microscopy [9,10] or polarized-light microscopy [20]. Some reports do not provide sufficient technical data in this respect [15]. Morphological information obtained using those different experimental approaches can differ markedly.
Fig. 9 Representative image of a normal tear microdesiccate as observed by transmitted-light microscopy with Ph1+/Ph2 illumination. Both tear microdesiccate production and the lens system of the microscope were those described in the legend to Fig. 3. Again, both a roughly cylindrical continuous zone I, a complex transition band and the structured body of the tear desiccate displaying clearly defined major and minor crystalloids could be jointly appreciated. Filaments can also be seen as integral parts of zone I
Fig. 10 Representative image of a normal tear microdesiccate as observed by transmitted-light microscopy with Ph1/Ph2+ illumination. Both tear microdesiccate production and the lens system of the microscope were those described in the legend to Fig. 3. Also, both a roughly cylindrical and well-defined continuous zone I, a complex transition band and the structured body of the tear desiccate displaying major and minor crystalloids could be jointly appreciated. Filaments in zone 1 look like well-defined carvings
Fig. 11 Reproducibility of tear microdesiccates produced from single healthy subjects and imaged with Ph1+/Ph2 microscopy. Microdesiccates produced from quadruplicate 1-µL aliquots of a single sample of tear taken from a healthy subject and observed by transmitted-light microscopy with Ph1+/Ph2 illumination displayed marked similarities concerning the features of the four main morphological domains. Digital images of microdesiccates were captured at 25× magnification (10× eyepiece and 2.5× objective lens)
Concerning tear microdesiccates we have now shown that darkfield microscopy enhances imaging of fernlike crystalloids but, in turn, makes zone I practically invisible.
In an already classical report aimed at systematizing the assessment of tear microdesiccates, Rolando proposed the use of a 4-level numeric scale (I through IV) to evaluate the power of tear fluid to form fern-like crystalloids following spontaneous desiccation on a glass surface at ambient conditions [1,10]. In addition, the authors showed that levels I and II (higher fern-forming capability) were more frequent among tear fluids collected from normal eyes, whereas levels III and IV (lower fern-forming capability) were more common in tear fluids of patients with keratoconjunctivitis sicca [1,10]. Given the remarkable diversity of procedures to produce and evaluate tear microdesiccates, it is not surprising that Rolando's scale has been used and interpreted very differently by different authors (e.g. ref. [20] versus ref. [22]). Also, in a recent analytical study on typing tear microdesiccates in association with the tear ferning test, a new 5-point scale displaying improved discrimination, repeatability and reliability over the conventional Rolando scale was proposed in order to provide better support to researchers and clinicians using the test [18]. That study was also focused only on the fern-like crystalloids, with no consideration of any other structural element of tear microdesiccates [18]. Despite these various technical, methodological and interpretive limitations, acceptable sensitivity and specificity values of the tear ferning test in screening for dry eye have been reported [20,26,27]. Certainly, the properties displayed by a tear microdesiccate should account at least partly for the quality of the tear fluid from which it is produced. In this context, the link made by Rolando between a morphological feature of tear microdesiccates and tear quality is highly valuable and should be given first consideration. In accordance with that premise, our study was aimed at defining basic experimental conditions allowing the observer to characterize whole tear microdesiccates produced under standard conditions. Thus, the combined use of a tear volume of 1-1.5 µL to produce a microdesiccate and a 2.5× objective lens for its analysis represented primary conditions for recognizing a whole microdesiccate. To resolve and characterize the main morphological domains of a tear microdesiccate, the use of alternative illumination settings, relative to the basic positions of a standard 5-position turret condenser, was found to be equally important. In this study, some of the domains of a tear microdesiccate could be consistently resolved by using particular types of illumination. In agreement with a number of previous reports, the major tear fern-like crystalloids can be properly resolved using darkfield microscopy or some types of phase contrast illumination (Ph3). However, under this type of illumination zone I of tear microdesiccated specimens becomes practically invisible. Conversely, with some phase contrast illuminations (Ph1) the borders of zone I become clearly demarcated, but resolution of the centermost fern-like crystalloids is significantly reduced. On the other hand, because of the consistent lack of use of stains in the assessment of tear microdesiccates, brightfield microscopy has not yet been exploited for this purpose. Accordingly, none of the standard positions of the 5-position condenser by itself allowed a comprehensive description of a whole tear microdesiccate.
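For readers unfamiliar with the grading, a minimal sketch of how the Rolando scale partitions ferning capability might look as follows. The mapping encodes only the level descriptions given above (levels I-II: higher fern-forming capability, more frequent in normal eyes; levels III-IV: lower capability, more common in keratoconjunctivitis sicca); the function name and structure are illustrative, not part of any published tool.

```python
# Illustrative only: encodes the Rolando grade interpretation described in the text.
ROLANDO_INTERPRETATION = {
    "I":   ("higher fern-forming capability", "more frequent in normal eyes"),
    "II":  ("higher fern-forming capability", "more frequent in normal eyes"),
    "III": ("lower fern-forming capability",  "more common in keratoconjunctivitis sicca"),
    "IV":  ("lower fern-forming capability",  "more common in keratoconjunctivitis sicca"),
}

def interpret_rolando(grade: str) -> str:
    """Return the textbook interpretation of a Rolando ferning grade (I-IV)."""
    capability, association = ROLANDO_INTERPRETATION[grade.upper()]
    return f"Grade {grade}: {capability}; {association}."

if __name__ == "__main__":
    for g in ("I", "II", "III", "IV"):
        print(interpret_rolando(g))
```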
In order to attain views of microdesiccates in which both zone I and the domains displaying fern-like crystalloids were jointly resolved, additional illumination settings provided by intermediate positions between the five fixed positions of the turret condenser were explored. Thus, illumination of tear microdesiccates from healthy subjects at intermediate positions between the standard brightfield and phase 1 positions, or between the standard phase 1 and phase 2 positions, yielded whole tear microdesiccates showing simultaneously the fern-like crystalloids of zones II and III, a compact and structured zone I and a complex transition band. Recently reported studies from our laboratory have shown that the main domains of tear microdesiccates have distinctive physicochemical characteristics [28]. In that regard, using energy dispersive X-ray analysis (EDXA) coupled to scanning electron microscopy, Pearce and Tomlinson showed the presence of sulphur (together with K+ and Cl−) at the edge of the dried teardrop but not in the fern-like crystalloids [29]. Thus, different domains of tear microdesiccates may contribute particular structural or functional properties to the tear film covering the eye surface [30,31]. In accordance with this postulate, both the occurrence of major crystalloids in zone III (a common feature among normal microdesiccates typed as Rolando scores I or II) and a seemingly structured zone I (a novel feature shown in this study) can be viewed as structural elements of normal tear microdesiccates whose scrutiny may shed some light on tear quality. Altogether, the assessment of whole tear microdesiccates may become a highly valuable source of information on normality or abnormality of the tear fluid. Far from contradicting Rolando's link between an altered score in the tear ferning test and physiopathological abnormality of the tear fluid, our findings complement, enrich and diversify the possibilities of advantageously linking the morphology of whole tear microdesiccates with structural, compositional and functional aspects of tear fluid in individual patients and eyes. Clinical research in that direction should shed important light on this new consideration of tear microdesiccates.
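The illumination findings above can be summarized as a simple lookup of which structural domains each setting resolves well. This sketch merely restates the results reported here (setting names such as "Ph1+/Ph2" follow the paper's notation for intermediate turret positions); it is not a general-purpose microscopy tool.

```python
# Summary of the illumination findings reported in this study (illustrative).
RESOLVED_DOMAINS = {
    "darkfield": {"fern-like crystalloids (zones II-III)"},   # zone I practically invisible
    "Ph3":       {"fern-like crystalloids (zones II-III)"},
    "Ph1":       {"zone I borders"},                          # central crystalloid resolution reduced
    "BF+/Ph1":   {"fern-like crystalloids (zones II-III)", "zone I", "transition band"},
    "Ph1+/Ph2":  {"fern-like crystalloids (zones II-III)", "zone I", "transition band"},
}

def settings_resolving(domain_keyword: str):
    """Return the illumination settings whose resolved-domain set mentions the keyword."""
    return [s for s, domains in RESOLVED_DOMAINS.items()
            if any(domain_keyword in d for d in domains)]

print(settings_resolving("zone I"))   # Ph1 and the intermediate positions
```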
Conclusions
Imaging of entire tear microdesiccates by transmitted-light microscopy depends upon illumination. A more comprehensive description of tear microdesiccates on the basis of structural domains can be achieved by combining illumination methods (brightfield, phases 1-3 and darkfield). Optimal conditions for the differential observation of structural domains of tear microdesiccates were identified. Thus, zones II and III (fern-like crystalloids) and zone I (the outermost homogeneous continuous structure) can now be considered the two most distinctive elements of a normal tear microdesiccate. Both of them can be seen simultaneously and with a properly balanced resolution by transmitted-light microscopy.
Subjects
Fourteen subjects (10 men and 4 women; age range 18-27 years) served as healthy volunteers. All of them fulfilled the following criteria: (a) Ocular Surface Disease Index (OSDI) score of 12 or less [32], (b) Schirmer I score of 10 mm or more at 5 min [33], (c) fluorescein break-up time (FBUT) score of 5 s or more [34], (d) ferning score I or II [10,11], (e) tear osmolarity (TearLab Osmolarity System®) of 316 mOsm/L or less [35]. In addition, all subjects were neither contact lens wearers nor artificial tear users and had not taken any medication during the 3 months before tear assessment. All subjects acted as unilateral tear donor volunteers and signed an informed consent. The study was conducted according to the recommendations of the Declaration of Helsinki and approved by both the Ethics Committee of the Faculty of Medicine, University of Chile and the Ethics Committee of Fondecyt (Fondo de Desarrollo Científico y Tecnológico), Chile.
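As a compact restatement of the inclusion criteria, a hedged sketch of an eligibility check is shown below. The thresholds are taken verbatim from the list above, while the function and field names are hypothetical.

```python
def is_eligible_healthy_volunteer(osdi: float, schirmer_mm: float,
                                  fbut_s: float, ferning_grade: str,
                                  osmolarity_mosm_l: float) -> bool:
    """Apply the study's inclusion criteria for healthy volunteers.

    Thresholds come directly from the Subjects section; contact lens wear,
    artificial tear use, and recent medication are screened separately.
    """
    return (
        osdi <= 12                          # OSDI score of 12 or less
        and schirmer_mm >= 10               # Schirmer I of 10 mm or more at 5 min
        and fbut_s >= 5                     # FBUT of 5 s or more
        and ferning_grade in ("I", "II")    # Rolando ferning score I or II
        and osmolarity_mosm_l <= 316        # tear osmolarity of 316 mOsm/L or less
    )

print(is_eligible_healthy_volunteer(8, 14, 7, "I", 301))   # True
```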
Tear collection
From each eye a single 3-min tear sample was taken by using absorbing polyurethane mini sponges as detailed elsewhere [36]. Aliquots of each tear sample were taken for desiccation assays immediately after collection.
Tear desiccation and image capture
Unless otherwise specified, 1.0-µL aliquots were taken from each fresh tear sample using a 2-µL Gilson micropipette fitted with an ultrafine tip and placed sharply on the center of individual glass microscope slides positioned horizontally. Tear aliquots were allowed to dry spontaneously at ambient conditions (temperature range of 18-25 °C, relative humidity range of 36-40%, at 570 meters above sea level (MASL)). Micrographs of the dry specimens, named microdesiccates, were taken using a Zeiss Axiostar Plus microscope (objective lens = 2.5×, ocular lenses = 10×) fitted with a universal 5-position condenser turret (brightfield, phases 1, 2 and 3 and darkfield) and with a Canon PowerShot G10 14.7-megapixel digital camera. Microdesiccates were routinely prepared in triplicate and classified as types I through IV according to Rolando's criteria [1,10].
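The choice of a 2.5× objective for imaging whole microdesiccates can be made concrete with the standard field-of-view relation (FOV diameter = eyepiece field number / objective magnification). The field number of 18 mm used below is an assumed typical value, not a specification from the paper.

```python
# Sketch: why a low-power objective is needed to see a whole microdesiccate.
# ASSUMPTION: eyepiece field number of 18 mm (typical; not stated in the paper).
FIELD_NUMBER_MM = 18.0

def fov_diameter_mm(objective_mag: float, field_number_mm: float = FIELD_NUMBER_MM) -> float:
    """Diameter of the visible field for a given objective magnification."""
    return field_number_mm / objective_mag

total_mag = 10 * 2.5                 # 10x eyepiece x 2.5x objective = 25x, as in Fig. 11
print(f"total magnification: {total_mag:.0f}x")
print(f"FOV at 2.5x objective: {fov_diameter_mm(2.5):.1f} mm")   # ~7.2 mm
print(f"FOV at 40x objective:  {fov_diameter_mm(40):.2f} mm")    # ~0.45 mm
```

Under this assumption, a 2.5× objective covers a field several millimeters wide, enough for a whole 1-µL microdesiccate, whereas a 40× objective covers well under a millimeter.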
"year": 2016,
"sha1": "56673bb6b430e677e5557871f93a2253bdd57b08",
"oa_license": "CCBY",
"oa_url": "https://biolres.biomedcentral.com/track/pdf/10.1186/s40659-016-0089-0",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "56673bb6b430e677e5557871f93a2253bdd57b08",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Supermicrosurgical replantation of a small amputated nasal tissue in a child
Key Clinical Message This study reports a case of an 8‐year‐old boy who suffered from a dog bite injury to the nose. The amputated nasal tissue measured approximately 1.0 × 1.5 cm and included part of the tip, alar, and soft triangle subunits. Both ends of an artery of less than 0.5 mm were found, and replantation was performed. Chemical leeching was performed postoperatively. At 5‐year follow‐up, a good aesthetic result was achieved.
Case Report
An 8-year-old boy was brought to the Emergency Department of a tertiary trauma hospital in New Zealand one evening after a dog bite injury. The avulsed tissue was retrieved by family and kept cool in an artificial ice gel bag prior to being seen by the plastic surgery resident. At the time of review 6 h postinjury, however, it was noted that the artificial ice gel bag had become warm. Intravenous amoxicillin with clavulanic acid was given. The patient was up to date with his immunization schedule.
Gross inspection revealed a defect that included several nasal subunits (partial nasal tip, medial third of his right alar, and the soft triangle, Fig. 1). The contaminated amputated composite tissue measured approximately 1 × 1.5 cm² and was composed of skin, subcutaneous fat, cartilage, and mucosa. As the patient had not fasted, the procedure was postponed until 8 a.m. the next day. The avulsed nasal tissue was kept at 4°C overnight. The initial surgical plan was to reapply the amputated part as a composite graft. Risks and benefits of the procedure, including future reconstruction, were discussed with the mother. The possibility of replantation was brought up, but it was mentioned that this would be subject to the intraoperative findings.
At the time of surgery, both the wound and the graft were gently debrided under 2.5× loupe magnification.
During debridement, a pulsating vessel was observed at the junction of the tip and right alar at the wound edge (Fig. 2). The amputated part was carefully positioned within the defect and the mucosal surface repaired. After this, under surgical microscope magnification, an opposing vessel end was found on the amputated tissue where the pulsating vessel had been noted. The wound bed end of the vessel was trimmed and irrigated with papaverine and intraluminally with heparinized saline solution until a pulsatile stream of blood was observed. After ascertaining that the artery could be repaired without tension, anastomosis was attempted. Four interrupted 11-0 nylon sutures were used in a quadrangular fashion for an end-to-end anastomosis of the artery (Fig. 3). The time taken to perform the microanastomosis was approximately 40 min. At the release of the vessel clamp, the vessel was observed to be patent. The replanted tissue initially became pink, though by the end of the procedure it had a blue hue (Fig. 2), with demonstrable capillary refill. The skin sutures were loosely, but accurately, tacked. During the first 24-h postoperative period, the operating surgeon manually induced bleeding from the wound by applying heparinized saline.
The patient was discharged the next day on oral cephalexin. There was no postoperative infection. At the one-month follow-up appointment, the replanted tissue remained viable (Fig. 4). The outcome was excellent, requiring no secondary reconstructive procedure; the last follow-up appointment was 5 years postoperatively (Fig. 5). The patient (then 13 years old) reported that no one had ever noticed the injury to his nose, nor is he conscious of it.
Discussion
In this study, we report an artery-only nasal replantation of a 1.5-cm² amputated nasal tip/alar tissue using a supermicrosurgery technique. A satisfactory aesthetic outcome was achieved, obviating the need for subsequent secondary reconstruction. In this section, we discuss the historical outcomes of composite grafting in traumatic nasal amputation and how advancement in microsurgery challenges the conventional teaching of the size threshold for attempting replantation.
It is well accepted that successful replantation of the native amputated tissue will yield the best outcome, and therefore should be attempted whenever possible [26]. Most successful nasal replantations were reported in the last two decades. A significant portion of these reported cases were related to human [14,15,27] or dog bites [12,15,20,21], with avulsion of the vessels, crushing of the amputated parts, and contamination, all of which are predictors of poor outcome. Excellent aesthetic results achieved in these cases [12,20] demonstrate that suboptimal conditions should not preclude an attempt at replantation.
The size threshold for free composite grafting is more arbitrary than scientific. Some [1] even suggested that no part of the graft should be more than 0.5 cm from the viable cut edge of the wound. This recommendation may be in the context of reconstructing a defect created in elective procedures. As the technique of supermicrosurgery has been shown to be feasible, the spectrum of replantable amputated parts has expanded [28]. This challenges the conventional threshold for composite grafting in traumatic nasal amputation [19,20]. Kim et al. [19] reported a successful case of nasal replantation of a 2.5 × 2.6 cm² amputated segment, anastomosing an artery and vein of around 0.6 mm diameter with six interrupted 11-0 sutures. While the size of our vessel was not exactly measured, only four interrupted 11-0 sutures were required, indicating a comparably smaller vessel.
We do not recommend attempting to replant all small nasal amputation parts at all costs. We would, however, like to use this case to illustrate that, when the opportunity arises, supermicrosurgical replantation may be a better option to ensure survival of the amputated part. This may only require a careful survey around the wound edge for a pulsatile vessel. The effort is certainly justified, as the resultant defect, should the composite graft fail, would have required staged forehead flap reconstruction. Furthermore, the defect in this case is known to be very unfavorable for the survival of composite grafts. Chandawarkar et al. [29], in their experience with auricular composite grafting, stated that at the columellar-lobular junction, alar rim and the soft triangle, partial composite graft loss is the rule rather than the exception.
In Kim et al.'s successful case, exploration and anastomosis of one artery and one vein of approximately 0.6 mm took a total operating time of under 3 h. The anesthetic time for this case was <2 h, illustrating that replantation can be performed within a reasonable duration [19]. Many similar cases reported in the literature have consistently reported success in replanting nasal subunits with only arterial repair [11, 12, 14, 15, 17-21, 24, 25], suggesting that surveying for an available artery alone may suffice and obviating the time spent looking for a vein.
Finally, we did not apply medicinal leeches in this case. For a pediatric patient, medicinal leeching on an area such as the nose requires sedation and intubation; we did not feel it was justified here. Furthermore, leeches carry the risk of infection not only from their commensal organisms but also of prion transmission. As for the duration of chemical leeching, the authors feel that, were we to attempt another similar replantation, we would continue chemical leeching for longer. Nonetheless, the replanted nasal tissue survived in this case. Even if venous congestion was an issue here, in the face of various suboptimal conditions, the nourishing of the amputated part through the anastomosis probably had a significant impact on its survival. Interestingly, in experimental animal studies, Nakajima observed vessels to have grown into the periphery of the skin flap by day 2 postoperatively [30].
Ayurek et al., in a 2-cm² alar replant, observed resolution of venous congestion by day 3 [11]. It could be that for a small-volume facial tissue replant, a shorter duration is required for inosculation.
In conclusion, we propose that in selected cases the rigid size threshold for composite grafting should be put aside. If an artery is found and flow is healthy, microsurgical replantation should be attempted. The success of such attempts avoids secondary staged reconstruction.
Authorship
RT: performed surgery, first follow-up, drafting and revision of the paper, illustration. MW: oversaw management of the patient and follow-ups, helped in manuscript preparation and revision.
Conflict of Interest
None declared.
"year": 2017,
"sha1": "bb9890ce9ec166a27450c41dd31a32ea6d44c071",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ccr3.1223",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bb9890ce9ec166a27450c41dd31a32ea6d44c071",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Downregulation of peptide transporter genes in cell lines transformed with the highly oncogenic adenovirus 12.
The expression of class I major histocompatibility complex antigens on the surface of cells transformed by adenovirus 12 (Ad12) is generally very low, and correlates with the high oncogenicity of this virus. In primary embryonal fibroblasts from transgenic mice that express both endogenous H-2 genes and a miniature swine class I gene (PD1), Ad12-mediated transformation results in suppression of cell surface expression of all class I antigens. Although class I mRNA levels of PD1 and H-2Db are similar to those in nonvirally transformed cells, recognition of newly synthesized class I molecules by a panel of monoclonal antibodies is impaired, presumably as a result of inefficient assembly and transport of the class I molecules. Class I expression can be partially induced by culturing cells at 26 degrees C, or by coculture of cells with class I binding peptides at 37 degrees C. Analysis of steady state mRNA levels of the TAP1 and TAP2 transporter genes for Ad12-transformed cell lines revealed that they both are significantly reduced, TAP2 by about 100-fold and TAP1 by 5-10-fold. Reconstitution of PD1 and H-2Db, but not H-2Kb, expression is achieved in an Ad12-transformed cell line by stable transfection with a TAP2, but not a TAP1, expression construct. From these data it may be concluded that suppressed expression of peptide transporter genes, especially TAP2, in Ad12-transformed cells inhibits cell surface expression of class I molecules. The failure to fully reconstitute H-2Db and H-2Kb expression indicates that additional factors are involved in controlling class I gene expression in Ad12-transformed cells. Nevertheless, these results suggest that suppression of peptide transporter genes might be an important mechanism whereby virus-transformed cells escape immune recognition in vivo.
MHC class I genes play key roles in numerous immunological processes. Among these is the presentation of "foreign" antigens for recognition by CTLs (1). By this mechanism, the immune system is able to control infectious diseases and the growth of tumor cells (2,3). Indeed, tumors of various origins have been shown to express low levels of class I antigens, a characteristic that might contribute to their escape from immune surveillance (4). In support of this, Tanaka et al. (5) reported that the reexpression of class I antigens inhibited tumor growth. However, transfection of cell lines by class I genes does not always result in the expression of class I molecules on the cell surface. Weis and Seidman (6) reported that transfection of L cells by a class I gene resulted in increased levels of class I mRNA, but failed to significantly increase the cell surface expression of class I antigens. Thus, the suppressed expression of class I molecules by cells may occur during transcription of the genes, or during synthesis, assembly, and transport to the cell surface. (R. Rotem-Yehudar and S. Winograd made equal contributions to this paper.)
Transport of class I molecules to the cell surface depends on their assembly with peptides, which are usually 8-9 amino acids long (7). Several researchers (8-10) have demonstrated that such peptides arise from cleavage of proteins in the cytosol and are actively transported into the endoplasmic reticulum (ER), where assembly with class I molecules takes place. Presumably, if the antigen processing machinery of the cell is functioning normally, tumor-associated antigens and antigens originating from oncogenic viruses generate such peptides (11-13). After assembly, the molecules are transported through the Golgi apparatus to the cell surface. Klar and Hämmerling (14) showed that tumor cells, such as the lung carcinoma CMT 64.5 and the fibrosarcoma BC2, can synthesize class I heavy chains and β2-microglobulin (β2m), but that these are only assembled after treatment with γ-interferon. Furthermore, mutant cells with deletions or mutations in the MHC locus, but with intact class I genes, such as the murine RMA-S and the human T2 or 721.174 cells, have drastically reduced cell surface levels of class I as a result of a defect in peptide transport into the ER (10,15,16). Expression can be restored by transfecting the cells with the MHC-encoded peptide transporters TAP1 and TAP2 (17,18). Recently, Restifo et al. (19) demonstrated that tumor cell lines differ in their ability to process antigens, a phenomenon that is correlated with poor class I expression and low mRNA levels of both peptide transporters and proteasome components. However, this study did not address the question of whether poor antigen processing is associated with transformation, or represents the natural regulation of class I expression in the tissue of origin.
Tumors transformed by the highly oncogenic adenovirus 12 (Ad12) have been used as model systems to demonstrate a direct correlation between tumorigenicity and lack of host immune responses due to decreased class I expression (20-22). Whereas both Ad5 and Ad12 transform primary fibroblasts in vitro, tumors are generated in vivo only by Ad12-transformed cells that are deficient in class I expression. Downregulation of class I expression was reported to occur at both the transcriptional (23-26) and posttranscriptional levels (27). In our previous reports we described a large panel of cell lines transformed by Ad12, and compared them to both Ad5-transformed cell lines and immortalized ("normal") cell lines derived from the same pool of fibroblasts (27). The parental fibroblasts were prepared from transgenic mice which express both endogenous class I genes and a miniature swine class I transgene (PD1) that is not located in the MHC locus. Although all cell lines express class I mRNA, the levels of class I antigens, but not of other receptors (such as MAC-2 and CD44), are significantly decreased in most Ad12-transformed cell lines. Comparison of the three class I gene transcripts in Ad12-transformed cells revealed that the levels of PD1 and H-2D are about normal, whereas the H-2K level is two- to fivefold lower (Winograd, S., R. Rotem-Yehudar, and R. Ehrlich, manuscript in preparation). Immunoprecipitation analyses, with or without endo-β-N-acetylglucosaminidase H (endo H) treatment, revealed that Ad12-transformed cell lines are deficient in both class I molecule synthesis and the ability to transport these molecules through the Golgi apparatus (28). Both H-2 and PD1 transport were inhibited, implying the existence of a general mechanism that affects maturation of all class I products in these cells. These results raised the possibility that transformation by certain viral serotypes can affect peptide transport. In this paper, we pursue the causes of aberrant transport and expression of class I molecules in Ad12-transformed cells.
Materials and Methods
Mice. C57B1/10 mice, which are transgenic for a miniature swine class I antigen, PD1 (PD1 transgenic mice), have been previously described (27,29). The mice, which are homozygous for the transgene, were bred at the Tel Aviv University breeding facility.
Cell Cultures. All the cell lines were derived from cultured mouse embryonal fibroblasts (MEF) as previously described (27). M1 was derived from spontaneous in vitro immortalization of MEF after growth crisis (27). VAD12.78, VAD12.79, and VAD12.42 were transformed by infection of MEF with Ad12 (27). A501, A503, and A505 were transformed by transfection of MEF with the Ad5-XhoI-C fragment in a pSV2Neo plasmid (30,31), followed by selection with 800 µg/ml of G418 (Sigma Chemical Co., St. Louis, MO). The plasmids were a gift from Dr. A. Van der Eb (University of Leiden, Leiden, The Netherlands). ME1 and ME5 were derived from transformation of BALB/c-MEF by Ad5 and were a gift from Dr. A. M. Lewis (National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, MD) (32).
The cell lines were maintained in DMEM, supplemented with 2 mM glutamine, 10% FCS, penicillin, streptomycin, gentamycin, and amphotericin B at the recommended concentrations (33). Media and supplements were purchased from Biological Industries, Bet Haemek, Israel.
Stable Transfection. Ad12-transformed cells (VAD12.79) were transfected by the calcium phosphate-DNA coprecipitation method (28). The transfection cocktail contained 10 µg plasmid DNA and 5 µg of carrier DNA (sheared salmon sperm DNA; Sigma Chemical Co.). 24 h after transfection, the cells were washed with PBS, and after a further 24 h the medium was supplemented with 800 µg/ml G418 (Sigma Chemical Co.). The transfected cells were either grown to confluence and analyzed as a bulk mixture of cells, or individual clones were isolated and expanded in culture.
Probes and Plasmids. The PD1-specific probe is a SacI-BamHI fragment containing exons 2-7 of the PD1 gene (29); the H-2 probe is an EcoRI-HindIII fragment derived from pH-2a33 (H-2Kd) (34); the β2m probe is a PstI-PstI fragment from β2m cDNA cloned in pBR322 (35); the actin probe is a PstI-PstI fragment from chicken β-actin cDNA cloned in pBR322 (36); the Ad5E1a probe is an EcoRI-EcoRI fragment containing the E1a of Ad5 cloned in RSVNeo and was a gift from Dr. A. Van der Eb (University of Leiden); and the TAP1 and TAP2 probes are XbaI-HindIII and KpnI-KpnI fragments containing TAP1 and TAP2 cDNAs, respectively, cloned in pcDNAI Neo (Invitrogen, San Diego, CA), and were a gift from Dr. J. J. Monaco (University of Cincinnati, Cincinnati, OH) (37).
Induction of Class I Expression. The cells were incubated for 18 h with the peptides listed in Table 1 in serum-free media (Bio MPM 1, multipurpose serum-free medium for adherent cells; Biological Industries). Peptides were synthesized, purified, and analyzed as previously described (42).
FACS® Analysis. The cells were harvested by mild trypsinization, followed by washing with media supplemented with 5% FCS and 0.01% sodium azide. Approximately 10⁶ cells were incubated at 4°C with the appropriate concentration of the first antibody for 30 min, washed, and then incubated in the dark for another 30 min with the second antibody. Controls were stained with a nonrelevant first antibody and the second antibody. The cells were washed with PBS and the fluorescence intensity analyzed by a cell sorter (Becton Dickinson & Co., Mountain View, CA). The hybridization solution contained 4× SSC, 50% formamide, 0.2% SDS, 0.1% polyvinylpyrrolidone, and 100 µg/ml sheared salmon sperm DNA. Hybridizations were carried out at 42°C, followed by washes with 2× SSC, 0.1% SDS at room temperature, and 0.2× SSC at temperatures ranging between 55 and 65°C. The blot was exposed to RX X-ray films (Fuji, Tokyo, Japan) and the resulting bands were scanned with a densitometer.
After stripping with a boiling solution of 0.1% SDS, the blot was used seven times for hybridizations.
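The readouts quoted throughout the Results (percent positive cells and mean fluorescence units per cell) can be made concrete with a minimal sketch of how they might be computed from single-cell intensities. Gating on a threshold derived from the nonrelevant-antibody control is the standard approach implied by the controls described above; the exact gating used in the study is not specified, and all names and numbers below are illustrative.

```python
import statistics

def facs_summary(intensities, control_intensities, percentile=99):
    """Percent-positive and mean fluorescence, gated on a control-derived threshold.

    The threshold is set so that ~1% of control-stained cells exceed it
    (a common convention; an assumption, not the study's documented gate).
    """
    ctrl = sorted(control_intensities)
    threshold = ctrl[int(len(ctrl) * percentile / 100) - 1]
    positive = [x for x in intensities if x > threshold]
    pct_positive = 100.0 * len(positive) / len(intensities)
    mean_fu = statistics.mean(positive) if positive else 0.0
    return pct_positive, mean_fu

# Illustrative data only.
sample = [5, 7, 30, 42, 55, 61, 8, 73, 90, 12]
control = [4, 5, 6, 7, 5, 6, 8, 7, 5, 6]
print(facs_summary(sample, control))
```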
Cell Surface Expression of Class I Antigens Is Decreased After Transformation with Ad12. The following studies were carried out with a representative normal cell line obtained by spontaneous immortalization of MEF from C57B1/10.PD1 mice (M1), Ad12- and Ad5E1-transformed MEF from the same mice (27,28), and Ad5-transformed MEF from BALB/c mice (32). The cell surface level of expression and the percentage of positive cells for H-2 antigens and the transgene product PD1 were compared in the adenovirus-transformed and the normal cell lines (Fig. 1 and Table 2). The Ad12-transformed cell lines VAD12.42 and VAD12.79 expressed very low levels of all class I antigens, as reflected both by the percentage of positive cells in the population (Table 2) and the relative fluorescence per cell (Fig. 1 and Table 2). Cell lines derived by transformation of MEF with Ad5E1 (A501 and A505) expressed higher levels of class I antigens than M1. For comparison, we ana- […] (58). To determine whether the level of class I molecules on the surface of Ad12-transformed cells is increased after incubation at reduced temperatures, we cultured the cell lines for 18 h at 26°C and examined cell surface expression by FACS® analyses. As shown in Fig. 2, class I expression was not enhanced in the normal cell line (M1), whereas the expression of epitopes recognized by PT85A (anti-PD1) was enhanced in Ad12-transformed cell lines. A two-fold increase in the mean fluorescence/cell of PD1-positive cells was also observed (data not shown). Nevertheless, PD1 expression reached <50% of the maximal level expressed in M1 cells. Only one cell line (VAD12.79) demonstrated an increased number of H-2-positive cells after incubation at 26°C (Fig. 2), but even for this cell line the mean fluorescence/cell did not change significantly (data not shown).
Specific Peptides Induce Expression of Class I Molecules on the Surface of Ad12-transformed Cells. The enhanced expression of PD1 at lower temperatures raised the possibility that peptide transport is inefficient in these cells. Class I molecules in cells lacking peptide transporters, and in turn exhibiting low levels of the relevant peptides in the ER, may be stabilized by extracellular peptides (43,47). To determine whether this was the case with our cell lines, we cultured the cells in the presence of peptides known to bind H-2Db or H-2Kb. To prevent enzymatic degradation, or effects mediated by bovine β2m, all the incubations were done in serum-free media. The peptides used are listed in Table 1. Three H-2Kb-restricted peptides were used; however, the H-2Kb-restricted peptides Sendai NP 324-332 and Ova 257-264 have also been shown to somewhat stabilize H-2Db expression (54,55). Two H-2Dd-, three H-2Kd-, and one H-2Ld-specific peptides were used to analyze effects on class I expression by ME1 cells, which are of BALB/c origin.
Whereas Fig. 3, A and B show that the H-2Db-specific peptides enhanced H-2Db expression on VAD12.79 by two- to threefold, resulting in a percentage of positive cells equal to that in M1 (data not shown), PD1 and H-2Kb expression were not further enhanced by these peptides (Fig. 3, C and D). Similar results were observed with the addition of H-2Db-specific peptides to VAD12.42 cells (Table 3). None of the peptides induced additional class I expression on M1 or E1Ad5-transformed cell lines (Table 3). A two- to threefold enhancement of class I expression could be obtained for H-2Kb, but required the addition of higher concentrations of H-2Kb-specific peptides (150 µM) (Fig. 4). In contrast to the results obtained following treatment with H-2Db-specific peptides, where more than 50% of the cells expressed detectable levels of class I antigens, cells expressing detectable levels of H-2Kb after treatment with the relevant peptides constituted only 15-35% of the population. The fact that H-2Kb expression could not be induced to the same extent as H-2Db expression is consistent with the observed transcriptional downregulation of the H-2Kb gene (Winograd et al., manuscript in preparation). Treatment with OVA 257-264 resulted in some induction of H-2Db expression, as expected, but only at the highest peptide concentration (150 µM). H-2Kb-specific peptides did not affect PD1 expression. In contrast to the induction of cell surface expression of class I molecules after the treatment of Ad12-transformed cells with specific peptides, the levels of class I antigens on the surface of the Ad5-transformed cell lines ME1 and ME5, the Ad5E1-transformed cell lines, and M1 were not increased by incubation with the peptides listed in Table 1 (data not shown). Thus, decreased cell surface expression of class I antigens in the Ad12-, but not in the Ad5-transformed cell lines is apparently caused by a lack of peptides in the ER.
Preincubation of cells with exogenous β2m was previously shown to facilitate the association of peptides with class I heavy chains (58-60). To examine the effect of such treatment on our cell lines, they were incubated with recombinant human β2m, or with a mixture of peptides and human β2m. Neither the addition of human β2m alone, nor human β2m plus the highest tested peptide concentration, resulted in further enhancement of class I expression (results not shown). Thus, exogenously added β2m did not facilitate class I expression in this system.
Suppression of Peptide Transporter Genes in Ad12-transformed Cells. The apparent deficiency of peptides for assembly and transport of class I molecules to the cell surface raised the possibility that peptide transporter genes are downregulated in Ad12-transformed cell lines. Hybridization of total RNA from seven individual Ad12-transformed cell lines with TAP-derived probes revealed that the level of peptide transporter mRNA, particularly TAP2, is significantly decreased compared to that in Ad5-transformed cell lines. However, because the expression of both TAP1 and TAP2 was very low in MEF and MEF-derived cell lines, poly A mRNA was isolated in order to quantitatively compare TAP expression in the different cell lines. Fig. 5 and Table 4 summarize the results of Northern hybridization analyses of poly A mRNA isolated from the various cell lines with class I, β2m, Ad5E1a, and peptide transporter-derived probes. The most marked difference between Ad12-transformed and the other cell lines was in the level of expression of the TAP2 gene. The hybridization signal with the TAP2 probe was very low in Ad12-transformed cells (at least a 100-fold decrease in TAP2 mRNA in VAD12.42 and VAD12.79 as compared with M1). In cell lines transformed by Ad5E1 (A501 and A505) […] with M1 (Table 4). ME1 and ME5, which originated from Ad5 transformation of BALB/c fibroblasts, and one of the Ad5E1-transformed cell lines (A503), expressed levels of TAP2 mRNA similar to those in M1. TAP1 gene expression was suppressed by 5-10-fold in Ad12-transformed cells compared with M1 cells. By contrast, in two cell lines transformed by Ad5E1 (A501 and A505) there was a 10-fold increase in the steady-state level of TAP1 transcripts in comparison to M1 cells. A two- to threefold induction of TAP1 expression also occurred in A503 and ME1. One of the major differences between ME1 and ME5 is the absence of E1a expression in ME5 (Fig. 5), although E1b mRNA is expressed (results not shown). The absence of E1a proteins in ME5 and the low level of E1a in A503 could relate to the difference in TAP1 gene expression between the cell lines.
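A hedged sketch of the fold-change arithmetic behind these comparisons: band intensities from the densitometer scans described in the Methods are first normalized to the actin loading control, then expressed relative to the normal line M1. The numbers below are placeholders, not the study's measurements.

```python
def fold_change(target_band, actin_band, target_ref, actin_ref):
    """Actin-normalized fold change of a transcript relative to a reference lane (e.g. M1)."""
    return (target_band / actin_band) / (target_ref / actin_ref)

# Placeholder densitometry values (arbitrary units), NOT data from the paper.
m1    = {"TAP2": 100.0, "actin": 50.0}
vad12 = {"TAP2": 1.0,   "actin": 48.0}

fc = fold_change(vad12["TAP2"], vad12["actin"], m1["TAP2"], m1["actin"])
print(f"TAP2 in VAD12 vs M1: {fc:.3f}-fold")   # ~0.01, i.e. ~100-fold suppression
```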
Reconstitution of Class I Expression in an Ad12-transformed Cell Line Stably Transfected with a TAP2 Expression Vector. To determine whether class I expression can be reconstituted by expression of peptide transporter genes, an Ad12-transformed cell line (VAD12.79) was transfected with either TAP1, TAP2, a mixture of TAP1 and TAP2 expression constructs, or a pSV2Neo control construct. TAP1 and TAP2 were transcribed from the CMV promoter in these constructs. Since the neomycin gene and the TAP cDNAs were located on the same plasmid, all transfectants were pooled and class I expression was analyzed by FACS®. In addition, several individual clones were expanded in culture and analyzed. The FACS® analyses of the bulk transfected cultures are shown in Fig. 6, B-D. TAP1 transfection did not increase the expression of any of the class I molecules (Fig. 6 C compared with A and B), although the level of TAP1 transcripts was 10-fold higher in the transfected cells than in M1 (data not shown). TAP2 transfection partially reconstituted class I expression in the bulk culture cells (Fig. 6 D). PD1 expression was increased on 55% of the cells, and the mean fluorescence units (F.U.)/cell increased from 9 to 22 (Fig. 6 D1). The pattern of PD1 expression suggested the existence of three subpopulations in the pool of transfectants: a small subpopulation negative for PD1 expression, a large subpopulation with enhanced PD1 levels/cell, and a population of ~25% of the cells that demonstrated PD1 levels similar to those in M1 (compare with Fig. 6 F). The reconstitution of expression of H-2 antigens was less striking. TAP2 transfection increased H-2Db expression on 35% of the cells with only a slight increase in mean F.U./cell (Fig. 6, D2 and D3), but did not induce the expression of H-2Kb (Fig. 6 D4). Transfection of the cell line with a mixture of TAP1 and TAP2 did not induce higher levels of class I expression than transfection with TAP2 alone (data not shown). Fig. 6 E shows the FACS® analysis of an individual clone transfected with TAP2. In this clone, PD1 expression was completely reconstituted (Fig. 6 E1), and H-2Db molecules were expressed in 85% of the cells (Fig. 6, E2 and E3). However, the mean F.U. of H-2Db molecules/cell did not reach the level in M1 (27 F.U./cell in the TAP2-transfected clone versus 92 F.U./cell in M1). The expression of H-2Kb molecules was not reconstituted (Fig. 6 E4).
These data indicate that reduced TAP2 expression in Ad12- transformed cells partially accounts for the reduced levels of class I expression. However, the failure of TAP2 gene expression to enhance H-2K b expression and only partially restore H-2D b expression, indicate that other factors are also involved.
Discussion
MHC class I molecules are a family of highly polymorphic cell surface glycoproteins that have as a primary function the property of binding and presenting foreign peptides to CTL (61)(62)(63). After activation, CTL lyse cells that express such peptides. This mechanism is known to be important in controlling pathogenic infections and is also involved in tumor resistance. The lack of cell surface MHC class I proteins in human tumors, as well as in some virus-and carcinogeninduced tumors, undoubtedly interferes with this CTL recognition process (2,13,64,65).
Tumors transformed by the highly oncogenic Ad12 have been used as model systems for demonstrating the direct correlation between tumorigenicity and lack of host immune responses due to suppression of class I expression (20-22). In some of the tumors induced by the virus, downregulation of class I expression was shown to correlate with decreased levels of class I transcription (23-26). However, characterization of a large panel of Ad12-transformed cells in our laboratory revealed that in most cell lines cell surface expression was significantly decreased, whereas the steady-state levels of class I and β2m mRNA were similar or moderately reduced compared to normal cell lines (27). It was further demonstrated that synthesis and transport of class I molecules to the cell surface was inefficient in these Ad12-transformed cell lines (28).
The transport rate of MHC class I molecules through the cell appears to be primarily controlled by their rate of egress from the ER to the cis-Golgi compartment (66,67). The noncovalent interactions of the class I heavy chains with β2m (40) and peptide (15) combine to stabilize the complex. In cells lacking β2m, newly synthesized class I heavy chains do not attain a mature structure (68,69), and are inefficiently transported to the cell surface (38,40,70-73). Similar effects were observed in cells with mutated peptide transporter genes that are deficient in ER peptides (10,15,16).
The present paper demonstrates that the highly oncogenic Ad12 affects class I antigen levels by downregulating the expression of the peptide transporter genes, thereby leading to reduced availability of relevant peptides for stable assembly and transport of class I molecules to the cell surface. For the Ad12-transformed cells, PD1 expression could be enhanced by incubation at 26°C (Fig. 2), and H-2K and, especially, H-2D expression could be enhanced by incubation with relevant peptides (Figs. 3 and 4). Addition of β2m to the culture media did not induce the expression of class I antigens, and β2m in combination with micromolar concentrations of peptides did not further enhance the expression of class I antigens. These results support the conclusion that in these cell lines, the limited amount of peptides available for binding to class I molecules contributes to the greatly reduced levels of expression of these molecules. However, it cannot be ruled out that limited amounts of endogenous β2m, along with possible differences in the affinity of PD1 and H-2 antigens for β2m, might also contribute to the reduced levels of expression of class I molecules. Thus, if PD1 has a higher affinity than H-2 for a limited supply of β2m molecules, it would attain a conformation that is transported, albeit more slowly, to the cell surface even in the absence of peptides. This could explain why PD1 expression was enhanced in Ad12-transformed cell lines at 26°C, whereas in general H-2 expression was not. This possibility is further supported by the results obtained with TAP2 transfection. It was shown (Fig. 6) that TAP2 transfection nearly fully reconstituted PD1 expression, partially reconstituted H-2D expression, and failed to reconstitute H-2K expression. Further experiments, including the transfection of β2m-expressing plasmid vectors, are required to determine whether the inability of TAP2 gene expression to fully reconstitute cell surface class I expression is related to a limited supply of intracellular β2m.
The expression of both TAP1 and TAP2 was suppressed in Ad12-transformed cell lines, suggesting that they are regulated by common transcriptional mechanisms. The fact that TAP2 mRNA levels were decreased about 10-fold more markedly than TAP1 mRNA levels suggests either the presence of multiple suppressive mechanisms or that some factor(s) has a more pronounced effect on the expression of TAP2 than on that of TAP1.
Ad5E1-transformed cell lines expressed higher levels of TAP1 and TAP2 mRNA than the normal cell line. A threefold induction of TAP1 mRNA was also seen in the Ad5-transformed cell line ME1, whereas such an effect was not observed in another Ad5-transformed cell line, ME5. The differences between the cell lines may be related to the absence of E1a mRNA in ME5 cells and suggest that Ad5E1a has a transactivating effect on the expression of both TAP1 and TAP2. This conclusion is further supported by the observation that in the Ad5E1-transformed cell line A503 there is a low level of E1a, the expression of TAP1 is activated only twofold, and the expression of TAP2 is not activated. These observations indicate that gene-specific sequence elements associated with TAP1 and TAP2 can be either up- or downregulated by viral oncogenes.
Peptide transporter and class I genes are not activated or suppressed to the same extent in adenovirus-transformed cells. The difference in TAP1 and TAP2 mRNA levels between Ad5E1- and Ad12-transformed cells ranges between 50- and 100-fold for TAP1 and more than 500-fold for TAP2, whereas β2m and H-2 mRNA levels vary by a maximum of 15-fold, and PD1 mRNA levels vary by a maximum of 5-fold. Both class I and peptide transporter genes were induced by α/β and γ interferons, but whereas the levels of class I mRNA were only induced 2-4-fold, TAP1 and TAP2 mRNA levels were induced over 100-fold (Winograd et al., manuscript in preparation). These results suggest that different transcriptional mechanisms are likely to be involved in the regulation of class I and peptide transporter genes in this system, and emphasize the conclusion that the suppression of peptide transporter gene expression is the critical factor that limits cell surface expression of PD1 and H-2Db in Ad12-transformed cells.
The fact that class I-specific peptides did not enhance class I molecule expression in any of the other cell lines, including the Ad5-transformed cell line ME1, which expresses lower levels of class I antigens than ME5, is consistent with normal levels of peptide transporter gene expression in these cells. The relatively low expression of class I antigens in the ME1 cell line is most likely related to the low level of class I mRNA in these cells.
Transfection of TAP1, alone or in combination with TAP2, did not enhance the expression of class I molecules in an Ad12-transformed cell line. These data clearly indicate that levels of TAP2, and not TAP1, limit the transport of PD1 and H-2Db molecules; however, the inability of TAP2 expression to enhance H-2Kb expression, and to only partially reconstitute H-2Db expression, indicates that a limited peptide supply is not the only explanation for reduced class I expression in these cell lines.
The effects of H-2Db- and H-2Kb-specific peptides were quantitatively different. H-2Db molecules were induced to higher levels, and by lower concentrations of peptides, as compared with H-2Kb molecules. There are several possible explanations for this effect. First, the differences between the two class I antigens might indicate that the conformation of H-2Kb heavy chains is more dependent on β2m and/or peptide than that of H-2Db heavy chains, leading to less efficient transport and less accumulation on the cell surface. This possibility is supported by observations that H-2Db heavy chains do not require β2m for cell surface expression (38). Another possibility is that the relatively lower levels of H-2Kb mRNA lead to a decrease in the number of H-2Kb molecules available for peptide binding. Since the transfection of the cells by TAP2 leads to the reconstitution of H-2Db and PD1 expression but does not affect the expression of H-2Kb molecules, the latter possibility seems more likely.
Restifo et al. (19) recently identified human cancers with defective antigen processing capacities that do not express TAP1 and TAP2 mRNA. The authors did not determine whether this deficiency is the result of the transformation event, since the extent of the processing capacity and peptide transporter expression in the normal counterparts of these cell lines was not established. Loss of TAP1 and HLA expression was also reported in cervical carcinomas (74). Our results suggest that the suppression of peptide transporter genes may be caused by the transformation event. Moreover, only certain oncogenic viruses seem to have the capacity to cause such suppression. Transformed cells that do not express peptide transporter genes and, as a result, do not transport class I molecules, may undergo selection in vivo and develop into nonimmunogenic tumors with oncogenic potential. This system provides a model for evaluating the roles of various factors in the class I biosynthetic pathway in normal and malignant cells.
"year": 1994,
"sha1": "0a8452c01cf5a1f1064dc48ac013efc3a5355da8",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/180/2/477.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a8452c01cf5a1f1064dc48ac013efc3a5355da8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Primary pulmonary meningioma: a case report
Abstract An asymptomatic 68-year-old woman, with a history of breast cancer 19 years ago, was unexpectedly found to have primary pulmonary meningioma during medical evaluation. This discovery is exceedingly rare, with only about 70 cases reported worldwide. Following uncomplicated surgical removal of the mass, the patient was discharged in good health on the third day after the procedure. Notably, initial analysis of a frozen tissue sample indicated hamartoma, but subsequent immune-histochemical pathological examination confirmed the presence of meningioma. Given the uncommon nature of this tumor, it is essential to report such cases to raise awareness about pulmonary meningioma as a potential cause of solitary lung nodules. This awareness can help prevent unnecessary chemotherapy or surgical interventions.
Introduction
Meningioma, the most common primary central nervous system (CNS) tumor, accounts for ~37.6% of all cases [1,2]. Characteristically, it arises from the meningeal layers of the brain or the spinal cord [3,4]. However, primary pulmonary meningioma (PPM) is incredibly rare, with only ~70 cases reported worldwide [5].
Herein, we report such a case of PPM.
Case report
A 68-year-old woman was referred to our clinic for surgical evaluation after abnormal findings on a computed tomography (CT) scan (Fig. 1A). She had a history of breast cancer, for which she underwent surgical resection and local lymphadenectomy in 2005. Although she remained asymptomatic, a CT scan performed nearly two decades later, in 2024, revealed a well-defined 1.5-cm lesion in segment VI of the left lower lobe. A subsequent PET-CT scan (Fig. 1B) showed minimal metabolic activity in this area and no further abnormalities. Initially, the lesion was suspected to be a hamartoma, with lung carcinoma considered as a potential alternative diagnosis.
During the interdisciplinary meeting, various options were discussed, including direct diagnostics, therapeutic resection, or simply monitoring the dynamics of the lesion through radiological means. Among these options, the patient decided to undergo surgical resection to obtain clarity. The diagnostic operation was scheduled as a robot-assisted thoracoscopy in March 2024.
During surgery, the lesion was neither directly nor indirectly visible; therefore, a direct oncological resection of segment VI along with systematic lymphadenectomy was performed. The entire surgical procedure lasted 53 minutes, with minimal intraoperative blood loss totaling only 30 mL. Recovery after surgery proceeded without any issues, and the patient was released from the hospital on the third day postoperation, without complications and in a generally favorable state of health.
The excised tumor measured 1.5 cm in its largest dimension. Final pathology confirmed negative margins. Notably, while the initial intraoperative diagnosis based on frozen section analysis indicated a hamartoma, the subsequent comprehensive pathological evaluation revealed a transitional type of pulmonary meningioma with numerous psammoma corpuscles on microscopic examination. Further analysis showed that the tumor tested positive for epithelial membrane antigen (EMA) and progesterone (PR), with a low proliferation rate (<5%), confirming the diagnosis of a pulmonary meningioma (Fig. 2). There were no signs of abnormal cell features or increased cell proliferation.
Discussion and conclusions
PPM is an exceedingly rare finding, characterized by the presence of meningioma in the lungs without any concurrent meningioma in the CNS. Globally, only about 70 cases have been reported over the past four decades [6]. Most PPMs are benign, typically ranging from 0.4 to 6.5 cm in diameter, although a handful of malignant cases have been documented [6], with sizes ranging from 1.5 to 15 cm [7].
As in our reported case, PPMs often manifest asymptomatically, typically appearing as incidental solitary lesions on imaging studies [7,8]. However, some cases may present as ground-glass density nodules [7] or multiple solid nodules, with or without nonspecific symptoms [9].
Accurate diagnosis is crucial due to reported high rates of misdiagnosis, which can lead to unnecessary extensive pulmonary resections and chemotherapy [8].
Since pulmonary metastases from benign meningiomas are exceedingly rare but have been documented [10], confirming a diagnosis of PPM involves ruling out cerebral meningiomas along with obtaining pathological confirmation. In our patient, a primary cerebral meningioma was excluded using postoperative brain MRI. Immunohistological staining is often utilized to verify the diagnosis and distinguish PPMs histologically from other lung tumors. Characteristic markers include positivity for somatostatin receptor 2 (SSTR2A), EMA, and PR [11], as in our case, which was positive for both EMA and PR. Additional negativity for signal transducer and activator of transcription 6 (STAT6) and SOX may be seen supportively [11]. Benign PPMs display histology and immunophenotypes consistent with their primary intracranial counterpart [12]. However, there are no histological features specific to malignant PPMs that would facilitate differentiation between the two tumor types. Therefore, it is advisable to rely on the diagnosis and grading of CNS meningiomas in cases of malignant PPM [4].
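The immunoprofile described above lends itself to a simple consistency check. The sketch below encodes only the markers named in this report (SSTR2A, EMA, and PR expected positive; STAT6 and SOX supportively negative); it is illustrative, not a validated diagnostic rule.

```python
# Expected immunohistochemical profile for PPM, as described in the text (illustrative).
EXPECTED = {"SSTR2A": True, "EMA": True, "PR": True, "STAT6": False, "SOX": False}

def consistent_with_ppm(stains: dict) -> bool:
    """True if every available stain result matches the expected meningioma profile.

    Markers not tested are simply skipped; this check does not replace
    exclusion of a cerebral primary by brain imaging.
    """
    return all(stains[m] == expected
               for m, expected in EXPECTED.items() if m in stains)

# Our case: EMA and PR positive (SSTR2A, STAT6, SOX not reported).
print(consistent_with_ppm({"EMA": True, "PR": True}))   # True
```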
The exact cause of PPM remains unclear, but it is proposed to originate from the proliferation of ectopic embryonic nests of arachnoid cells, minute pulmonary meningothelial-like nodules [13,14], or pluripotent subpleural mesenchyme [15].
Surgical resection is the primary treatment for PPM, with wedge [7] or segmental resection being common approaches. Intraoperative pathological examination is essential for determining the required scope of surgery. The prognosis for PPM is generally excellent, with no recurrence reported even 20 years after resection with tumor-free margins [12,15].
Figure 1. Pulmonary meningioma in segment VI of the left lower lobe in axial view: (A) CT scan showing a well-circumscribed, homogeneous, solid, noncalcified lesion; (B) PET-CT scan showing minimal metabolic activity. | 2024-06-06T06:17:20.102Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "d8139e42b20235af33db10c8cd738dd7d3c2b958",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jscr/article-pdf/2024/6/rjae406/58082979/rjae406.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8088bd928befff2b010dcd9eaf1bf2cbdad9ca86",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204029040 | pes2o/s2orc | v3-fos-license | Cardiorespiratory factors related to the increase in oxygen consumption during exercise in individuals with stroke
Background Understanding the cardiorespiratory factors related to the increase in oxygen consumption (V̇O2) during exercise is essential for improving cardiorespiratory fitness in individuals with stroke. However, cardiorespiratory factors related to the increase in V̇O2 during exercise in these individuals have not been examined using multivariate analysis. This study aimed to identify cardiorespiratory factors related to the increase in V̇O2 during a graded exercise in terms of respiratory function, cardiac function, and the ability of skeletal muscles to extract oxygen. Methods Eighteen individuals with stroke (aged 60.1 ± 9.4 years, 67.1 ± 30.8 days poststroke) underwent a graded exercise test for the assessment of cardiorespiratory response to exercise. The increase in V̇O2 from rest to first threshold and that from rest to peak exercise were measured as the dependent variables. The increases in respiratory rate, tidal volume, minute ventilation, heart rate, stroke volume, cardiac output, and arterial-venous oxygen difference from rest to first threshold and those from rest to peak exercise were measured as the independent variables. Results From rest to first threshold, the increases in arterial-venous oxygen difference (β = 0.711) and cardiac output (β = 0.572) were significant independent variables for the increase in V̇O2 (adjusted R² = 0.877, p < 0.001). Similarly, from rest to peak exercise, the increases in arterial-venous oxygen difference (β = 0.665) and cardiac output (β = 0.636) were significant factors related to the increase in V̇O2 (adjusted R² = 0.923, p < 0.001). Conclusion Our results suggest that the ability of skeletal muscle to extract oxygen is a major cardiorespiratory factor related to the increase in V̇O2 during exercise testing in individuals with stroke. For improved cardiorespiratory fitness in individuals with stroke, the amount of functional muscle mass during exercise may need to be increased.
Introduction
Individuals with stroke have reduced cardiorespiratory fitness compared with age- and sex-matched healthy individuals [1,2]. Cardiorespiratory fitness reduction in individuals with stroke is potentially related to walking disability [3,4], poor cognitive performance [5], and limitations in activities of daily living [6][7][8]. Low levels of cardiorespiratory fitness following stroke may lead to avoidance of physical activity, which causes further deconditioning [9,10]. Therefore, understanding the cardiorespiratory factors related to cardiorespiratory fitness in individuals with stroke is essential for the development of appropriate therapies to improve physical activity levels and prevent further deconditioning.
From the physiological point of view, three phases and two thresholds can be defined with increasing exercise intensity [11]. Phase I spans from rest to the first threshold. Phase II lies between the first and second thresholds: above the first threshold, the lactate production rate exceeds the metabolizing capacity of the muscle cell. Phase III lies above the second threshold: with a further increase in workload, the muscular lactate production rate exceeds the systemic lactate elimination rate.
Oxygen consumption (V̇O2) at first threshold and at peak exercise measured during a graded exercise test are used to assess cardiorespiratory fitness in individuals with stroke [1,[12][13][14]. The cardiorespiratory factors that potentially limit V̇O2 at peak exercise are respiratory and cardiac functions to supply oxygen, and the ability of skeletal muscles to extract oxygen [13,15,16]. In healthy adults, V̇O2 at peak exercise is limited by oxygen utilization among untrained individuals, while among trained individuals it is limited by oxygen supply [15][16][17]. Since the amount of active muscle mass determines whether the increase in V̇O2 during exercise is centrally or peripherally limited, the increase in V̇O2 may be limited by oxygen utilization even in trained individuals during exercise recruiting smaller muscle mass [18]. The decrease in functional muscle mass due to paralysis may be one of the causes limiting cardiorespiratory responses to exercise [18]. Previous studies [13,[19][20][21] reported that tidal volume, heart rate, and arterial-venous oxygen difference at peak exercise are significantly lower in individuals with stroke than in age- and sex-matched healthy adults, which may lead to the deterioration of cardiorespiratory fitness after stroke. Tomczak et al. [21] reported a significant difference between individuals with stroke and healthy adults in V̇O2 at peak exercise, but not in V̇O2 at rest. Thus, identifying the cardiorespiratory factors related to the increase in V̇O2 during exercise contributes to understanding the mechanisms of decrease in cardiorespiratory fitness in individuals with stroke. However, in individuals with stroke, the cardiorespiratory factors related to the increase in V̇O2 during exercise have not been examined using multivariate analysis.
Cross-sectional and longitudinal studies found a relationship between V̇O2 and arterial-venous oxygen difference at peak exercise in individuals with stroke [20,22]. Therefore, we hypothesized that arterial-venous oxygen difference is a major cardiorespiratory factor related to the increase in V̇O2 during exercise in these individuals. In this study, we aimed to explore the cardiorespiratory factors related to the increase in V̇O2 from rest to first threshold and from rest to peak exercise among individuals with stroke. The secondary aim was to determine the cardiorespiratory factors related to the increase in V̇O2 from first threshold to peak exercise, from first threshold to second threshold, and from second threshold to peak exercise in these individuals.
Study design
This study used a cross-sectional observational design. The study protocol was approved by the appropriate ethics committees of the Tokyo Bay Rehabilitation Hospital (approval number: 172-2) and Shinshu University (approval number: 3813). All participants provided written informed consent prior to study enrollment. The study was conducted in accordance with the Declaration of Helsinki of 1964, as revised in 2013.
Participants
Participants were recruited from a convalescent rehabilitation hospital between November 2017 and November 2018. The inclusion criteria for the study were as follows: (1) age 40-80 years, (2) being within 180 days after first-ever stroke, (3) ability to maintain a target cadence of 50 rpm during exercise, and (4) a Mini-Mental State Examination score [23] of 24 or more. The exclusion criteria were as follows: (1) limited range of motion and/or pain that could affect the exercise test, (2) unstable medical conditions such as unstable angina, uncontrolled hypertension, and tachycardia, (3) use of beta-blocker, and (4) any comorbid neurological disorder.
Exercise testing
Participants were instructed to refrain from eating for 3 hours and to avoid caffeine and vigorous physical activity for at least 6 and 24 hours, respectively, before the exercise test [24]. All participants performed a symptom-limited graded exercise test on a recumbent cycle ergometer (Strength Ergo 240; Mitsubishi Electric Engineering Co., Ltd., Tokyo, Japan) that can be precisely load-controlled (coefficient of variation, 5%) over a wide range of pedaling resistance (0-400 W). The distance from the seat edge to the pedal axis was adjusted so that the participant's knee flexion angle was 20° when extended maximally. The backrest was set at 20° reclined from the vertical position. Additional strapping was attached to secure the paretic foot to the pedal as needed. Following a 3-min rest period (in sitting position) on the recumbent cycle ergometer to establish a steady state, a warm-up was performed at 0 W for 3 min, followed by 10 W increments every minute [24,25]. Participants were instructed to maintain a target cadence of 50 rpm throughout the exercise [24,25]. Blood pressure was monitored every minute from the non-paretic arm using an automated system (Tango; SunTech Medical Inc., NC, USA). The test was terminated if the participants showed signs of angina, dyspnea, inability to maintain a cycling cadence of more than 40 rpm, hypertension (more than 250 mmHg systolic or more than 115 mmHg diastolic), or a drop in systolic blood pressure of more than 10 mmHg despite an increase in workload [25,26]. Participants provided their ratings of perceived exertion (6 = no exertion at all, 20 = maximal exertion) [27] for dyspnea and leg effort at the end of the test. Work rate at peak exercise was defined as the peak wattage on test termination [22]. To identify whether maximal effort was reached during the exercise test, at least 1 of the following criteria had to be met: (1) V̇O2 increased less than 150 mL·min−1 for more than 1 min despite increased work rate; (2) the respiratory exchange ratio achieved greater than 1.10; or (3) heart rate achieved 85% of the age-predicted maximal heart rate (210 minus age) [28][29][30]. Participants rested for 5 min prior to obtaining the measurements. Cardiorespiratory variables were measured at rest for 3 min and continuously during the exercise test. V̇O2, respiratory rate, tidal volume, and minute ventilation were measured on a breath-by-breath basis using an expired gas analyzer (Aerosonic AT-1100; ANIMA Corp., Tokyo, Japan). Carbon dioxide output, the ventilatory equivalents of oxygen and carbon dioxide, and the end-tidal oxygen and carbon dioxide fractions were also measured using the expired gas analyzer to determine the first and second thresholds. Heart rate, stroke volume, and cardiac output were measured on a beat-by-beat basis using a noninvasive impedance cardiography device (Task Force Monitor model 3040i; CNSystems Medizintechnik GmbH, Graz, Austria). The impedance cardiography method is a valid and reliable method for measuring cardiac hemodynamics at rest and during exercise [31][32][33]. The reproducibility of two consecutive measurements of stroke volume with the device was confirmed by the correlation coefficient, r = 0.971 [33]. The mean and standard deviation of the differences between two consecutive measurements of stroke volume are 0.845 ± 2.549 mL [33]. Measurement values of cardiorespiratory variables were interpolated to 1-s intervals, time-aligned, and averaged into 5-s bins.
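The resampling step above lends itself to a short illustration. The Python sketch below shows one plausible way to bring irregularly timed breath-by-breath samples onto a common 1-s grid and average them into 5-s bins; the signal values and function names are illustrative assumptions, not the study's actual pipeline.

    import numpy as np

    def to_one_second_grid(t, y, duration_s):
        """Linearly interpolate irregular samples (t in seconds) onto a 1-s grid."""
        grid = np.arange(0, duration_s + 1)
        return grid, np.interp(grid, t, y)

    def five_second_bins(y_1s):
        """Average consecutive 5-s windows (any trailing partial window is dropped)."""
        n = (len(y_1s) // 5) * 5
        return y_1s[:n].reshape(-1, 5).mean(axis=1)

    # Example: hypothetical breath-by-breath VO2 samples (mL/min) at irregular times
    t_breath = np.array([0.0, 3.2, 6.9, 10.1, 14.8, 19.5])
    vo2 = np.array([280.0, 300.0, 330.0, 360.0, 410.0, 470.0])
    grid, vo2_1s = to_one_second_grid(t_breath, vo2, duration_s=19)
    vo2_5s = five_second_bins(vo2_1s)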
This approach was used to assess the cardiorespiratory responses to exercise in healthy adults [34] and in individuals with stroke [21]. We could derive arterial-venous oxygen difference on a second-by-second basis by converting the breath-by-breath expired gas data and the beat-by-beat cardiac hemodynamics data into second-by-second data. Arterial-venous oxygen difference was calculated as the ratio between V̇O2 and cardiac output according to the Fick equation: V̇O2 = cardiac output × arterial-venous oxygen difference [35].
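As a minimal worked example of the Fick rearrangement, assuming V̇O2 in mL/min and cardiac output in L/min (the numbers below are hypothetical, not study data):

    def avo2_difference_ml_per_dl(vo2_ml_min: float, cardiac_output_l_min: float) -> float:
        # VO2 / cardiac output gives mL O2 per L of blood; divide by 10 to
        # express it in the commonly reported mL/dL.
        return vo2_ml_min / cardiac_output_l_min / 10.0

    print(avo2_difference_ml_per_dl(1500.0, 10.0))  # -> 15.0 mL/dL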
Cardiorespiratory variables at rest were defined as the average value obtained during 1 min before exercise onset, and those at peak exercise were defined as the average value obtained during the last 30 s of the exercise test [21,24]. The first threshold was determined using a combination of the following criteria: (1) the time point where the ventilatory equivalent of oxygen reached its minimum or started to increase, without an increase in the ventilatory equivalent of carbon dioxide; (2) the time point at which the end-tidal oxygen fraction reached a minimum or started to increase, without a decline in the end-tidal carbon dioxide fraction; and (3) the time point of deflection of carbon dioxide output versus V̇O2 [11]. The first two methods were prioritized in case the three methods presented different results [36,37]. The second threshold is the intensity at which the muscular lactate production rate begins to exceed the systemic lactate elimination rate [11]. The second threshold was determined by: (1) the minimal value or nonlinear increase in the ventilatory equivalent of carbon dioxide; (2) the time point at which the end-tidal carbon dioxide fraction started to decline; and (3) the time point of deflection of minute ventilation versus V̇O2 [11]. The first two criteria were prioritized in case the three methods presented different results [36]. The first and second thresholds were determined as an average based on the values provided by two independent raters (NI and YS), when the difference in V̇O2 values of the corresponding points as determined by the two raters was less than 100 mL·min−1 [37,38]. In case of any discrepancy, a third experienced rater (KO) judged the time point, and either the first or second threshold was taken as the average of the two closest values [36,37].
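Criterion (3) for each threshold is a deflection point in one gas-exchange variable plotted against another (the V-slope approach). A rough sketch of how such a breakpoint could be located automatically is given below: it fits two line segments and keeps the split with the smallest residual error. This is a simplification for illustration, not the raters' actual procedure.

    import numpy as np

    def vslope_breakpoint(vo2, vco2, min_pts=5):
        # vo2 and vco2 are numpy arrays ordered by increasing vo2
        best_bp, best_sse = None, np.inf
        for i in range(min_pts, len(vo2) - min_pts):
            sse = 0.0
            for seg in (slice(0, i), slice(i, None)):
                x, y = vo2[seg], vco2[seg]
                coef = np.polyfit(x, y, 1)  # least-squares line on this segment
                sse += np.sum((np.polyval(coef, x) - y) ** 2)
            if sse < best_sse:
                best_bp, best_sse = vo2[i], sse
        return best_bp  # VO2 at the candidate deflection (first threshold)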
Functional impairment assessment
The lower extremity motor subscale of the Fugl-Meyer Assessment [39] was used to assess functional impairment in the paretic lower extremity. The possible score ranged from 0 to 34 points.
Statistical analysis
The G*Power computer program version 3.1.9.2 (Heinrich Heine University, Düsseldorf, Germany) [40] was used to calculate the sample size required for multiple regression analysis. If up to seven variables (respiratory rate, tidal volume, minute ventilation, heart rate, stroke volume, cardiac output, and arterial-venous oxygen difference) were modeled at an effect size of 0.49 (very large), an α level of 0.05, and a power of 0.80, a minimum of 13 participants would be required [40,41].
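For readers without G*Power, the underlying computation can be reproduced approximately as follows, assuming G*Power's convention for the F test in multiple regression (Cohen's f² with noncentrality λ = f²·n); the exact test family and options used in the original analysis may differ, so this only illustrates the calculation.

    from scipy import stats

    def regression_power(f2, n, n_predictors, alpha=0.05):
        # Power of the overall F test for R^2 with n_predictors regressors
        df1 = n_predictors
        df2 = n - n_predictors - 1
        if df2 <= 0:
            return 0.0
        f_crit = stats.f.ppf(1 - alpha, df1, df2)
        return 1 - stats.ncf.cdf(f_crit, df1, df2, f2 * n)

    def min_n(f2, n_predictors, target=0.80, alpha=0.05):
        n = n_predictors + 2
        while regression_power(f2, n, n_predictors, alpha) < target:
            n += 1
        return n

    # Example: power for a candidate sample size
    print(regression_power(f2=0.49, n=20, n_predictors=7))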
Normality of distribution was tested using the Shapiro-Wilk test. One-way repeated-measures analysis of variance or the Friedman test, with exercise period as a factor, was used to examine whether cardiorespiratory variables changed during exercise. Post hoc analyses were performed using the Bonferroni multiple comparison test.
The increase in V̇O2 from rest to first threshold and that from rest to peak exercise were calculated as the dependent variables. The increase in V̇O2 from first threshold to peak exercise, that from first threshold to second threshold, and that from second threshold to peak exercise were also calculated as dependent variables. Pearson's product moment correlation coefficient or Spearman's rank correlation coefficient was used to test the correlations between the increases in V̇O2 and other cardiorespiratory variables. The same correlation coefficients were also used to examine whether age, functional impairment, and other anthropometric characteristics, including height, body mass, and body mass index, were related to the increase in V̇O2 during exercise testing [25,26,37,42]. We performed these correlation analyses to identify independent variables to be entered in the stepwise multiple regression analysis. Variables that significantly correlated with the increase in V̇O2 during exercise testing were then entered in the stepwise multiple regression analysis to identify the cardiorespiratory factors related to the increase in V̇O2, while considering multicollinearity. Had age, functional impairment, and/or anthropometric characteristics been selected as independent variables in the multiple regression analysis, this would have indicated potentially confounding effects of these characteristics on the relationships between the increases in V̇O2 and other cardiorespiratory variables during exercise testing. Statistical analyses were performed using the Statistical Package for the Social Sciences software version 24.0 (International Business Machines Corp., NY, USA). P values less than 0.05 were considered statistically significant.
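The two-stage selection described above (correlation screening followed by stepwise regression) can be sketched as follows; the column names are placeholders, and the backward-elimination rule shown is one common variant, not necessarily SPSS's exact stepwise criteria.

    import pandas as pd
    from scipy import stats
    import statsmodels.api as sm

    def select_predictors(df: pd.DataFrame, outcome: str, candidates: list, alpha=0.05):
        # Stage 1: keep candidates whose simple correlation with the outcome
        # is significant (here with Pearson; the study also used Spearman).
        keep = [c for c in candidates
                if stats.pearsonr(df[c], df[outcome])[1] < alpha]
        # Stage 2: backward elimination on the survivors.
        while keep:
            model = sm.OLS(df[outcome], sm.add_constant(df[keep])).fit()
            pvals = model.pvalues.drop("const")
            if pvals.max() < alpha:
                return model
            keep.remove(pvals.idxmax())
        return None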
Results
A flow diagram of study participants is shown in Fig 1. Eighteen individuals with stroke participated in the study. Table 1 shows the characteristics of the participants.
No significant adverse events occurred during or after the exercise test. All participants stopped the exercise test due to their inability to maintain a cycling cadence of more than 40 rpm. With respect to each of the 3 criteria for reaching maximal effort, 16 participants (89%) showed an increase in V̇O2 of less than 150 mL·min−1 for more than 1 min despite increased work rate, 4 participants (22%) achieved a respiratory exchange ratio value greater than 1.10, and 9 participants (50%) reached 85% of the age-predicted maximal heart rate. All participants met at least one of the criteria for reaching maximal effort. One and nine participants met three and two criteria for reaching maximal effort, respectively. Median (interquartile range) values of the ratings of perceived exertion for dyspnea and leg effort at the end of the test were 13 (13-15) and 15 (13-15), respectively. Mean ± standard deviation of the respiratory exchange ratio and work rate at peak exercise were 0.98 ± 0.13 and 69.4 ± 30.6 W, respectively. Measurement values at rest, first threshold, second threshold, and peak exercise are shown in Table 2 and Fig 2. The first threshold was determined in all participants, while the second threshold could be determined in only 11/18 participants. Therefore, we excluded the data of second threshold from statistical analyses. We observed a main effect of exercise period on all cardiorespiratory variables (p < 0.001). All cardiorespiratory variables at first threshold were significantly higher than those at rest (p < 0.001). From first threshold to peak exercise, cardiorespiratory variables, except for stroke volume and arterial-venous oxygen difference, increased significantly (p < 0.001). From rest to first threshold, correlations between the increases in V̇O2 and other cardiorespiratory variables are shown in Table 3 and Fig 3. The increase in V̇O2 significantly correlated with the increases in tidal volume (Fig 3B), minute ventilation (Fig 3C), heart rate (Fig 3D), cardiac output (Fig 3F), and arterial-venous oxygen difference (Fig 3G). The increases in V̇O2 did not significantly correlate with age, Fugl-Meyer lower extremity motor scores, and anthropometric characteristics (Table 3). Stepwise multiple regression analysis revealed that the increases in arterial-venous oxygen difference (β = 0.711) and cardiac output (β = 0.572) were the significant independent variables for the increase in V̇O2 (adjusted R² = 0.877, p < 0.001) (Table 4). The increase in arterial-venous oxygen difference was a major cardiorespiratory factor related to the increase in V̇O2 from rest to first threshold. From rest to peak exercise, correlations between the increases in V̇O2 and other cardiorespiratory variables are shown in Table 5 and Fig 4. The increases in V̇O2 significantly correlated with the increases in tidal volume (Fig 4B), minute ventilation (Fig 4C), heart rate (Fig 4D), cardiac output (Fig 4F), and arterial-venous oxygen difference (Fig 4G). The increases in V̇O2 also significantly correlated with body mass (Table 5). Stepwise multiple regression analysis revealed that the increases in arterial-venous oxygen difference (β = 0.665) and cardiac output (β = 0.636) were significant factors related to the increase in V̇O2 (adjusted R² = 0.923, p < 0.001) (Table 6). The increase in arterial-venous oxygen difference was a major cardiorespiratory factor related to the increase in V̇O2 from rest to peak exercise.
[Table 2 notes: values are presented as mean ± SD or median (interquartile range); the data of second threshold were excluded from the statistical analyses; the p value represents a significant main effect of the exercise period; *, significant difference between first threshold and rest (p < 0.05); †, significant difference between peak exercise and rest (p < 0.05); ‡, significant difference between peak exercise and first threshold (p < 0.05); NA, not applicable. https://doi.org/10.1371/journal.pone.0217453.t002]
From first threshold to peak exercise, correlations between the increases in V̇O2 and other cardiorespiratory variables are shown in Table 7 and Fig 5. The increases in V̇O2 significantly correlated with the increases in respiratory rate (Fig 5A), tidal volume (Fig 5B), minute ventilation (Fig 5C), heart rate (Fig 5D), and arterial-venous oxygen difference (Fig 5G). The increases in V̇O2 also significantly correlated with Fugl-Meyer lower extremity motor scores and body mass (Table 7). Stepwise multiple regression analysis revealed that the increases in minute ventilation (β = 0.584) and arterial-venous oxygen difference (β = 0.389) were significant independent variables for the increase in V̇O2 (adjusted R² = 0.786, p < 0.001) (Table 8).
The cardiorespiratory factors related to the increase in V̇O2 from first threshold to second threshold and from second threshold to peak exercise were determined in the 11 participants who reached the second threshold. The increase in V̇O2 from first threshold to second threshold was not correlated with the increases in other cardiorespiratory variables (Table 9). From second threshold to peak exercise, the increases in V̇O2 significantly correlated with the increases in respiratory rate, tidal volume, minute ventilation, heart rate, and arterial-venous oxygen difference (Table 10). The increases in V̇O2 did not significantly correlate with age, Fugl-Meyer lower extremity motor scores, and anthropometric characteristics (Table 10). Stepwise multiple regression analysis revealed that the increase in minute ventilation (β = 0.850) was a significant independent variable associated with the increase in V̇O2 (adjusted R² = 0.691, p = 0.001) (Table 11).
Discussion
This is the first study to explore cardiorespiratory factors related to the increase in V̇O2 during graded exercise in individuals with stroke. This study demonstrated that the increase in arterial-venous oxygen difference was a major cardiorespiratory factor related to both the increase in V̇O2 from rest to first threshold and that from rest to peak exercise. Our results also demonstrated no significant confounding effects of age, functional impairment, and anthropometric characteristics on the relationships between increases in V̇O2 and other cardiorespiratory variables during exercise testing. These findings suggest that the impaired ability of skeletal muscles to extract oxygen is a main cardiorespiratory factor related to the decrease in cardiorespiratory fitness in individuals with stroke. The decrease in functional muscle mass due to paralysis can limit the increases in V̇O2 and other cardiorespiratory variables during exercise testing. The influences of the amount of active muscle mass on cardiorespiratory responses to exercise have been investigated by comparing cardiorespiratory outcomes during one-legged and two-legged cycling exercises in healthy people [18]. V̇O2 at first threshold and at peak exercise are lower during one-legged cycling exercises compared to those during two-legged cycling exercises [43][44][45][46][47][48][49][50]. Minute ventilation and arterial-venous oxygen difference responses are also lower during one-legged cycling exercises [45,46,51]. Furthermore, the level of catecholamines is lower during one-legged cycling exercises compared to that in two-legged cycling exercises [46,52]. As catecholamines stimulate cardiorespiratory functions, lower levels of catecholamines during one-legged cycle exercises explain the lower cardiorespiratory responses [18]. Although we did not assess the amount of functional muscle mass during exercise, the above studies suggest that the decrease in functional muscle mass due to paralysis can explain the relationships between increases in V̇O2 and other cardiorespiratory variables during exercise testing observed in this study.
The increase in arterial-venous oxygen difference was a major independent variable for the increases in V̇O2 from rest to first threshold and from rest to peak exercise. From first threshold to peak exercise, the increase in arterial-venous oxygen difference was also an independent variable for the increase in V̇O2, while cardiac output was not. These results support the findings of Jakovljevic et al. [20] and Moore et al. [22], who reported that oxygen extraction rather than oxygen supply is related to cardiorespiratory fitness in individuals with stroke. Skeletal muscle changes after stroke, such as muscle atrophy and a shift of muscle fiber type (from type I slow-twitch muscle fibers to type II fast-twitch muscle fibers), particularly in the paretic lower extremity, are observed in individuals with stroke [53]. Impaired vasodilatory function and reduction in blood flow in the paretic lower extremity have also been reported [54,55]. In addition to the decrease in functional muscle mass during exercise, these changes in skeletal muscles after stroke can reduce their ability to extract oxygen. This may further increase the dependence on anaerobic glycolysis for energy output, thus increasing the output of lactate [56,57]. However, from our respiratory exchange ratio data, we expected blood lactate concentration to remain low in this study. These findings support the relationships between the increases in V̇O2 and arterial-venous oxygen difference during exercise testing observed in this study.
Furthermore, this study demonstrated that the increase in cardiac output was related to the increases in V̇O2 from rest to first threshold and from rest to peak exercise irrespective of the increase in arterial-venous oxygen difference. Tomczak et al. [21] reported that the impaired increase in V̇O2 during exercise testing in individuals with stroke is attributed to the impaired increase in cardiac output, which in turn could be attributed to the impaired increase in heart rate. From first threshold to peak exercise, we observed significant increases in heart rate and cardiac output, but not in stroke volume. These results suggest that the increase in heart rate contributed to the increase in cardiac output in this phase. In addition, our correlational analysis indicated that the increase in V̇O2 during exercise testing was related to the increase in heart rate, but not to the increase in stroke volume. These results of our study support the findings of Tomczak et al. [21]. As mentioned above, the increases in heart rate and cardiac output during exercise testing may be limited by lower levels of catecholamines due to the decrease in functional muscle mass after stroke [18]. In addition, the decreased functional muscle mass in individuals with stroke can reduce the increases in heart rate and cardiac output, just matching the needs of the lower muscle mass [18]. These findings can explain the relationship between the increases in V̇O2 and cardiac output during exercise testing observed in this study.
Sisante et al. [19] and Tomczak et al. [21] reported that tidal volume, minute ventilation, and V̇O2 at peak exercise were significantly lower in the stroke group than in controls, while there was no significant difference in respiratory rate at peak exercise between the groups. Therefore, the decrease in tidal volume is believed to limit minute ventilation and V̇O2 at peak exercise in individuals with stroke [19,21]. The paralysis of expiratory muscles on the affected side, decreased motion of the diaphragm, and reduced chest wall excursion may limit the increases in tidal volume during exercise [13,58]. These findings support the relationship between the increases in V̇O2 and the increases in tidal volume and minute ventilation during exercise testing. Both from rest to first threshold and from rest to peak exercise, there was no significant correlation between the increases in V̇O2 and respiratory rate. Therefore, the tidal volume response may be related to the V̇O2 response irrespective of the respiratory rate response during exercise testing. The increase in minute ventilation was a major independent variable for the increase in V̇O2 from first threshold to peak exercise and from second threshold to peak exercise, but not from rest to peak exercise. No cardiorespiratory factors were related to the increase in V̇O2 from first threshold to second threshold, which may be attributed to the low increment of V̇O2 in this phase. The increment of V̇O2 from rest to first threshold accounted for approximately 75% of the increment of V̇O2 from rest to peak exercise. This may explain why the increases in arterial-venous oxygen difference and cardiac output, rather than the increase in minute ventilation, were selected as the independent variables for the increase in V̇O2 from rest to peak exercise. These results suggest that the ability of skeletal muscles to extract oxygen and cardiac function, rather than respiratory function, are related to cardiorespiratory fitness in individuals with stroke.
Considering the influences of the amount of active muscle mass on cardiorespiratory responses to exercise [18], it is important to increase the amount of functional muscle mass during exercise to enhance the cardiorespiratory responses in individuals with stroke. Therefore, exercises that recruit more muscle mass, such as combined arm and leg exercises, could be beneficial for improving cardiorespiratory fitness in these individuals.
Studies reported that functional impairment is related to cardiorespiratory responses during exercise testing in individuals with stroke [25,37,59]. However, we found no significant confounding effects of functional impairment on the relationships between the increases in V̇O2 and other cardiorespiratory variables during exercise testing, which may be attributed to the fact that our study participants presented with relatively mild functional impairment. Although all participants stopped the exercise test due to their inability to maintain the cycling cadence, seven participants did not reach the second threshold during exercise testing. In addition, 14 participants did not achieve a respiratory exchange ratio value greater than 1.10. The relatively low ratings of perceived exertion at the end of the test may be explained by the low number of participants who reached the second threshold and/or a respiratory exchange ratio value greater than 1.10. Although a respiratory exchange ratio greater than 1.10 is generally considered an indication of excellent subject effort during exercise testing [35], several studies have also reported that individuals with hemiparetic stroke find it difficult to reach a respiratory exchange ratio greater than 1.10 during exercise testing on a recumbent cycle ergometer [24,37,60]. In individuals with stroke, impairments in strength, coordination, muscle endurance, and sensorimotor control contribute to difficulties in pedaling at a high work rate [60]. The V̇O2 reserve and heart rate reserve percentages were recommended for prescribing aerobic exercise intensity for individuals with stroke [61]. Therefore, it is probably difficult for individuals with stroke to achieve sufficient intensity of exercise prescription during exercise testing using a recumbent cycle ergometer. An exercise test using a combined arm and leg modality, such as the total-body recumbent stepper, may be useful for guiding exercise prescription in these individuals [42]. This study had several limitations. First, all participants were in the subacute stage of recovery from stroke. Therefore, generalization of the findings to individuals with chronic stroke should be made with caution. Second, the sample size was relatively small, although it was determined based on power analysis. Therefore, we could not perform subgroup analyses to determine whether the cardiorespiratory variables related to the increase in V̇O2 during exercise testing would differ between participants who reached and those who failed to reach the second threshold, or between participants who reached and those who failed to reach a respiratory exchange ratio value greater than 1.10. Third, we used a recumbent cycle ergometer. A treadmill [6], a total-body recumbent stepper [42], a robotics-assisted tilt table [36], and an arm crank ergometer [37] are also used to assess cardiorespiratory fitness in individuals with stroke. Differences in the amount of active muscle mass among exercise devices may affect the relationships observed between the increases in V̇O2 and other cardiorespiratory variables during exercise testing. Further studies are warranted to examine whether the major cardiorespiratory factors related to the increase in V̇O2 during exercise differ among exercise devices.
Finally, as this study used a cross-sectional observational design, the factors related to the temporal changes in V̇O2 at first threshold and at peak exercise could not be examined. Further longitudinal studies are needed to examine whether impairments in arterial-venous oxygen difference and cardiac output affect the temporal changes in V̇O2, for the development of appropriate therapies to improve cardiorespiratory fitness in individuals with stroke.
Conclusions
Our results suggest that the ability of skeletal muscles to extract oxygen is a major cardiorespiratory factor related to the increase in V̇O2 during exercise testing in individuals with stroke. Our findings could potentially contribute to the development of appropriate therapies to improve cardiorespiratory fitness in individuals with stroke. | 2019-10-10T09:22:44.972Z | 2019-10-09T00:00:00.000 | {
"year": 2019,
"sha1": "b51bd58a2fec966d0d9306a9a91b0d5026154e65",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0217453&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "829da3b8306612513a0cee67531a9ba16de4f677",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249596732 | pes2o/s2orc | v3-fos-license | One in 10 Virally Suppressed Persons With HIV in The Netherlands Experiences ≥10% Weight Gain After Switching to Tenofovir Alafenamide and/or Integrase Strand Transfer Inhibitor
Abstract Background We determined the frequency of and factors associated with ≥10% weight gain and its metabolic effects in virally suppressed people with human immunodeficiency virus (PWH) from the Dutch national AIDS Therapy Evaluation in the Netherlands (ATHENA) cohort switching to tenofovir alafenamide (TAF) and/or integrase strand transfer inhibitor (INSTI). Methods We identified antiretroviral therapy–experienced but TAF/INSTI-naive PWH who switched to a TAF and/or INSTI-containing regimen while virally suppressed for >12 months. Individuals with comorbidities/comedication associated with weight change were excluded. Analyses were stratified by switch to only TAF, only INSTI, or TAF + INSTI. Factors associated with ≥10% weight gain were assessed using parametric survival models. Changes in glucose, lipids, and blood pressure postswitch were modeled using mixed-effects linear regression and compared between those with and without ≥10% weight gain. Results Among 1544 PWH who switched to only TAF, 2629 to only INSTI, and 918 to combined TAF + INSTI, ≥10% weight gain was observed in 8.8%, 10.6%, and 14.4%, respectively. Across these groups, weight gain was more frequent in Western and sub-Saharan African females than Western males. Weight gain was also more frequent in those with weight loss ≥1 kg/year before switching, age <40 years, and those discontinuing efavirenz. In those with ≥10% weight gain, 53.7% remained in the same body mass index (BMI) category, while a BMI change from normal/overweight at baseline to obesity at 24 months postswitch was seen in 13.9%, 11.7%, and 15.2% of those switching to only TAF, only INSTI, and TAF + INSTI, respectively. PWH with ≥10% weight gain showed significantly larger, but small increases in glucose, blood pressure, and lipid levels. Lipid increases were limited to those whose switch included TAF, whereas lipids decreased after switching to only INSTI. Conclusions Weight gain of ≥10% after switch to TAF and/or INSTI was common in virally suppressed PWH, particularly in females and those starting both drugs simultaneously. Consequent changes in metabolic parameters were, however, modest.
INTRODUCTION
Weight gain has been frequently reported in people with human immunodeficiency virus (PWH) after commencing antiretroviral therapy (ART) that includes tenofovir alafenamide (TAF) and/or integrase strand transfer inhibitors (INSTIs). The ADVANCE study, conducted in ART-naive individuals, demonstrated the most pronounced increase in weight (median of +7.1 kg at 96 weeks) when treatment included both TAF and dolutegravir [1]. One limitation of assessing the effect of TAF or INSTIs on weight in those initiating ART is that weight gain, in part, may reflect "return to health" from the initial suppression of viral replication. This may be absent in PWH who are already virally suppressed when switching to TAF and/or INSTIs, yet the effect of discontinued antiretrovirals (ARVs) on weight needs to be considered.
Black race and female sex have been associated with more absolute weight gain after starting TAF and/or INSTIs, with effects being most pronounced in Black females and when TAF and an INSTI were combined [2][3][4][5][6][7]. Other factors associated with greater weight gain are baseline weight (both lower weight [7][8][9][10] and obesity [4,6]), CD4 count <200 cells/µL [3,7,8], and age (both younger [10,11] and older [2,4] age). In addition, 1 study including both ART-naive and ART-experienced PWH starting dolutegravir found that ART-naive individuals had a greater risk of ≥10% weight gain [8]. Last, certain polymorphisms in the CYP2B6 gene, leading to differences in efavirenz (EFV) plasma levels [12], can affect weight in PWH starting EFV-based ART. Weight loss was observed in CYP2B6 slow metabolizers on EFV, and weight gain similar to that on dolutegravir in those with an extensive metabolizer phenotype [13]. This association was also found when switching from EFV to an INSTI, with slow metabolizers gaining significantly greater amounts of weight [14]. This suggests that EFV levels may mitigate weight changes, whereby discontinuing efavirenz could lead to a gain in weight among slow metabolizers.
There may also be a subset of individuals prone to gaining more extreme amounts of weight when exposed to TAF and/or an INSTI, and who may possibly suffer greater metabolic consequences. Studies examining weight gains of more than 7% or 10%, for example, have not all been restricted to virally suppressed individuals [7,8,15] or to those exposed to only TAF, only an INSTI, or both simultaneously [10,15,16].
The aim of our study therefore was to determine, in a nationally representative population of ART-experienced and virally suppressed PWH in the Netherlands, the proportion of individuals who gained ≥10% in weight after switching to only TAF, only an INSTI, or both combined. We also assessed the factors associated with ≥10% weight gain and examined the impact of weight gain on glucose, lipids, and blood pressure.
Study Population
Human immunodeficiency virus (HIV) care in the Netherlands is provided by 24 treatment centers. The HIV Monitoring Foundation (https://www.hiv-monitoring.nl/en) has been prospectively collecting data on demographics, ART, and other clinically relevant characteristics from PWH in the Netherlands from 1998 onward, known as the AIDS Therapy Evaluation in the Netherlands (ATHENA) cohort [17]. Data collection is continuous and all data until 27 February 2020 (first COVID-19 diagnosis in the Netherlands) were used.
We identified ART-experienced adults (≥18 years) who were virally suppressed for ≥12 months (isolated HIV type 1 [HIV-1] RNA <200 copies/mL allowed) and switched to TAF- and/or INSTI-containing ART. Participants were TAF- and INSTI-naive prior to switch, while allowing <90 days of previous exposure. Individuals with <90 days' exposure to TAF and/or INSTI and with at least 1 available weight measurement ≤24 months prior to switching and 1 weight measurement ≥3 months after switching, but prior to censoring, were included. Individuals who at the time of switch used medication (corticosteroids, antidepressants, or antipsychotics) or developed conditions associated with weight gain (hypothyroidism, Cushing's syndrome, congestive heart failure, renal failure [including those on hemo- or peritoneal dialysis], or liver cirrhosis) were excluded [18]. We also excluded individuals in whom any of these conditions were diagnosed after switching ART. These conditions could predispose individuals to weight gains prior to switch that are unlikely to be attributed to switching ART. Females who were pregnant at the time of switch were also excluded.
Data Collection
Baseline demographic characteristics included sex at birth, age, region of origin, and years since HIV diagnosis and since first initiation of ART. Time-updated data from routine care included height, weight, blood pressure, smoking behavior, alcohol consumption, CD4/CD8 cell counts, HIV-1 RNA, glucose and lipid levels (total cholesterol, high-density lipoprotein [HDL] and low-density lipoprotein [LDL] cholesterol, and triglycerides), any changes in ART, comedication, and incident relevant comorbidities.
Region of origin was categorized as Europe, North America, Australia, and New Zealand (recategorized as "Western"); sub-Saharan Africa; Latin America and the Caribbean; East and Southeast Asia; and other. Weight gain prior to switch was calculated as the mean change in weight prior to switch in kilograms per year. Smoking and alcohol consumption were categorized as never/former/current and changes as no change/stopped/started. Hypertension, diabetes mellitus, and metabolic syndrome were defined according to criteria established by the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study [19]; diagnosis of lipodystrophy was reported by a healthcare provider.
Patient Consent
At its inception, the ATHENA cohort was approved by the institutional review boards of all participating HIV treatment centers. After being informed by their treating physician of the purpose of data and sample collection, individuals can opt out. For our analysis, only existing data have been used and therefore no additional review or consent was required [17].
Statistical Analysis
Analyses were stratified according to whether individuals (1) switched to a TAF-containing regimen; (2) switched to an INSTI-containing regimen; or (3) switched simultaneously to both a TAF- and INSTI-containing regimen (TAF + INSTI). Baseline was defined as the date of switching. Two follow-up periods were defined: (1) prebaseline (<36 months prior to baseline while on suppressive ART); and (2) postbaseline (from baseline until the date of pregnancy, virological failure, use of medication associated with weight gain, discontinuing ART for >3 months [including discontinuation of either TAF or INSTI for individuals who switched to TAF and INSTI], switching to INSTI-based ART [for those in the TAF-containing group], switching to TAF-containing ART [for those in the INSTI-containing group], last available weight measurement, death, or 24 months, whichever occurred first). Switches within the INSTI class were allowed. Time was analyzed in discrete intervals of 6 months before and after baseline, with each visit assigned to the closest 6-month date from switch. Missing data were imputed with the most recent values (ie, last observation carried forward).
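The visit-handling conventions described above can be illustrated with a short sketch: snap each measurement to the nearest 6-month interval from the switch date and carry the last observation forward within each person. The column names are assumptions for illustration, not the cohort's actual variables.

    import pandas as pd

    def discretize_and_locf(df: pd.DataFrame) -> pd.DataFrame:
        # months_from_switch may be negative (prebaseline) or positive (postbaseline)
        df = df.copy()
        df["interval"] = (df["months_from_switch"] / 6).round() * 6
        df = df.sort_values(["patient_id", "interval"])
        value_cols = [c for c in df.columns
                      if c not in ("patient_id", "interval", "months_from_switch")]
        # Last observation carried forward within each person
        df[value_cols] = df.groupby("patient_id")[value_cols].ffill()
        return df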
Participant characteristics at baseline were compared using the Pearson χ² test for categorical variables or the Kruskal-Wallis test for continuous variables. The most recent weight, blood pressure, and laboratory values obtained within the 24 months prebaseline were used as baseline measurements. Mean weight change in kilograms per year prebaseline was calculated from a linear regression model fit to each individual, including weight measurements during a maximum period of 36 months prior to baseline. The main outcome was ≥10% gain in weight postbaseline compared to baseline. Weight change in those with ≥10% weight gain was modeled using mixed-effects linear regression, with a random intercept for individuals and a random slope for discrete time (modeled using restricted cubic splines with 5 knots). Mean weight changes were adjusted for baseline age, baseline weight, sex, and region of origin, and mean change at 24 months postbaseline was compared between groups using a 2-sample t test.
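A hedged sketch of such a trajectory model is shown below, using a random intercept per person, a random slope for time, and a restricted (natural) cubic spline for the fixed time effect. The authors used Stata/R; this Python analogue with patsy's cr() basis and 4 spline degrees of freedom only illustrates the model structure, not their exact specification, and the column names are placeholders.

    import statsmodels.formula.api as smf

    def fit_weight_trajectory(df):
        # df columns (illustrative): weight_change, months, age0, weight0,
        # sex, region, patient_id
        return smf.mixedlm(
            "weight_change ~ cr(months, df=4) + age0 + weight0 + sex + region",
            data=df,
            groups=df["patient_id"],
            re_formula="~months",   # random intercept + random slope for time
        ).fit()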
The discrete time to ≥10% weight gain postbaseline was modeled using a parametric survival model with Weibull survival distribution and robust variance estimation. The univariable hazard ratios and 95% confidence intervals (CIs) were calculated across levels of time-constant and time-updated covariates. All univariable analyses were adjusted for baseline weight. The multivariable model was built using backward elimination, including all variables associated with a P < .20 in univariable analyses and subsequently removing all those with a P ≥ .05, while forcing baseline body mass index (BMI), age, region of origin, and sex. Biologically plausible interactions between variables were also assessed. Interaction terms were only included in the final model if their addition resulted in better fit, based on the likelihood-ratio test.
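The survival component can likewise be sketched. Note that lifelines' Weibull fitter uses an accelerated-failure-time parameterization, so its coefficients are not the hazard ratios reported in this paper, and the robust variance estimation used by the authors is omitted here; column names are placeholders.

    from lifelines import WeibullAFTFitter

    def fit_weight_gain_model(df):
        # df: one row per person with 'time' (months to event or censoring),
        # 'event' (1 = reached >=10% gain), plus covariates such as
        # bmi0, age0, sex, region
        aft = WeibullAFTFitter()
        aft.fit(df, duration_col="time", event_col="event")
        return aft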
Last, the postbaseline changes in glucose, lipids, and blood pressure were determined in all included participants. Mean changes within 24 months postbaseline were modeled using linear regression with random intercept for individuals and random slope for discrete time, and a priori adjusted for baseline age; baseline weight; sex; region of origin; baseline glucose, lipids, or blood pressure; and use of antidiabetics, lipid-lowering agents, or antihypertensive agents at baseline and initiation or discontinuation of these medications postbaseline. Predicted values and standard errors were calculated and used to report mean change in glucose, lipid levels, and blood pressure at 24 months postbaseline with 95% CI. Mean changes at 24 months were compared between those with or without ≥10% weight gain using a 2-sample t test.
Statistical significance was defined as 2-sided P < .05. Statistical analyses were carried out using Stata/IC version 15.1 (StataCorp, College Station, Texas) and R version 4.1.1 (R Project for Statistical Computing, Vienna, Austria) software.
Description of the Study Population
A total of 6324 ART-experienced PWH without prior exposure to TAF and INSTI switched to a TAF- and/or INSTI-based regimen between May 2007 and November 2019, while having been virally suppressed for at least 12 months. Of those, 829 people were excluded because they used predefined comedications, had predefined comorbidities at the moment of switch, or developed such comorbidities after switch. Another 404 were censored prior to their first weight measurement after switch and were therefore also excluded (Supplementary Figure 1). The baseline characteristics per group are reported in Table 1. Overall, 84.3% were males, 71.8% originated from Western countries, median age was 49.3 years, and median BMI was 24.2 kg/m². The median time since HIV diagnosis and since start of first ART was 10.7 and 8.6 years, respectively.
Weight Gain of ≥10%
Weight gain of ≥10% within 24 months after switch occurred in 136 (8.8%), 279 (10.6%), and 132 (14.4%) of individuals switching to a regimen that included TAF, an INSTI, or TAF + INSTI, respectively. Median time to reaching a gain of ≥10% was 12 months (IQR, 6-18 months) in each of the 3 groups. Generally, in each group a higher proportion of females than males experienced ≥10% weight gain, which was similarly the case for females from Western or sub-Saharan African regions (Table 2).
Absolute Change in Weight in Those With ≥10% Weight Gain Postbaseline
As shown in Figure 1, the adjusted mean weight gain at 24 months postbaseline was +9. Supplementary Figure 2.
Determinants of ≥10% Weight Gain
Among all switch groups, females from Western or sub-Saharan African regions were at increased risk compared to males from Western regions. In addition, weight loss of ≥1 kg/year prebaseline (compared to ≥1 kg/year weight gain) and age ,40 years (compared to age ≥60 years) were each also associated with a significantly higher risk of ≥10% weight gain (Table 3; univariable analyses in Supplementary Tables 1-3). There were no significant interactions in any of the final multivariable models. In addition, among individuals switching to only TAF, being underweight at baseline (compared to normal weight) and having a higher baseline CD8 cell count were also associated with a higher risk of ≥10% weight gain.
Among individuals switching to only an INSTI, being a current smoker or starting smoking again were additional risk factors, as were discontinuation of EFV or lopinavir.
Among individuals switching to TAF + INSTI, discontinuation of EFV compared to discontinuation of atazanavir was again significantly associated with a higher risk of ≥10% weight gain. There was no association between any particular INSTI and weight increase of ≥10% in univariable analysis, either in the only INSTI group or the TAF + INSTI group.
In a sensitivity analysis, risk factors for ≥7% weight gain were largely similar to those identified for ≥10% weight gain (Supplementary Table 4), although change in smoking behavior in individuals switching to an INSTI-based regimen was no longer associated with ≥7% weight gain in the multivariable model. Discontinuation of abacavir (compared to discontinuing tenofovir disoproxil fumarate [TDF]) was associated with a higher risk of ≥7% weight gain in those switching to only TAF or only an INSTI, although CIs were wide. In individuals switching to TAF + INSTI, use of dolutegravir vs elvitegravir was associated with an increased risk of ≥7% weight gain.
In an additional sensitivity analysis, we excluded from our analyses 103 individuals with limited exposure of <90 days to TAF (n = 23) and/or INSTI (n = 92) prior to the switch date. We found no differences in the results as compared to the primary analyses.
Changes in Glucose, Lipids, and Blood Pressure
At 24 months after switching, minor increases in nonfasting glucose were observed across all 3 switch groups, being more pronounced when the regimen switch included TAF (Table 4). These increases were similar regardless of the degree of weight change, except for a statistically significantly greater rise in glucose in those who switched to only TAF without ≥10% weight gain (P = .043). [Table 1 notes: Kruskal-Wallis test; BMI was categorized as underweight (<18.5 kg/m²), normal weight (18.5-24.9 kg/m²), overweight (25.0-29.9 kg/m²), or obese (≥30.0 kg/m²).]
Significant increases in nonfasting total and LDL cholesterol as well as triglycerides for those with ≥10% weight gain were only observed when the switch included TAF, but not when switching to only INSTI. After switching to only INSTI, these lipids declined and significantly more so in those who did not gain ≥10% weight. Similar minor increases in systolic and diastolic blood pressure were seen, which, with the exception of diastolic blood pressure in those switching to combined TAF + INSTI, were all statistically significantly greater in those with ≥10% weight gain.
DISCUSSION
In this nationally representative observational cohort study of virally suppressed PWH in the Netherlands, as many as 1 in 10 individuals had ≥10% weight gain after switch to TAF and/or INSTI, and more frequently so in those switching to TAF and INSTI simultaneously. Both Western and sub-Saharan African females were at increased risk after switch to either TAF, INSTI, or TAF + INSTI. Finally, changes in metabolic parameters including blood pressure, albeit modest, were greater in those with than without ≥10% weight gain.
Our study is the first to report occurrence of ≥10% weight gain separately for PWH switching to only TAF, only an INSTI, and combined TAF + INSTI. Previous studies examining the effects on weight in virally suppressed PWH switching to TAF or an INSTI have reported similar overall percentages of ≥10% weight gain as observed in our study during a maximum of 24 months follow-up [10,20]. Weight gain of ≥10% was not associated with any particular INSTI in our study. However, the distribution of INSTIs switched to was different in the only INSTI group vs the combined TAF + INSTI group, making it difficult to generalize on the activity of any specific INSTI agent.
In line with results from a number of other studies [8,10,11,15], Western and sub-Saharan African females, those younger than 40 years, and those previously losing weight had a significantly higher risk of ≥10% weight gain after switch to either TAF, INSTI, or both. Moreover, those discontinuing EFV (which was the case for approximately one-third of participants starting an INSTI) were at increased risk of ≥10% weight gain after switch to INSTI or TAF + INSTI. This could potentially be driven by individuals with a slow metabolizer phenotype associated with certain CYP2B6 polymorphisms. As previously reported, such individuals are prone to lose weight on EFV and to gain weight after switching away from EFV [13,14]. The proportion of EFV slow metabolizers is known to be higher among individuals of African descent compared to Caucasians [13,21], but no correlation has been found between sex and slow metabolizer phenotype [13,22]. Of note, we did not observe an interaction between ethnicity or sex and discontinuation of EFV in our analyses. In our study we did not find switching from TDF to TAF to be independently associated with an increased risk of ≥10% weight gain, possibly because the large majority (>90%) who started TAF discontinued TDF. Whether weight gain is related to the removal of the weight-suppressive effect of TDF and/or to a weight-increasing effect of TAF remains unclear. The use of TDF as preexposure prophylaxis was associated with greater odds of 5% weight loss compared to placebo [23], whereas the use of TAF as preexposure prophylaxis was associated with a modest weight gain of 1.1 kg after 48 weeks [24]. This suggests that weight gain after switch to TAF may well reflect the composite of both of these independent features.
It remains to be clarified why females in particular are at increased risk of weight gain after start of TAF and/or INSTIs compared to males. Previous studies showed that females have higher plasma concentrations of both EFV [22,25] and TDF [26,27], which could cause more weight suppression and subsequently a more pronounced increase in weight when these compounds are discontinued. Higher plasma concentrations in females have also been described for dolutegravir [28] and raltegravir [29], but such data are lacking for other INSTIs. Proposed mechanisms by which INSTIs could increase weight include an effect on adipocyte differentiation, adiponectin, and the central melanocortin system [30,31].
Mechanisms resulting in higher concentrations of ARVs in females are unclear but could be caused by differences in absorption, distribution, metabolism, and elimination of ARVs, due to sex-specific differences in body composition, sex-related differences in activity of CYP450 enzymes, or lower renal clearance rates in females compared to males [32,33]. Females generally have a lower resting energy expenditure (REE, ie, the energy required to keep the body functioning at rest) compared to males [34][35][36]. The major factor determining REE is lean body mass, which may explain why females-who generally have a lower lean mass and higher fat mass-have lower REE [35][36][37]. If the effect of TAF and/or INSTIs on weight were to be driven by increased calorie intake, the lower REE in females could explain why weight gain in females would be more pronounced.
Although individuals with ≥10% weight gain in our study showed statistically significantly larger mean changes in blood pressure and lipids than those without ≥10% weight gain, changes were only modest. Similar observations have been reported in individuals with and without ≥10% weight gain commencing dolutegravir [8]. In line with our results, small increases in triglycerides, total cholesterol, and LDL cholesterol following switch to TAF-either with or without an INSTI and irrespective of weight change-have been reported previously [3,38]. The impact of such minor changes in metabolic parameters on the long-term risk of cardiovascular disease and diabetes remains to be determined.
(Table footnote: values represent mean changes with 95% confidence intervals at 24 months after switch, predicted using mixed-effects linear regression and adjusted for baseline age; baseline weight; sex; region of origin; baseline glucose, lipid, or blood pressure values; and use of antidiabetics, lipid-lowering agents, or antihypertensive agents at baseline and their initiation or discontinuation postbaseline. P values were calculated using a 2-sample t test.)

Our study has a number of strengths. First, the extensive data collection made it possible to analyze a large number of individuals while applying strict exclusion and censoring criteria. It also allowed us to separately address ≥10% weight gain in individuals with similar observation time and demographic characteristics who switched to only TAF, only an INSTI, or both simultaneously. To the best of our knowledge, this is the first study to do so.
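To make the adjusted longitudinal analysis described in the table footnote above concrete, here is a minimal sketch in Python's statsmodels; it is an illustration under assumed variable names and data layout, not the authors' code.

```python
# Minimal sketch of a mixed-effects linear model for a metabolic parameter
# (here total cholesterol) over time since switch, adjusted for baseline
# covariates, with a random intercept per participant for repeated measures.
# The CSV file and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort_visits.csv")  # long format: one row per visit

model = smf.mixedlm(
    "total_cholesterol ~ months_since_switch + baseline_age + baseline_weight"
    " + C(sex) + C(region_of_origin) + baseline_cholesterol"
    " + C(lipid_lowering_at_baseline) + C(lipid_lowering_postbaseline)",
    data=df,
    groups=df["participant_id"],  # random intercept per participant
)
result = model.fit()
# The months_since_switch coefficient, multiplied by 24, approximates the
# adjusted mean change at 24 months of the kind reported in the table.
print(result.summary())
```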
Our study also has several limitations. Weight was measured as part of routine clinical assessment rather than in a standardized manner and at standardized intervals, and we were unable to adjust for lifestyle changes. Finally, adjustment for changes in smoking and alcohol use was imperfect, given that time-updated data were missing for a majority of participants.
In conclusion, our nationwide representative cohort of virally suppressed PWH confirmed that a ≥10% gain in weight after switching to TAF and/or an INSTI is common, occurring in approximately 1 in 10 individuals. The pathophysiology underlying this degree of weight gain appears to be multifactorial, with age, sex, the particular drugs being initiated and those being switched away from, and genetic factors influencing drug exposure and metabolism all contributing. Studies that focus on individuals susceptible to or experiencing excessive weight gain, and that include detailed assessments of body composition and metabolism, should be undertaken to unravel its pathophysiology. Long-term follow-up will be required to assess cardiometabolic risk and the benefit of interventions aimed at reversing the weight gain and its metabolic consequences.
Supplementary Data
Supplementary materials are available at Open Forum Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.
Notes
Financial support. The AIDS Therapy Evaluation in the Netherlands (ATHENA) cohort is managed by the HIV Monitoring Foundation (Stichting HIV Monitoring) and supported by a grant from the Dutch Ministry of Health, Welfare and Sport through the Center for Infectious Disease Control of the National Institute for Public Health and the Environment (RIVM).
Potential conflicts of interest. F. W. N. M. W. has served on scientific advisory boards for ViiV Healthcare and Gilead Sciences. P. R. through his institution has received independent scientific grant support from Gilead Sciences, Janssen Pharmaceuticals, Merck & Co, and ViiV Healthcare; and has served on scientific advisory boards for Gilead Sciences, ViiV Healthcare, and Merck & Co, honoraria for which were all paid to his institution. M. v. d. V. through his institution has received independent scientific grant support and consultancy fees from AbbVie, Gilead Sciences, MSD, and ViiV Healthcare, for which honoraria were all paid to his institution. All other authors report no potential conflicts of interest.
"year": 2022,
"sha1": "b5dc985216e2d80ce342158931c2b5b06b54891d",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/ofid/advance-article-pdf/doi/10.1093/ofid/ofac291/44016170/ofac291.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7d6da406decf84ea9ce9341762ed620beab4b257",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Access to Novel Drugs for Non-Small Cell Lung Cancer in Central and Southeastern Europe: A Central European Cooperative Oncology Group Analysis
Abstract

Background. Treatment of non-small cell lung cancer (NSCLC) improved substantially in the last decades. Novel targeted and immune-oncologic drugs were introduced into routine treatment. Despite accelerated development and subsequent drug registrations by the European Medicines Agency (EMA), novel drugs for NSCLC are poorly accessible in Central and Eastern European (CEE) countries.

Materials and Methods. The Central European Cooperative Oncology Group conducted a survey among experts from 10 CEE countries to provide an overview of the availability of novel drugs for NSCLC and the time from registration to reimbursement decision in their countries.

Results. Although first-generation epidermal growth factor receptor tyrosine kinase inhibitors were reimbursed and available in all countries, for other registered therapies, even for ALK inhibitors and checkpoint inhibitors in the first line, there were apparent gaps in availability and/or reimbursement. There was a trend for better availability of drugs with longer time from EMA marketing authorization. Substantial differences in access to novel drugs among CEE countries were observed. In general, the availability of drugs is not in accordance with the Magnitude of Clinical Benefit Scale (MCBS), as defined by the European Society for Medical Oncology (ESMO). Time spans between drug registrations and national decisions on reimbursement vary greatly, from less than 3 months in one country to more than 1 year in the majority of countries.

Conclusion. Access to novel drugs for NSCLC in CEE countries is suboptimal. To enable access to the most effective compounds within the shortest possible time, reimbursement decisions should be faster and the ESMO MCBS should be incorporated into decision making.
INTRODUCTION
Lung cancer is the most frequent cause of cancer-related mortality worldwide, with high incidence and mortality rates in Central and Eastern Europe (CEE) [1]. Most patients are diagnosed with advanced disease, resulting in poor survival rates [2]. However, there is a trend toward better outcomes in developed countries, mostly because of improved systemic treatment strategies introduced at the beginning of this century [2-4].
Nowadays, treatment strategy in advanced non-small cell lung cancer (NSCLC) mainly depends on molecular markers. The discovery of oncogene drivers such as epidermal growth factor receptor (EGFR) mutations and ALK and ROS1 rearrangements paved the way to effective targeted therapies, whereas immunotherapy with checkpoint inhibitors (CPIs) became the standard treatment for the majority of patients with advanced NSCLC without oncogenic drivers [3,4].
Access to novel therapies is one of the major factors contributing to disparities in cancer care [5]. Limited drug availability remains a prominent aspect of cancer care in CEE countries, still struggling with both financial and organizational shortages [6]. The Central European Cooperative Oncology Group (CECOG) created a network of activities to improve quality of cancer care in the region. The most recent CECOG initiative consisted of two surveys on NSCLC. The first survey on molecular testing has recently been published [7]. The aim of the present survey was to investigate access to novel anticancer drugs for NSCLC and time from marketing authorization to national reimbursement.
MATERIALS AND METHODS
A panel of NSCLC experts from 10 CEE countries (Austria, Bulgaria, Croatia, Czech Republic, Hungary, Poland, Romania, Serbia, Slovenia, and Slovakia; each country represented by one expert) participated in the survey.

(Table 1 color key: registered and available for the majority of patients through governmental/private insurance; registered and available for only a minority of patients with special insurance/other access; registered but not yet available/reimbursed; not yet registered at data cutoff. Footnotes: a, refers to use of crizotinib in ALK+ NSCLC; b, refers to crizotinib in ROS1+ NSCLC.)
Novel drugs with European Medicines Agency (EMA) marketing approval (MA) for a particular indication and recommended by European Society for Medical Oncology (ESMO) guidelines [3] were included. In the majority of countries (9 out of 10, ie, the European Union [EU] members), the time from marketing approval was the same, as a result of EMA licensing. Only in Serbia was a national approval procedure still in place, with 5 out of 17 drugs lacking national MA at the time of the survey.
The obtained answers were further verified on the official websites of National Drug Agencies, National Insurance Houses, and Ministries of Health. The data lock was March 31, 2018.
Each drug was assigned to one of three categories: (a) the drug is registered and available to the majority of patients through established governmental or private insurance; (b) the drug is registered and available only to a minority of patients with special insurance or other access programs; or (c) the drug is registered by the EMA but neither reimbursed nor available in the country.
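As a minimal sketch of how this three-level coding could be represented for analysis, the enum below mirrors the categories in the text; the example drug-country entries are invented and purely illustrative.

```python
# Hedged sketch: encoding the survey's three availability categories.
# Labels follow the text; the example entries are invented.
from enum import Enum

class Availability(Enum):
    REIMBURSED_MAJORITY = "registered; available to the majority via governmental/private insurance"
    LIMITED_ACCESS = "registered; available only to a minority via special insurance/access programs"
    NOT_AVAILABLE = "registered by EMA; neither reimbursed nor available"

# Hypothetical (drug, country) -> category entries:
availability = {
    ("crizotinib", "Austria"): Availability.REIMBURSED_MAJORITY,
    ("crizotinib", "Serbia"): Availability.NOT_AVAILABLE,
}
print(availability[("crizotinib", "Serbia")].value)
```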
ESMO Magnitude of Clinical Benefit Scale (MCBS) scores available at the time of the survey [3,8] were included.
RESULTS
Major gaps and differences in the availability of novel anticancer drugs for NSCLC in the CEE region were recorded (Table 1), with the most profound lack of access observed in countries with lower levels of economic development, such as Serbia and Romania [7]. Although first-generation EGFR tyrosine kinase inhibitors (TKIs) were reimbursed and available in all countries, there were apparent gaps in access to ALK TKIs and CPIs in the first line. There was a trend for better availability of compounds with longer intervals from EMA MA to the survey. Notably, the availability of drugs was not in accordance with the ESMO MCBS: drugs with high scores, like crizotinib for ALK-positive disease or nivolumab (MCBS 4 and 5, respectively), were not available in a number of countries even 2 years after MA.
Time from MA to reimbursement ranged from <3 months in a small minority of countries to the >12 months needed for most novel drugs to obtain reimbursement in the vast majority of countries (Fig. 1). In Croatia and Serbia, the lag between registration and reimbursement exceeded 1 year for all drugs. Almost no reimbursement decisions for novel drugs were made within 3 months in any country except Austria, precluding rapid access to effective compounds with high ESMO MCBS scores.
DISCUSSION
Based on our survey, access to novel anticancer drugs for NSCLC in the CEE region is far from satisfactory. Notably, the vast majority of drugs approved by the EMA for 2 years or more and recommended by current ESMO treatment guidelines [3] were not available to CEE patients with NSCLC at the time of our survey. The major reason for poor availability seems to be the long lag between EMA or national MA and national reimbursement decisions, which is particularly worrisome for drugs with high ESMO MCBS scores [8]. Despite some recent optimistic reports of decreasing time intervals between EMA registration and national reimbursement decisions for anticancer drugs in Western and Northern European countries [9], our results are not in line with those encouraging data.
The first comprehensive analysis of the availability of anticancer drugs for major cancers in Europe was performed by ESMO in 2014 [5]. With novel and effective drugs entering the market, the proportion of nonreimbursed, and thus unavailable, novel drugs for NSCLC has even increased in some CEE countries, based on our observations. This is particularly worrisome for NSCLC, a major driver of cancer-related morbidity and mortality in the CEE region. It has been shown that economic disparities, differences in health care systems, and reimbursement decisions are the main reasons for inequalities in access to novel anticancer drugs across Europe [5,6,10]. The existing gaps are certainly due in part to disparities in gross domestic product (GDP), with CEE countries spending about 2.5 times less on anticancer drugs than Western European countries despite using a higher share of their GDP [10]. However, more funds do not seem to be the ultimate answer; to keep the system sustainable and to close the gap in access to novel anticancer drugs, a more rational, value-oriented uptake of novel drugs should be implemented.
CONCLUSION
With lung cancer representing a major burden in the CEE region, the data of the current survey indicate not only that time intervals between drug registrations at the EU level and reimbursement decisions at the national level should be shortened but also that value scores, like the ESMO MCBS, should be taken into account in order to enable patient access to the most effective compounds in the shortest possible time.
"year": 2019,
"sha1": "f3ba5c82d4ae2a57c7d3d4c1451936e671a83ad5",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1634/theoncologist.2019-0523",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "32c87972387a8b2fe2698fbad8a7525870d413e1",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |